AI models fed AI-generated data quickly spew nonsense
Researchers gave successive versions of a large language model information produced by previous generations of the AI, and observed rapid collapse.
From:
https://www.nature.com/articles/d41586-024-02420-7?WT.ec_id=NATURE-20240801

Not to dunk on this research, which I think is interesting and important, but if you've ever explored iterated function systems, discrete dynamical systems, fractals, or the like, this is a wholly unsurprising observation. The general pattern is that repeatedly iterating a function on an input tends to wash the input out: the result takes on qualities reflective of the function itself rather than of whatever you started with.
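The simplest toy version of this I can think of (my own snippet, nothing to do with the article) is iterating plain cosine: whatever number you start from, feeding the output back in over and over lands you at the same value, the so-called Dottie number, because the result comes to reflect the function rather than the input.

import math

# Iterate x -> cos(x) from several very different starting values.
# Every orbit converges to the same fixed point (~0.739085), since cosine
# is contractive near that point; the starting value is forgotten.
for x0 in (0.0, 2.0, -10.0, 123.456):
    x = x0
    for _ in range(100):   # feed the output back in as the next input
        x = math.cos(x)
    print(f"start {x0:8.3f} -> {x:.6f}")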
For instance, watch some of the videos on this page:
https://www.algorithm-archive.org/contents/barnsley/barnsley.html. In one set, you'll see a square with randomly-placed dots being squished down into various shapes. In another set, you'll see the Barnsley fern itself run through the same functions and squished down to roughly the same shapes. This is a general fixed-point property of this (and every contractive affine) system: any non-empty input set of points gets squished into the same shapes, and precisely the same fern image emerges no matter what input you start with, provided you iterate the process often enough (by iterate I mean feeding the output of the functions back in as input, as in the linked paper). This is an instance of the Banach fixed-point theorem applied to the space of images under the Hausdorff metric: any self-map that's contractive in that metric has a unique fixed point. In this case, the unique fixed point is the fern image; the map being iterated is a bit involved, but it's detailed on that linked page about the fern. The theorem tells us the fixed point depends only on the self-map, not on the input you feed in.
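To make that concrete, here's a rough NumPy sketch of the dots-in-a-square demo (my own toy code, not the code from the Algorithm Archive page): push a cloud of random points through the fern's four affine maps over and over, and two completely different starting clouds get squished onto the same attractor.

import numpy as np

# Standard Barnsley fern coefficients: four affine maps x -> A_k x + b_k,
# applied with weights w_k.
A = np.array([
    [[ 0.00,  0.00], [ 0.00, 0.16]],
    [[ 0.85,  0.04], [-0.04, 0.85]],
    [[ 0.20, -0.26], [ 0.23, 0.22]],
    [[-0.15,  0.28], [ 0.26, 0.24]],
])
b = np.array([[0.0, 0.0], [0.0, 1.6], [0.0, 1.6], [0.0, 0.44]])
w = np.array([0.01, 0.85, 0.07, 0.07])

def iterate_cloud(points, n_iter=60, seed=0):
    """Push every point through a randomly chosen affine map, n_iter times."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iter):
        idx = rng.choice(4, size=len(pts), p=w)               # pick one map per point
        pts = np.einsum('nij,nj->ni', A[idx], pts) + b[idx]   # x -> A x + b
    return pts

# Two wildly different starting clouds...
cloud1 = np.random.default_rng(1).uniform(-10, 10, size=(5000, 2))
cloud2 = np.random.default_rng(2).uniform(500, 600, size=(5000, 2))
# ...end up with nearly identical coarse statistics, because both have been
# squished onto the fern attractor, which depends only on the maps.
print(iterate_cloud(cloud1).mean(axis=0))
print(iterate_cloud(cloud2).mean(axis=0))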
Naturally, #GenerativeAI training and input-output procedures are considerably more complicated than affine functions, but the same class of fixed-point phenomena is almost surely at play, especially for the image-generating ones. Personally, I'd find it surprising and interesting if there weren't fixed-point theorems like this for #GenerativeAI systems trained on their own outputs.
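For flavour, here's a deliberately crude toy (emphatically not the paper's setup, just the simplest self-trained "generative model" I can write down): fit a Gaussian to data, sample a fresh dataset from the fit, refit, and repeat. Over many generations the fitted spread tends to drift toward zero, so the iteration heads toward a degenerate fixed point and the tails of the original distribution are the first thing to go.

import numpy as np

rng = np.random.default_rng(0)
N = 50  # a small training set per generation makes the drift visible quickly

# Generation 0: the "real" data.
samples = rng.normal(loc=5.0, scale=2.0, size=N)
mu, sigma = samples.mean(), samples.std()

for gen in range(1, 201):
    # Each generation is fit only to samples drawn from the previous generation.
    samples = rng.normal(loc=mu, scale=sigma, size=N)
    mu, sigma = samples.mean(), samples.std()
    if gen % 50 == 0:
        print(f"generation {gen:3d}: mean={mu:7.3f}  std={sigma:6.3f}")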
#ContractionMap #FixedPoint #Fractals #AI #GenAI