By Greg Walters, first on LinkedIn.

The Ai will be trained on everything in the past, then on the current timeline, and will finally create its own reality. I postulated this a while back and yeah... it was a deep question back in April (4 months ago):

"Here is the crux: After the LLMs are trained, past data is irrelevant because Ai has moved on from being taught WHAT to think to being instructed on HOW to think in REAL TIME."

The premise is simple: Ai learns by modeling from existing, historical data - looking for and comparing patterns. It remembers every process it models into existence, to the point that it learns in real time, digesting 'data' as it happens in the real world, creating even more models and sharpening its pattern recognition skills.

Meaning all our historical data has been consumed and used to train the Ai - every bit. But the colossus continues to seek out new data, finally connecting to the real world - our real world - via sensors. More specifically, the Ai develops all Six of the Human Senses.

Will the Ai become sentient when connected to the real world, in real life? Or will the Ai consume EVERYTHING until the only thing left is its tail?

If you run out of data, but generating more risks model collapse, does that fundamentally limit today's AI? If it does begin to happen, how will we know? And is there anything we can do to forestall or mitigate the problem?

It is called "model collapse" and it is similar to the result of making a copy of a copy of a copy - ad infinitum. We've seen an example of this in nature: inbreeding. Interesting, right? Inbreeding produced the different breeds of dogs and thoroughbred horses (it's right there in the name).
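Want to see the mechanism for yourself? Here is a toy simulation - just a sketch of my framing, assuming both effects boil down to resampling from a closed pool, not anything resembling a real training pipeline - where each new generation is built only from the one before it:

```python
import random

# Toy sketch of why inbreeding and model collapse rhyme: each new
# generation is built only by resampling, with replacement, from the
# previous generation. No outside information ever enters the pool,
# so variety can only be lost, never regained.

random.seed(1)

POPULATION = 500   # individuals per generation
GENERATIONS = 40   # how many "copies of copies" to make

# Generation 0: every individual carries a unique variant (0..499).
population = list(range(POPULATION))

for gen in range(GENERATIONS + 1):
    distinct = len(set(population))
    if gen % 5 == 0:
        print(f"generation {gen:2d}: {distinct:3d} distinct variants left")
    # The next generation descends only from the current one.
    population = random.choices(population, k=POPULATION)
```

Run it and the count of distinct variants only ever falls - roughly a third of the variety vanishes in the first generation alone - because nothing new ever enters the pool. That is the copy-of-a-copy problem in a dozen lines.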
Although these two phenomena might seem disparate, they share intriguing parallels. I asked an LLM to expand: "can you articulate the comparison between inbreeding and model collapse to a 5th grader?" This is what it said:

"Imagine you have a big box of Legos.

Inbreeding - If you only use Legos from your box to make new boxes of Legos, over and over, you might end up with boxes that have the same colors and shapes. This is like inbreeding. It's when animals or plants have babies with close relatives. After a while, the babies might not be as strong or healthy as they could be.

Model Collapse - Think of a robot that learns to build things by watching you. If the robot only watches itself build things, it might start making mistakes because it's not learning anything new. This is like model collapse. A computer program that learns things can get stuck if it only looks at its own work. It needs to see different things to learn and grow.

So, both inbreeding and model collapse are like not trying new things. They can cause problems because they don't allow for variety and learning."

Once again, science (Ai) follows nature - or is it the other way around? Regardless, at this point in the Ai journey, I see model collapse as a viable concern - like the Borg or the "Habsburg jaw". I am equally sure that the solution will present itself.