“We are Forerunners. Guardians of all that exists. The roots of the Galaxy have grown deep under our careful tending. Where there is life, the wisdom of our countless generations has saturated the soil. Our strength is a luminous sun, towards which all intelligence blossoms… And the impervious shelter, beneath which it has prospered.”

When Abominable Intelligence chokes on its own slop

by | Dec 12, 2025 | Philosophy | 4 comments

Most of us are familiar by now, whether we like it or not, with the concept of “AI slop”. This refers to the proliferation of low-quality and genuinely stupid content generated by Abominable Intelligence, whether in the form of text, images, sound, or video. The profusion of AI-generated content has now gotten to the point where AI models are training themselves on things generated by other AIs – rather like that old doggerel about how “big fleas have little fleas”, as it were.

Now, before anyone thinks I am about to launch into an anti-AI rant, let me make it clear that I like and use AI quite a bit in my work. It has been a fundamental game-changer in terms of productivity. The work I do involves a great deal of data analysis, interviewing, and stitching together of insights into various outputs – most especially, PowerPoint decks. (Words cannot express sufficiently how much I truly LOATHE PowerPoint. If I never, ever have to do another PPTX file ever again, I will be VERY happy. Indeed, my dream is to find an AI agent that can do all my PPTX files for me, so that I never have to dick around with icons, graphics, fonts, and alignments EVER again. At that point, I will simply f*** off and retire.)

Tools like ChadGippity have completely changed the way we work at my employer. Indeed, without boasting, I can honestly say that I was heavily involved in the drive to deploy ChatGPT as one of our core platforms. I have since gotten our people to adopt AI meeting assistants that automatically transcribe and compile meeting summaries and notes, and that too has been a massive boost to our productivity.

It is critical to remember that Artificial Intelligence is not intelligent, at all. It is merely software. It is HUGELY IMPRESSIVE software, to be sure. But it is JUST SOFTWARE. And, like all software, when used properly and correctly, it radically improves one’s overall workflow, productivity, and peace of mind. (This is even true of PowerPoint, much as it pains me to admit that. Most people – myself included – misuse the application horribly, which is why we are now awash with so many shitty PowerPoints, and so many overpriced MBAs who think being able to craft pretty slides is a useful skill.)

It is the “used properly and correctly” part that is critical to remember.

When used improperly and incorrectly, without regard to what goes into the sausage machine that is your typical GPT LLM, you will inevitably get complete crap coming out the other end.

This is where the concept of “AI Model Collapse” comes in.

The idea is pretty simple. LLMs work by training on colossal datasets to refine their parameters, such that they produce the probabilistically most likely answer to a given question. The mathematics behind it is elegant and powerful, rooted in (among other things) optimisation theory and tensor calculus, which allow for massively parallel solutions to large-scale problems of finding local minima and maxima under specific constraints.
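To make the "finding local minima" part concrete, here is a toy sketch of the core optimisation idea – gradient descent walking downhill on a loss surface. This is my own one-parameter illustration, not anything from a real LLM, which runs the same idea over billions of parameters at once:

```python
# Toy illustration of the optimisation at the heart of LLM training:
# gradient descent steps downhill on a loss surface towards a local minimum.

def loss(w):
    # A simple bowl-shaped loss with its minimum at w = 3.0
    return (w - 3.0) ** 2

def grad(w):
    # Analytic derivative of the loss above
    return 2.0 * (w - 3.0)

w = 0.0      # start from an arbitrary parameter value
lr = 0.1     # learning rate (step size)
for _ in range(100):
    w -= lr * grad(w)   # step against the gradient

print(round(w, 4))   # converges towards 3.0
```

Real training replaces the single number `w` with billions of weights and the toy bowl with a loss measured over the whole training corpus, but the downhill walk is the same.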

When you ask ChatGPT a question, it draws on its training dataset to assemble, word by word, the sequence that is probabilistically the most likely response to the question you have asked.
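You can see the "most likely next word" idea in miniature with a toy bigram model – a table of which word tends to follow which. This is a deliberately crude stand-in of my own devising, not how a transformer actually works internally, but the principle of picking the probabilistically most likely continuation is the same:

```python
# Toy "language model": a bigram table of next-word counts.
# Given the current word, it emits the most frequent successor -
# the same pick-the-likeliest idea an LLM applies over a vast vocabulary.
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for cur, nxt in zip(corpus, corpus[1:]):
    counts[cur][nxt] += 1

def most_likely_next(word):
    followers = counts[word]
    return max(followers, key=followers.get)

print(most_likely_next("the"))  # "cat" - it follows "the" most often here
```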

The genius of the LLM approach is that you get an answer in plain language, rather than numbers and equations. This is what makes the LLM so powerful for the average person to use. As that old paedophile, Arthur C. Clarke, once said, “any sufficiently advanced technology is indistinguishable from magic”, and from the perspective of the average human being, AI operates pretty much like magic.

It isn’t magic, of course. It is practical and beautiful mathematics that produces an amazing machine. But, like any machine, if you put garbage in one end, you get garbage out the other.

This is where the “AI Slop” problem comes in.

Keep in mind what I said above about how AI models look for the probabilistically most likely answer to a given question, using their training data. Their responses use probability distributions to narrow the potential range of answers down to the zones with the highest likelihood of being accurate.

This, by definition, means narrowing the possible range of outputs down to precisely the ones that are most likely to be correct.
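That narrowing is measurable. If you keep only a distribution's most likely outcomes and renormalise, its variety – its Shannon entropy – strictly drops. A small sketch (the numbers are made up by me for illustration):

```python
# Sketch: restricting a distribution to its most likely outcomes
# measurably reduces its variety (Shannon entropy, in bits).
import math

def entropy(p):
    return -sum(x * math.log2(x) for x in p if x > 0)

full = [0.4, 0.3, 0.2, 0.1]        # a model's full output distribution

# Keep only the top 2 outcomes and renormalise - "narrowing down"
top = sorted(full, reverse=True)[:2]
narrow = [x / sum(top) for x in top]

print(entropy(full))    # ~1.85 bits
print(entropy(narrow))  # ~0.99 bits - strictly less varied
```

Do that narrowing once and you get a sharper, more useful answer. Do it recursively, generation after generation, and the variety never comes back.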

What happens when the training datasets are all themselves AI generated? In other words, what happens when the stuff going into the machine, is generated by other machines?

The answer is conceptually quite straightforward:

Your LLM has to restrict itself to ever narrower datasets to generate an answer. If it takes in data generated by an AI, those data are “curated” to sit within specific boundaries that lack the full range of variability and randomness we observe in nature and among humans. If another LLM then takes the dataset you just generated as output and uses it as input, that second LLM is operating off an even narrower range of possible answers.
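This shrinking-variety effect can be simulated directly. Suppose each "generation" of a model learns only the mean and spread of its predecessor's output, then samples from that fit for the next generation to train on. This is a standard toy demonstration of the mechanism, written by me, not the actual training pipeline of any real model:

```python
# Toy model-collapse simulation: each generation fits a normal
# distribution to a small sample drawn from the previous generation's
# fit, then the next generation trains on ITS samples, and so on.
# The fitted spread drifts towards zero - variability dies out.
import random
import statistics

random.seed(42)

mean, stdev = 0.0, 1.0      # "generation zero": rich, varied human data
history = [stdev]

for generation in range(300):
    # Draw a small training set from the current model...
    samples = [random.gauss(mean, stdev) for _ in range(10)]
    # ...and fit the next generation to those samples alone.
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    history.append(stdev)

print(history[0], history[-1])   # spread collapses over the generations
```

Each refit throws away a little of the tails, and the losses compound: the distribution contracts towards a single point, which is exactly the blandness and repetition people complain about in AI slop.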

Ultimately, this means that the AI which once produced amazingly human-like answers increasingly starts acting like an overconfident moron that flat-out MAKES SHIT UP. And the issue gets worse as you apply AI to ever more complex tasks.

If you recursively train a text-generating LLM on a bunch of AI-generated input data, the LLM that once could imitate Shakespeare to a very high degree will eventually end up writing like a dyslexic five-year-old with ADHD. If you do the same with an image generator, the results will look like nightmare-fuel straight out of a Bosch painting done by a drunk crack addict.

This is the phenomenon of “model collapse”. It is not a new concept. It has been around for a couple of years, at least. But the issue of AI content proliferating across the internet has gotten so severe that we are now staring down the barrel of a very serious potential problem.

Now, you may very well shrug your shoulders and say, “So what?” This is the wrong attitude. We can already see AI content proliferating everywhere. The marginal cost of AI has dropped to the point where content creators can manufacture videos using Sora and Nano Banana that look nearly indistinguishable from the real thing.

All you have to do is go on EWCHOOB and look for videos of dogs playing with toddlers. For the moment, you can tell that many of them are AI-generated junk, because the toddlers are yelling at the dogs in perfect English. But they will not stay so easily distinguishable for very long.

Already, it is very difficult to tell real Instathots from fake ones – believe me, I know the struggle involved. Soon, it will become nearly impossible, because the realism of the AI-thots will be so great as to displace the real ones from the platform.

But… the basement-dwelling nerds who come up with the AI-thots are using real women to generate their artificial constructs. What happens when there are hardly any real women left on the platform? What kinds of monstrosities will emerge then?

And that is before we get to the use of AI in films and songs. We are now at the point where AI could easily replace your average Hollyweird screenwriter, director, camera crew, and video editing studio, for relative peanuts. We already are at the point where you can create your own video avatar, that looks, talks, and sounds like you – complete with your own facial tics and verbal idiosyncrasies. Imagine that on the scale of a major Hollyweird movie for 2 hours – basically, your typical Michael Bay film on steroids and HGH – and you will get an idea of what awaits us in the future.

But what happens when it is ALL AI?

At that point, the models themselves collapse, and we face a potential future where people are so used to dealing with the virtual and the artificial, that they will have no idea how to handle the real any more. We are already approaching that point. We will soon get to the point where we simply have no clue what is real and what is AI-generated, and we will be so used to AI slop telling us what to think, that we will not have a clue how to get to the reality and the facts.

This has already had a real-world impact. Deloitte had to refund part of its fee on an Australian government consulting project because they used SkyNet in producing the report, and the LLM straight-up CONFABULATED sources that did not exist, and had never existed. It quoted those sources back to the people writing the report as if they were real, and the consultants involved did not bother to check, because they were in a hurry. (Consulting projects are real pressure-cookers, particularly when working for the Big 3 or Big 4. Ask me how I know this.)

None of this is to say that we should NOT use AI. These tools can be used for great good AND great evil. As the meme up top points out, there are plenty of high-IQ types who will use AI to make their lives substantially better through vast productivity improvements. I know, because I have seen it myself. Tasks that used to take me hours at work, now take me seconds. This frees up huge amounts of my time to take on extra tasks, or to do other things that interest me personally, rather than wasting my time on doing the same stupid nonsense over and over again.

The productivity gains are real. But so are the risks. We MUST learn how to distinguish between what is real, and what is the confabulated fever-dream of the infernal machine.




4 Comments

  1. NIdahoOrthodox

    On the anniversary of the loss of the Edmund Fitzgerald I watched several documentaries about it. One of them was narrated by what appeared to be an actual human. But, some things weren’t quite right. The voice sounded just like so many AI narrated YouTubes, the facial expressions while speaking were exaggerated, and occasionally the voice and lips weren’t synced exactly. Other than that it looked exactly like a real person.

  2. lynch

    The use of AI in coding is terrifying as well. Model collapse there will be brutal, but even now you have people using LLMs to write code that they don’t understand at all. It’s Stack Exchange on LSD and PCP.

    • Didact

      Indeed. “Vibe code” is useful only for prototyping. It is the “Record Macro” button in MS Excel, writ large using Python and other languages. Anyone who actually knows how to code production-grade software, knows full well that the stuff coming out of ChadGippity is not fit for purpose for real-world applications. It is only good for producing something that kind of sort of works in a test environment under very specific conditions.

  3. Dire Badger

    My biggest problem is that when I ask an LLM a simple question about, say, my chapter structure, before I realize it I have wasted 6 hours.

    That, and it lies CONSTANTLY, getting basic facts wrong.

    I consider it an addictive substance.

