AI Slop is Drowning the Internet

Kurzgesagt

Kurzgesagt (German for "in a nutshell" or, literally, "briefly said") is an organisation that produces animated educational videos to make scientific, philosophical, political and psychological content more understandable. It started out as a passion project by founder Philipp Dettmer in 2013 with the launch of their YouTube channel, which is now one of the most popular educational channels on the platform.

It is noteworthy that the paths of Kurzgesagt and My-thesis have become aligned as we both confront the threat of an AI-dominated educational space. For example, their recent video "AI Slop Is Destroying The Internet" explains how the internet is increasingly flooded with low-effort, AI-generated content designed purely to capture attention. Because attention is the currency of the online economy, bots now generate fake reviews, fake traffic, synthetic news sites, AI-written books, AI music, and entire YouTube channels that produce multiple long-form videos per week with AI voices, scripts, and thumbnails. Around half of all internet traffic is already automated, and much of it is destructive.

As Kurzgesagt point out, the deeper concern is not just aesthetic decline — it is epistemic collapse. Generative AI is trained on vast quantities of human creative work, often without consent or compensation. But beyond issues of creative theft lies a more corrosive risk: AI makes it increasingly difficult to determine what is true.

To demonstrate this, Kurzgesagt describe their own rigorous research process: scripts are built from primary sources, peer-reviewed papers, and expert consultation. Fact-checking alone takes around 100 hours per video. When large language models appeared, they seemed like a powerful research assistant. Initially, outputs looked impressive — comprehensive outlines, structured summaries, plausible references.

But deeper scrutiny revealed a major flaw: while roughly 80% of the information was accurate and traceable, the remaining 20% contained fabricated or extrapolated “facts.” These weren’t obvious errors — they were plausible, detailed, and confidently presented inventions. Even domain experts flagged the same suspicious claims.

This illustrates a systemic problem: AI models optimize for coherence and user satisfaction, not truth. They “hallucinate” to fill gaps, much like a journalist embellishing a story to make it more compelling. Worse, some of the AI’s cited sources turned out to be AI-generated articles themselves. With over a thousand confirmed AI-run news sites publishing misinformation, AI can end up citing other AI outputs, forming a self-reinforcing loop.

The danger is recursive contamination. Once fabricated information is published — for example, in a viral YouTube video — it becomes a legitimate-looking source. The next AI model trained on that data treats it as evidence. Misinformation becomes self-validating. Over time, distinguishing original knowledge from synthetic fabrication may become nearly impossible.
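This feedback loop can be made concrete with a toy model (every number in it is an illustrative assumption, not a measurement): suppose each model "generation" is trained on a corpus in which some share of content has been replaced by earlier model output, and that output reproduces existing fabrications while inventing new ones at a fixed rate.

```python
def contamination(generations=5, p=0.5, e=0.2, f0=0.01):
    """Toy model of recursive contamination (all parameters hypothetical).

    f  : fabricated fraction of the training corpus
    p  : share of the corpus replaced by model output each generation
    e  : rate at which the model invents new fabrications on truthful text
    f0 : starting fabricated fraction

    Each generation, model output carries over existing fabrications (f)
    and adds fresh ones on the remainder ((1 - f) * e), so the corpus
    update is f -> f + p * e * (1 - f), which climbs steadily towards 1.
    """
    f = f0
    history = [f]
    for _ in range(generations):
        f = f + p * e * (1 - f)
        history.append(round(f, 3))
    return history

print(contamination())
```

Under these made-up parameters the fabricated fraction roughly compounds each generation rather than staying flat, which is the "self-validating misinformation" dynamic described above: once synthetic fabrications enter the corpus, later models treat them as evidence.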

Take peer-reviewed research. One study analysing the language used in scientific papers shows measurable linguistic shifts towards more sensationalist terms following the rise of large language models, suggesting widespread unacknowledged AI assistance. Some researchers have even embedded hidden prompts in white text to manipulate AI reviewers into giving favourable evaluations. As careless AI use spreads, the reliability of the “library of human knowledge” deteriorates.

The central problem is that AI appears trustworthy. It is fluent, confident, and often correct enough to inspire confidence — yet it lies subtly and repeatedly. There is no understanding behind the words, only probabilistic pattern completion. Despite this, we increasingly allow AI systems to contribute to knowledge production.

Kurzgesagt argue that AI should remain a tool — like an alignment feature in design software — not a replacement for human creativity and judgment. They commit to maintaining human research, expert consultation, and creative integrity, even if it is slower and more expensive.

The final message is pragmatic: in an attention economy dominated by cheap, automated content, sustaining high-quality human work requires deliberate support. The future of trustworthy knowledge may depend on whether audiences value and fund it. At My-thesis, we are working with universities, colleges and other higher education institutions to ensure that human content and original research are protected from AI slop!