The audience of this post is the algorithm itself
Is AI-generated content truly for humans, or is it feeding an increasingly automated internet?
The explosion and commodification of generative AI tools have led, among other things, to an overproduction of machine-made content, primarily images. AI tools can generate images and videos almost instantly, requiring nothing more than a few lines of text prompt from a human. The machine performs the magic, bypassing human labor and reducing the time needed to create such content to nearly zero.
There is considerable discussion about the implications of this method of producing content. Is it good in principle? Is it good enough by human standards? And above all: what will remain of the arts as computers and algorithms get ever faster at doing the work? These are all valid and pressing questions, along with concerns about copyright and attribution – issues that do not lend themselves to simple, straightforward solutions.
However, recent developments in this area have also raised new questions – those linked to the nature of the internet and the inhabitants of its most densely populated spaces. Who is the real audience for AI-generated content? Is it truly being created for humans, or for another technological entity? While this may sound like a purely philosophical question, there are indeed reasons to believe that humans may not be the primary recipients of this content.
Discussions about the dehumanization of the Internet have been ongoing for several years, ever since bots and other fake, automated digital entities that mimic human behavior and communication began to populate and animate social media spaces. Several studies published over the past decade have suggested that bots are responsible for nearly half of global Internet traffic. This means that potentially half of what we see and interact with online is neither created nor distributed by a human.
These figures also fueled the rise of the “Dead Internet Theory,” a conspiracy theory based on the premise that bots have taken over the Internet, effectively “killing” it and leaving it entirely in the hands of artificial actors whose goal is to stifle human creativity and human relationships. The theory gained momentum around 2016 and 2017, long before the widespread adoption of generative artificial intelligence, large language models, ChatGPT, and Midjourney. Following the explosive success of genAI in 2022, the Dead Internet Theory, while still conspiratorial, is enjoying a new wave of popularity as AI-generated content floods the internet, distributed by automated algorithms and consumed, in large part, by other algorithms. Enter “AI slop,” one of the new aesthetics of the digital sphere: synthetic media created by commercial generative AI tools, reflecting their style, shapes, and values, and produced for consumption by other algorithms rather than necessarily by humans.
As Arwa Mahdawi wrote in The Guardian, AI slop can be considered the “advanced iteration of Internet spam: low-quality text, videos, and images generated by AI” and is increasingly being used to feed social media algorithms, generate engagement, and ultimately monetize it. In fact, the mass production of AI slop is not just a byproduct of technological advancement; it’s a business model in itself. On social media platforms, where engagement metrics dictate visibility and profit, low-quality AI-generated content floods timelines, engineered to capture attention and win algorithmic favor. In the attention economy, where clicks drive success, even the most meaningless interaction can be monetized, turning synthetic media into a lucrative commodity. Operators exploit the system by churning out vast quantities of AI slop, weaponizing engagement to drive advertising revenue and boost platform rankings. Spammy, AI-driven social media pages employ clickbait strategies to lure users into visiting external content farms and low-quality websites. In this landscape, the value of content is irrelevant; what matters is its ability to trigger a reaction within an increasingly automated attention market.
Photo by Anabela Pinto
AI slop can take many forms and serve many different masters. The X account Insane Facebook AI Slop (@FacebookAIslop) has been collecting examples for a few years now. Scrolling through the account is like visiting a virtual museum of uncanniness and absurdity, or watching an abandoned theme park built from the bad taste of corporate slides. A few recent examples include a fake picture of Elon Musk’s family in a gothic cartoon style, Vladimir Putin ice-skating, a cute young deer cuddling with a rabbit in the snow, and a fighter jet helping a whale remove barnacles from its skin. The list is potentially endless, mixing random weirdness with political propaganda and content that could have come from a dystopian Pinterest vibes board. The possibilities of AI slop are as vast as the space of generative AI prompts. So what is this content primarily used for?
In the technology news outlet 404 Media, Jason Koebler wrote, “Large parts of the SEO industry have pivoted entirely to AI-generated content, as has some of the internet advertising industry. They are using generative AI to brute force the internet, and it is working.” In information security, a brute-force attack is a cyberattack that relies on rapid trial and error to guess a password: by automating the process, attackers can eventually discover the correct password through constant, quick attempts. According to Koebler, AI slop works as a brute-force attack on the algorithms that control the distribution of content online. It floods them with low-quality, AI-generated material and with the engagement of humans doomscrolling through it, satisfying the machine’s appetite for constant clicks, shares, and comments in order to go viral and generate advertising revenue. In this equation, humans are not central but marginal players in a process that could easily be – and probably soon will be – fully automated.
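For readers unfamiliar with the security term, the trial-and-error logic of a brute-force attack can be sketched in a few lines of Python. This is a toy illustration of exhaustive guessing, not anything specific to AI slop or to any real system:

```python
import itertools
import string


def brute_force(target, alphabet=string.ascii_lowercase, max_len=4):
    """Guess `target` by trying every string of letters up to `max_len`.

    Mirrors the brute-force idea: no insight, just exhaustive,
    automated attempts until one of them happens to match.
    """
    for length in range(1, max_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            guess = "".join(combo)
            if guess == target:
                return guess
    return None  # search space exhausted without a match


print(brute_force("cat"))  # found after cycling through shorter guesses
print(brute_force("zzzzz"))  # None: longer than max_len, never reached
```

The analogy to slop is the sheer volume: neither the attacker nor the slop operator needs any single attempt to be good, only for the automated flood to eventually hit something that works.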
Photo by Anabela Pinto
The brute-force attack certainly works. As researchers at the Stanford Internet Observatory and Georgetown University’s Center for Security and Emerging Technology noted in a recent preprint paper, the Facebook algorithm rewards slop-spamming pages by promoting their content to users who don’t even follow them. The problem with AI slop is twofold: not only is there an increasing amount of it, but defining it is difficult. Some AI slop is utterly pointless, created purely for spamming; some is AI-powered shitposting and memeification; some is dubious political propaganda; some could even be an interesting experiment in AI-powered visual art. Yet the consequences for the Internet as an ecosystem remain the same. We are moving in a direction where humans are less and less a part of the online conversation, becoming mere spectators while algorithms create the content, distribute it, and enjoy the show. “The audience of AI slop is not human beings, the audience is the algorithm itself,” said Jason Koebler at a SXSW panel in April.
The explosion of AI slop and the accelerating dehumanization of the internet come as tech leaders are openly discussing systemic changes in how they view social media. While testifying during Meta’s antitrust trial in April, Mark Zuckerberg essentially argued that platforms like his have changed and are no longer what they once were. He openly admitted that his platforms have shifted away from interpersonal communication toward a more traditional role as content distribution networks aimed at the broadest possible audience, a shift that includes information, entertainment, and, of course, AI slop. We go to social media for the broadcasting, not the networking, Zuckerberg essentially argued, according to The New Yorker’s account of the hearings.
Under these changing dynamics, AI slop can be framed as part of this shift – a part that is increasingly weaponized for attention, monetization, and even more sinister purposes. When users sit back and let algorithms handle the networking for them, the social component of what we used to call the internet gradually disappears, and this may be the first visible evidence of AI’s impact on human behavior, at least online. More broadly, could it also be a sign of a more autonomous internet, one that revolves entirely around non-human actors and agents?