There’s something uncanny going on with Facebook lately: posts from friends, family and groups are being quickly outnumbered by images of Jesus, newborn babies and smiling families with too many fingers, each with a string of near-identical comments underneath. At the same time, every tech company is rolling out AI tools on its website or app. You can use ChatGPT for anything, even writing your university essays or composing your wedding vows. You keep seeing the same tweet, some variation of “I hate texting”, showing up on your timeline with tens of thousands of likes. Something is wrong.
This unnerving feeling is the basis of a conspiracy theory that has risen to prominence in the last month: the dead internet. Its disciples claim that the vast majority of web content and activity is not “real”, but has been automatically generated using AI or bots. That is, the internet of authentic, human users has been almost entirely displaced by AI-generated art, stories and blog posts, funnelled into echo chambers where chatbots comment “Amen” or “Amazing picture” hundreds of times. Moreover, these theorists claim that this artificial content is being intentionally put out by bad actors to manipulate the public. The theory isn’t entirely accurate; there are still plenty of real users on the internet. But scholars say it offers a framework through which to view some of the most pressing issues shaping the future of the web. Basically, the internet’s death may not yet be literal, but it is a pervasive feeling.
The web has not always been this way. Since it was first introduced, it has gone through multiple stages of development. The first web, known as Web 1.0, was “read only”, with static web pages making up the majority of the internet. Web 2.0, ushered in around the mid-2000s, describes the internet’s social or participatory dimension. Static pages were replaced by platforms like Myspace, Facebook and YouTube, which encouraged and enabled users to interact and upload their own content. It was in some ways utopian: a space in which people could share and consume information and stories, unencumbered by borders.
But there were always cracks in this facade. As the volume of online content grew and grew, the increasing sophistication and opacity of algorithmic recommendation systems presented opportunities for bad actors to exploit. Gamergate, the 2014 mass harassment campaign waged against female game developers and journalists, was amplified by bot accounts that could sustain its outrage for weeks or months on end. Russian web brigades, state-sponsored trolling operations, have also been blamed for large-scale misinformation campaigns and interference in US elections. These incidents have only accelerated in the past decade. The average Google search now conjures pages upon pages of asinine blog content created using AI tools, forcing users to add “reddit” to the end of every search term to find traces of real people having real conversations.
The development and ubiquity of large language models, most infamously ChatGPT, have accelerated this issue. ChatGPT works by taking unfathomable amounts of linguistic data, found in the sentences and paragraphs of existing websites and publications, and combing it for patterns, using an enormous amount of energy in the process. And, because human language is built on recognisable and replicable patterns, it becomes increasingly sophisticated at assembling outputs that are statistically likely to be coherent answers relevant to the question it has been asked.
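To make the idea concrete, here is a deliberately toy sketch of pattern-based next-word prediction. It is not how ChatGPT actually works internally (real models use neural networks trained on vastly more data, not raw counts), and the tiny corpus is invented for illustration, but it captures the core move: guess the most probable continuation from patterns already seen.

```python
from collections import Counter, defaultdict

# Toy illustration only: predict the next word by counting observed patterns.
# Large language models do this with neural networks and billions of examples,
# but the underlying idea -- pick the statistically likely continuation -- is the same.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word tends to follow each pair of words (a simple trigram model).
following = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    following[(a, b)][c] += 1

def predict_next(a, b):
    """Return the most frequently observed continuation of the pair (a, b)."""
    candidates = following.get((a, b))
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat", "on"))  # -> 'the', the most common observed pattern
```

The model never “knows” anything; it only echoes whatever patterns dominate its training data, which is part of why the output skews towards what is already common.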
The problem with such tools is that they can be used to produce large amounts of natural-sounding text almost instantaneously, generating millions of blog posts and comments each day. After all, why pay a copywriter for two hours of their time when ChatGPT can write your company website, 50 blog posts and your next year’s worth of Instagram captions within the hour? And because the language produced by these tools sounds so natural, it becomes far harder for platforms’ automated detection methods to pick up on artificial activity.
When detection fails, enormous amounts of ChatGPT-generated content bypass content moderation tools and go on to shape the kind of information we are presented with when we use the internet. The internet’s bots may not be consciously controlled by bad actors, but these tools do have an ideology, and it is inherently conservative. They can only create things that, in some sense, have already been created, and replicate patterns entrenched deeply enough to be noticed among tens of millions of data points. They comment “Amen” and generate pictures of Jesus and American flags because these are the things that are already popular. They reinforce the status quo because that is all they are capable of doing.
On a broader level, the ‘death’ of the internet has wide-ranging implications for platforms. Their business models rely on a consistent stream of free, original content from users: posts, comments and discussions that engage others and generally make the platform attractive. Platforms pay their bills by placing advertisements before or between these posts, with advertisers paying based on the number of impressions their content will receive. A rise in user accounts and activity doesn’t at first ring any alarm bells, especially if those users appear to be creating high-quality, engaging posts. But the “death” of the internet threatens to destabilise this model in two key ways. First, AI-generated content inherently lacks originality and creativity, and in most cases cannot (yet) ape them convincingly. The result is the kind of repetitive, uncanny content that leaves real users bored or repulsed. Second, the existence of millions of fake accounts, and therefore millions of potential fake impressions, drives down the value of each impression to advertisers: will they keep allocating large portions of their marketing budgets to platforms when their ads are being shown to bots with no identities, bank accounts, or capacity to make a purchase?
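A back-of-the-envelope illustration of that second point, using made-up numbers rather than any platform’s actual figures: if a growing share of impressions comes from bots, the advertiser’s effective cost per human view rises even though the headline price never changes.

```python
# Hypothetical figures only: how bot traffic dilutes the value of ad impressions.
cpm = 5.00              # assumed price an advertiser pays per 1,000 impressions
impressions = 1_000_000  # assumed impressions bought in a campaign

for bot_share in (0.0, 0.2, 0.5):
    human_impressions = impressions * (1 - bot_share)
    spend = cpm * impressions / 1000
    effective_cpm = spend / (human_impressions / 1000)
    print(f"bot share {bot_share:.0%}: effective cost per 1,000 human views = ${effective_cpm:.2f}")
```

With half the audience made of bots, the advertiser is effectively paying double for every real pair of eyes, which is exactly the kind of arithmetic that eventually forces platforms to act.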
This is, of course, not to frame advertisers and platforms as the victims of this state of affairs. If it starts to hit their bottom lines, platforms will inevitably find new ways of cracking down on artificial traffic that exploit the end user in novel ways. Perhaps we will see more widespread adoption of a Netflix-style subscription model. In October last year, Elon Musk announced his unorthodox plan to tackle the influx of bots and fake content on X (formerly Twitter) by introducing new subscription tiers. While users can already pay an $8-a-month premium fee for features like verification, the changes proposed by Musk would see new users charged $1 per year under the so-called “Not A Bot” program to access the basic features of the app.
Or perhaps they will demand ever more data to create or verify an account, from bank account details to birth certificates to biometrics like fingerprints and blood type, all in an effort to prove the humanity of each new user. That, in turn, would let platforms craft even more detailed user profiles and sell the information on to advertisers.
Neither conclusion brings much solace. Platforms, white-knuckled, will cling to their hold on the majority of web traffic in whatever ways they can. And while the internet is not quite dead yet, for now we may continue to wander into its strangest corners, look around, and realise that we’re quite alone.