If I get told one more time that I must use AI or I will fall behind, I think I might lose my mind. It seems like every article, every video, every person that wants to talk about technology wants to tell you why you absolutely have to be using AI because if you don’t you will fall behind.
I’ve never had so many people concerned that I might fall behind. Admittedly, very few of these people seem to actually know me.
The excuse often given for this concern is that AI is a tool. Just as we learnt to use pencils, or laptops, we must now learn to use generative AI. When so many of these tools mediate our existence, these claims are understandable, but they are also a deceptive description.
While they’ll tell you that it’ll change your life, that it’ll change the way you work, that it’s revolutionary, in all the hype the harm gets obscured. It’s easy to forget about the tangible consequences that generative AI can facilitate, from the rapid generation of misinformation to the creation of deepfake porn, and a variety of terrifying stops in between.
As more of our lives are lived online, these messages are becoming harder to avoid. But the conversations that we have about technology matter. In an essay for Overland, digital rights advocate Samantha Floreani writes about how manipulative debate tactics and reductive rhetorical devices are harming our digital rights, and the conversations that we are having about them.
Floreani explains that “The danger of using polarising language and tactics in a rights context is that it forces people to make false choices. It was never privacy or public health, just as it’s also not privacy or safety.”
When we frame AI as either a tool for upskilling for the future or a missed opportunity that will leave you behind your peers, we oversimplify the choices available. This tends to mean that we skip the preliminary question altogether — do we even want to use the technology?
Floreani goes on to explain that “Technology—the devices, the industry, the political ideology underpinning it, and the policy and legislative agendas around it—present critical tensions for rights and freedoms in the digital age which stand to impact all of us.”
We need to be able to discuss new technologies critically. To be able to question whether we really want them in society, whether we are ready to accept the consequences of their usage, and to decide how they should be regulated. Regulating developing and emerging technologies is already complicated enough without having vast portions of the population misunderstanding, or being misled about, how they work and what their real impact may be.
In his 1980 book The Social Control of Technology, scholar David Collingridge coined the term “the Collingridge dilemma”. The dilemma refers to the difficulty of knowing when technologies should be regulated: it takes time to understand what a technology’s impact might be, but by the time that understanding arrives, the technology may have already become too entrenched in society to effectively regulate.
Balancing this complex interplay of information and power has no simple solution, but that doesn’t mean that we shouldn’t ask the question. Many advertisements for new technologies assert that they will be long-lasting, and as a result, that you have to join now or you’ll be behind forever. These messages are premised on the idea that since you have to use the technology anyway, they might as well tell you about the supposed benefits without needing to deal with the consequences — and yet those benefits often fail to materialise.
Despite the allure of technological determinism and solutionism at the heart of many of these messages, people are still at the centre of technological problems and their solutions. In her book Future Histories, lawyer and digital rights advocate Lizzie O’Shea writes that “Digital technology, for all its scope to watch us, think for us and automate our jobs, is still reliant on us to operate. It is not a mysterious or unexplained apparatus; it creates places that both shape us and can be shaped by us; it is created by people, it is guided by decisions made by people, it is owned by people.”
“People can also stop it from working. We have the opportunity to reclaim the power of technology, to appropriate the machinery and use its gears, wheels and levers, its silicon and glass, and repurpose it for the good of the many rather than the few.”
If technology is to mediate so much of our existence, we need a better way to talk about it. This starts with finding a better way to think about it. Dealing in simplistic absolutes and intentionally confusing marketing language will only get us so far. Better technological futures are possible. They exist on the other side of difficult conversations and the collective actions needed to support them. Perhaps it’s time to let ourselves fall behind and spend more time figuring out where we really want to go.