I’m always a little shocked when people say that privacy is dead. I’m a person who loves finding solutions; I don’t like the idea that something might be so far gone that we can’t fix it. But I understand why people say it.
All it takes is looking at the news to see another data breach, opening a website with a privacy policy that rivals the Iliad in length and readability, or scrolling through a social media platform only to see that thing you were talking about with a friend right in the middle of your feed. It makes sense, then, that when you read the line “I have read and agreed to the privacy policy”, you click yes. Not because you’ve read it, or even because you agree to it, but because it’s quicker, it’s easier and there is no other option.
Despite the structures that create privacy concerns, privacy is framed as a personal responsibility. If privacy is individual, then all a person has to do to maintain their privacy is not share their information. If they don’t give it away, it can’t be misused. If they have an issue with a social media platform, they can simply not use it. But this doesn’t work, or at least, it doesn’t work anymore. On a practical level, other people in your life probably have your phone number, photos of you and other pieces of information. Unless you refuse to tell anyone anything about yourself, that information is out there. For young people especially, simply opting out of sharing your information is not an option. Sharing has become a necessary element of interacting with the wider world. Whether we like it or not, most of us live our lives online, albeit to varying extents.
Samantha Floreani, program lead at Digital Rights Watch, says that “we have grown up in this environment where we really have very little power or say or ability to exercise control in this environment that has been designed to invade our privacy and collect huge amounts of our data and to do it in a way that is socially rewarding to participate [in].”
In an age of surveillance capitalism — a term coined by scholar Shoshana Zuboff to describe the phenomenon where more and more of our lives are transformed into data that feeds a global information economy — almost anything is a privacy concern. That program you use in class to submit an assignment. That app on your phone. That rental application you just filled out. That protest you just attended. In a 2022 journal article, Zuboff wrote that “for surveillance capitalism to succeed, privacy must fall. And fall it did.”
In the death — or at least, decline — of individual privacy, we must act to keep collective privacy alive.
———
“The value of our data is in the aggregate,” says Floreani. “It’s in relation to other people’s data.”
As more and more user data is collected, organisations are able to use vast data sets to monitor, predict and shape our behaviour. This creates significant information asymmetries between the people collecting the data and the people the data comes from — us. Floreani describes how the power in these asymmetries lies in “who gets the information, who gets the knowledge, who gets to make decisions based on that, and who is subject to those decisions.”
“There’s this huge information asymmetry and imbalance of power between governments and also companies who collect and hold and analyse and use just immense amounts of our data and the people on the other hand have not only very little power over that, but also often very little comprehensive understanding of how that’s happening.”
If we are able to shift the way that we look at privacy, Floreani explains that “protecting privacy becomes much more of a sort of collective demonstration of solidarity with each other rather than this hyper individualistic ‘well, I’m going to keep everything secret to myself’.”
As a collective demand, privacy becomes a connected issue that deals directly with the concentration and use of power.
———
Privacy is not absolute. This means that protecting privacy often involves balancing other interests, rights and freedoms. In the digital rights ecosystem, there are vast power imbalances everywhere. Digital rights activists have to grapple with the interests of the tech industry, from small businesses through to transnational organisations, and governments, including their agencies and policy makers. This can also make conversations about privacy particularly challenging.
“It really benefits those who would seek to collect, generate, use, monetise our data to project privacy as something that gets in the way,” says Floreani. “They really rely on that idea, because of course for them, it gets in the way of their, what I would describe as a harmful business model.”
“If we take them on their terms, if we allow them to set the terms of the debate, that privacy is a barrier, privacy gets in the way of innovation, privacy gets in the way of you being able to enjoy the conveniences of modern life. Then … we cede so much ground to them in doing that because we’ve sort of accepted the premise of the debate, which I think is really false.”
This projects a storyline in which invasive data collection is “a technological necessity”: if “you want to enjoy all of the tech of modern life, then you have to accept that that comes with your rampant data collection and surveillance capitalism.” This rhetoric matters. It impacts how people understand and think about privacy, and what they demand as a result.
Vast and invasive data collection does not need to be a necessity. For example, messaging app Signal has differentiated itself from its competitors with its default encryption and tangible focus on privacy (rather than the buzzword that it may be for other organisations). Signal Foundation President Meredith Whittaker recently spoke with Rest of World about the advantages that Signal has over other messaging services, saying that “We’re not actually a surveillance company. I’m not trying to pretend Facebook is good. I don’t have to toe a party line that is divorced from reality. And we aren’t Big Tech.” Signal’s solution to increasing requests from governments for data is to not collect the data from the start. Whittaker says, “We literally don’t have the data, which is the only way to actually preserve privacy.” If they don’t have the information, there is nothing to hand over.
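To make “we literally don’t have the data” concrete, here is a minimal sketch of end-to-end encryption in Python, using the PyNaCl library. To be clear, this is not Signal’s actual protocol (Signal layers far more sophisticated machinery, such as the Double Ratchet, on top of primitives like these); it only illustrates the principle that messages are encrypted and decrypted on users’ devices, and the service in the middle handles nothing but ciphertext.

```python
# A minimal sketch of the end-to-end principle, using PyNaCl (libsodium).
# Not Signal's actual protocol; the point is that the server only ever
# relays ciphertext, so there is nothing meaningful to hand over.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device.
# Private keys never leave the device; the server never sees them.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"see you at the rally at 6pm")

# The service stores and forwards only this opaque blob. A request
# for "the data" can yield nothing more than this.
print(bytes(ciphertext).hex()[:48] + "...")

# Only Bob, holding his private key, can recover the message.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"see you at the rally at 6pm"
```

The design choice is data minimisation taken to its logical end: because the decryption keys exist only on users’ devices, the operator cannot comply with a demand for message content even if it wanted to.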
———
“I don’t think it’s reasonable to sort of gamble away our rights and freedoms based on this kind of loose hopeful projection that maybe one day this really powerful surveillance mechanism might do some good,” says Floreani.
Facial recognition is one of the most pervasive threats to privacy and anonymity. These systems have continued to gain popularity in recent years, even amidst growing privacy concerns. Facial recognition is expanding further and further into supermarkets and retailers, stadiums, casinos, governments and beyond. Proponents of these invasive, oppressive and punitive systems are quick to point to the potential benefits of the technology.
“When I talk with people about facial recognition, which is quite often, people will often say things like, ‘oh, but there are all of these really good positive use cases.’ Like, we don’t wanna get in the way of the positive uses of it, right? And it honestly drives me wild with frustration because firstly when you ask them what are those positive use cases, they’re very limited, like they kind of run out of ideas really quickly,” shares Floreani.
Amidst these limited positive use cases, there are extensive and significant risks of facial recognition. Beyond the privacy risks that arise from storing biometric data (which is produced from biological measurements of your face), these systems disproportionately impact minorities. From inaccurate results to embedded bias, the use of these systems causes serious harm. And this is before considering the potential for misuse.
“It just is indicative of how facial recognition technology can be wielded, that you can use it to track people that you consider an enemy or an opponent or somebody troublesome to you,” explains Kashmir Hill. “And they will have no idea that it’s happening unless you act on that information.”
Hill, a New York Times journalist covering technology and privacy, has been following the rise of Clearview AI for the last few years. In 2020, Hill revealed that Clearview AI had developed a facial recognition app that allows you to search someone’s face through the use of biometric data and see the images that exist of them online, complete with links to their sources. The company claimed that its database had been formed from billions of images scraped from across the internet, including social media platforms like Facebook, often in breach of those platforms’ terms of service.
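For readers wondering how a search like this works mechanically, the common pattern in facial recognition is straightforward: a model reduces each face photo to a numeric “faceprint”, and a search simply finds stored faceprints close to the probe. The Python sketch below is hypothetical (Clearview AI’s actual system is not public, and embed_face here is a stand-in for a trained model), but it shows that once the biometric database exists, the search itself is almost trivial.

```python
# Hypothetical, stripped-down illustration of face search; Clearview AI's
# actual pipeline is not public, and embed_face is a stand-in.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Stand-in for a neural network that maps a face photo to a
    fixed-length 'faceprint' vector (the biometric data)."""
    raise NotImplementedError("a real system uses a trained model")

def search(probe: np.ndarray, database: dict[str, np.ndarray],
           threshold: float = 0.8) -> list[str]:
    """Return source URLs whose stored faceprints resemble the probe.
    Cosine similarity near 1.0 means 'probably the same face'."""
    matches = []
    for url, faceprint in database.items():
        similarity = probe @ faceprint / (
            np.linalg.norm(probe) * np.linalg.norm(faceprint))
        if similarity >= threshold:
            matches.append(url)
    return matches

# The privacy problem lives in the database itself: billions of scraped
# photos, each reduced to a searchable biometric vector.
rng = np.random.default_rng(0)
database = {f"https://example.com/photo/{i}": v / np.linalg.norm(v)
            for i, v in enumerate(rng.normal(size=(5, 128)))}
probe = next(iter(database.values()))  # pretend this came from embed_face()
print(search(probe, database))  # matches the photo it came from
```

The hard part, and the heart of the privacy concern, is not the arithmetic; it is the database of billions of scraped faces that the arithmetic runs over.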
Despite investigations from national privacy governance bodies in Australia, the United Kingdom, Canada and Germany finding that Clearview AI had breached their respective privacy legislation, Cam Wilson revealed in Crikey that members of the Australian Federal Police were still meeting with the organisation. After the Australian Federal Police was found to have used the program, the Australian Information Commissioner and Privacy Commissioner issued a determination that the force had interfered with the privacy of people whose information had been shared with Clearview AI. Whilst the determination included declarations that the police were not to repeat their use of the program, their interest in it has remained.
It was not the creation of a service like Clearview AI that was so surprising; it was that it had appeared from seemingly nowhere. It wasn’t created by Google or Facebook — both, according to Hill, “regarded it as too taboo to release to the world” — or a similarly large technology company. Instead, Clearview AI was co-founded by Hoan Ton-That and Richard Schwartz (with the help of various supporters over the years).
In her book Your Face Belongs To Us, Hill discusses the impact of “technical sweetness” — the excitement that scientists and engineers feel about innovation and development that can overpower concerns about the impact of their inventions — on the development of these technologies.
“I talked to all these facial recognition experts who had been working on it for decades, and they all assumed that there was gonna be someone else who thought about the ethical implications,” says Hill.
Hill suggests that the United States has been slow to develop privacy protection because of an emphasis on “technological progress trumping other concerns, of focus on freedom of information, speech, over protecting people’s data, or protecting people from harm.” Much of the protection in the US operates on an opt-out basis; however, the effectiveness of these kinds of mechanisms is often limited. Hill argues that the “opt-in approach is the only one that’s effective at scale.”
“California has a population of 34 million people and over the last 2 years, fewer than 1000 of them have deleted themselves from the Clearview AI database,” explains Hill. “And so I think when you create opt-out mechanisms, there’s just very few people who have the knowledge, time and inclination to exercise those rights and that’s why companies love opt-out because they know that most people won’t.”
Even when these mechanisms exist, their ongoing availability is not always stable. Whilst Clearview AI had previously allowed European citizens to request the deletion of their data from its database (due to protections under the General Data Protection Regulation (GDPR)), its approach changed this month: after legal action, including successfully appealing a $9.1 million fine in the UK, it is no longer deleting the data of European citizens.
———
And yet, the hope for a more equitable digital future is not lost.
Whilst there has been interest from the Government, and the public, in privacy reform — particularly in response to the Privacy Act Review report released earlier this year — transforming this interest into concrete legal reform is more difficult. The Government must take the next step to legislate proposed changes. Floreani emphasises that “It’s really a matter of holding them to that promise and making a really clear strong public mandate for change, really demonstrating that a lot of people really care about this and they expect politicians to act on improving privacy protection.”
At the moment, the privacy protection you have, at least from the law, varies significantly based on where you live. This leaves citizens in the United Kingdom and the European Union with more extensive protection under the GDPR than people in many other jurisdictions, including Australia. The GDPR represents the current European approach, which favours opt-in collection and usage of data (where users agree to what data is collected and how they are willing for it to be used), compared to protection in the United States, which, where available, tends to be designed on an opt-out basis (where users can request their data to be deleted or removed). In an inherently global landscape, this makes regulating digital spaces somewhat complicated — but not impossible.
The legislative protection available in each jurisdiction both shapes and is shaped by the collective norms around privacy in the area. With variance across jurisdictions, it becomes even more difficult to regulate vast technology companies that operate across state borders, often without even needing an office in the state where they offer the service. In these circumstances, international collaboration would be particularly beneficial, although this again brings its own set of complications. Even if such collaboration is not possible (or probable) at present, the strengthening of any one nation’s regulations, particularly if that nation already has power or represents a significant portion of the market share of the service, has the potential to improve the privacy practices of these organisations globally. When the GDPR was introduced, organisations with users in Europe, even those not operating solely there, were required to bring their privacy practices in line with the new requirements if they wanted to keep the market. It is often easier to meet the most stringent protections than to operate with different versions in different jurisdictions — unless the company is willing to opt out of that market entirely.
When we discuss the potential for legislative reform, Hill describes the regulatory response to listening devices. With bugs and wiretaps making people concerned about their ability to have a private conversation, “we didn’t just give up and say… ‘that that’s just how it is because the technology exists’. We passed laws that made it illegal to eavesdrop on people or to wiretap a phone, and only the government was supposed to do it, and if they did, they needed to get a warrant, they needed to get special permission from a court to do it.”
“We did constrain the technology, and we decided that there’s something sensitive about what we say, and we wanted to protect it.”
“I do genuinely believe that technology can be a force for good, but our current experience of it is so dominated by these companies that really only seek to maximise their own revenue,” says Floreani. We have seen, and continue to see, the issues that arise from profit-driven platforms — from surveillance to misinformation to radicalisation and beyond. These issues often seem to occur by design, in service of the very thing that produces them: profit. “All of these things function in order to reap the most profit as possible by keeping us on these platforms and generating more data, which feeds into the ad tech system and the data broker industry and so on and so forth,” explains Floreani.
“What would technical innovation look like if it was designed to optimise wellbeing or leisure time or genuine connection or whatever the thing may be that isn’t profit … what might technology do for us then?”
Floreani suggests that “capitalist realism” — a concept that suggests that the prevalence of capitalism prevents even the possibility of imagining alternatives — may be limiting the digital futures we are able to imagine. Technology is not inherently good or bad. Its creation, development and use are directly controlled by the people involved. As persuasive (or pervasive, depending on how you see it) as technological solutionism may be, throughout the whole process it is the people involved who matter most. We have the opportunity to imagine what digital technologies and platforms might look like if they were not designed and run for profit and the accumulation of wealth. We can design new digital futures, but we have to do it together.