The masked face of biometric COVID policing

Facial recognition-based programs legitimise a deeply intrusive and dangerous biometric policing system.

Australian states have begun widespread trialling and implementation of home quarantine programs that use facial recognition and GPS data to monitor and enforce quarantine restrictions for overseas arrivals. In the context of rapidly expanding electronic surveillance programs and police abuse of COVID-19 health data, there are concerns these systems will play an ugly role in normalising biometric policing on an unprecedented national scale, placing marginalised communities and activists at greater risk.

The quarantine trials mark the most recent instalments in a whirlwind federal expansion of biometric and conventional data collection, surveillance, and usage over the last five years. Notable milestones include the creation of Facial Matching Services by numerous state police departments, the use of highly invasive FinFisher spyware by NSW Police, and the mass expansion of data surveillance capabilities in the recent federal Identify and Disrupt Bill.

Legislation for the creation of a vast, federally centralised police facial recognition database called “The Capability” failed in 2019 because of vague usage terms and a lack of legal safeguards. Nevertheless, individual states have taken to implementing their own versions while updated legislation is drafted.

Privacy experts and activists have sounded the alarm, criticising the rapid expansion of police powers and pointing to its glaring lack of regulation. The Australian Human Rights Commission has called for a moratorium on biometric systems in policing, law enforcement, and social security.

Yet even in this landscape, the introduction of facial recognition-based health programs presents a distinctly concerning development.

Crucially, the programs further blur the line between health and policing data. Police access to supposedly ‘private’ health data is nothing new, and recent revelations that several state police departments accessed COVID-19 check-in data for unrelated criminal investigations have buried any illusion of separation. Facial recognition quarantine programs are critically under-regulated, with no inbuilt protections against police access to data. Not only does such access grossly violate people’s right to the privacy of their most intimate information; it also undermines genuine public health programs by stoking fear of undisclosed intelligence sharing.

Compounding this is the way that quarantine programs “help to normalise the use of facial recognition software for policing,” as UNSW Scientia Professor of Artificial Intelligence Toby Walsh warns.

In a process known as ‘function creep’, the initial use of surveillance technology for ‘palatable’ or ‘neutral’ administrative functions — such as quarantine — can lay the groundwork for public acceptance of wider-scale, more invasive, and more explicit data collection. “It permits surveillance on a massive scale and in places where previously we could expect privacy or anonymity,” Walsh says. “You would no longer be anonymous when you went about your lawful business.”

The ubiquity of FaceID and other commercial applications has undoubtedly played a part in taming fears of facial recognition software. The critical difference with quarantine programs is that they breach the once-red line of direct government collection and use of facial data for law enforcement purposes.

Doing so softens public perception of a policing regime with deeply troubling implications not only for general privacy, but particularly for communities and groups that already face the sharp end of police repression.

Characterisations of overpolicing through facial recognition are often flattened and broad: allusions to Big Brother or China’s Social Credit System are frequently the conceptual beginning and end of such discussions in everyday media. In reality, biometric policing’s everyday abuses are less novel inventions than amplified versions of existing oppressive policing structures.

Existing facial recognition programs are known to chronically and disproportionately misidentify women and people of colour — a product of the institutionally racist and sexist tech industry that coded the programs in the first place. When a misidentification can determine whether increasingly militarised police show up at someone’s door, it’s not difficult to see how the expanded and normalised use of discriminatory AI tools like facial recognition can exacerbate police brutality and repression against already marginalised communities.

Those opposed to government and police practice also stand as likely targets of increased repression. Activists, journalists, and whistleblowers who actively challenge the state are prime candidates for monitoring. Considering police have all but explicitly stated their intention to use facial recognition to identify protesters, alongside raids on investigative journalists and tireless campaigns against corporate and intelligence whistleblowers, it seems inevitable that those seeking to expose the worst rot of the state will be among the first to have its cameras trained on their faces.

The introduction of facial recognition for home quarantine reflects a sad paradigm of advanced technology: innovations with the potential for mass public health gain when genuinely used in the service of human need may instead play a leading role in facilitating the repression of those striving for it.