What does it mean to be a doctor when a computer can make better decisions than you?

Artificially intelligent clinicians aren’t an inevitability, and nor should they be.

Art by Katie Hunter

The proliferation of artificial intelligence in all industries is an inevitability, or so we are often told by technocrats. In 2020, Andrew Yang ran for US President with this premise at the fore – he argued that the American Left should not attempt to resist or mitigate the proliferation of artificial intelligence and automation but should instead adapt to it. Yang, like many progressives, essentially believes that automation of the workforce is an opportunity to implement policy priorities like a universal basic income and a cultural shift away from the norm of full-time work. While these are no doubt worthwhile political goals, I have always felt this view is uncritical both of the implications of automation for social connection and of the immense potential for oppression when classes of people are entirely unable to sell their labour.

To tell the whole truth though, I always thought my own career plan lay outside the terrain of the automation debate. In spite of all the machines, the awful-smelling corridors and the pharmaceuticals with jargonistic names, being a doctor is really the most human of professions. Being the political optimist that I am, I thought that most people would prefer that their doctor was not a robot. That was until, in the orientation week of my second year of medicine, my cohort were earnestly asked to consider what it might mean to be a doctor when a computer can make better decisions than us. Two assumptions at the core of this question strike me as dubious – that the impending dominance of AI is inevitable, and that the best clinical decision is one free from errant human-ness.

In professional scientific degrees like medicine, we are rarely taught to distinguish normative from empirical claims; we are often vulnerable to believing that the way things are is necessarily the way they should be. While we are seldom encouraged to make value judgments or shape policy, I would say it is certainly worth interrogating whether a proliferation of medical artificial intelligence would actually be a good thing. We need not simply accept that it will come to pass and hope to adapt.

In limited contexts, automation is conceivably positive in healthcare. Many clerical tasks, for instance charting vital signs over time, have already been semi-automated. This gives clinicians more time than they would have had in the past to attend to the actual patient – to examine them, to question them and to counsel them on the causes and management of their medical problems. Some very common investigations, for example the chest X-ray or electrocardiogram (ECG), are often misinterpreted, and hence serious, preventable illnesses are sometimes missed; tools that flag possible abnormalities for human review could plausibly reduce these misses. Technology that assists human clinicians to centre patients and keep them informed is most likely a good thing for healthcare when used critically.

I believe that this is the point where automation should end. The automation of clinicians themselves must be resisted. The most obvious reason is that while objective criteria for diagnoses exist, clinicians must situate them within the context of a very human person. A small woman with heavy menstrual periods is likely to have lower iron levels than the average male. Indigenous Australians are genetically predisposed to have, on average, 30% fewer glomeruli – the functional unit of kidney filtration – than the average Caucasian. Only human clinicians are taught to navigate these complexities, and to navigate conversations about normal variation that are often culturally and personally sensitive. Importantly, handling these complexities cannot be “machine learnt”, as it is highly contingent on patient–doctor rapport.

Additionally, contact with the healthcare system is often traumatising, especially for women and minority groups. While some of this is inherent to the process of being sick, clinical failure to manage emotional trauma and isolation exacerbates it substantially. Presumably a robot that categorically cannot feel like a human would be unable to respond appropriately. Healthcare is also rife with structural biases – sexism, racism, ableism and fatphobia – that reduce many people’s ability to access care and undermine their trust in the system overall. Artificial intelligence would have to be trained on the norms of diagnosis and treatment that exist presently; the biases interwoven into those norms would then be locked in, with no possibility of change. Medicine is deeply flawed, but humans, with their ability to dynamically respond to and question existing norms, will always be better equipped.

Humans are errant: they often have irrational preferences, and they are usually scared of their own mortality. In medicine, these things matter, and that is why, despite the artifice of objectivity, we should never replace clinicians with artificial intelligence. To finally answer last month’s question: it will always mean an awful lot.