The 2004 MSN chatbot SmarterChild was a technological novelty in my sleepy hometown of Mount Albert, New Zealand. In the dial-up era, when a laser printer was revolutionary, a virtual buddy that would reply to my messages instantaneously was next-level sci-fi, more exciting than any imaginary friend a bored eight-year-old could have.
Now, 12 years later, SmarterChild is to artificial intelligence (AI) what the original Pokémon RPG is to Pokémon Go – outdated and obsolete. The pace at which AI has flooded onto the market – a threefold increase over the past ten years – has made it hard for technologies like SmarterChild to survive. In an innovate-or-die culture, most perish, or are killed off by their own creators.
Microsoft – an international technology corporation valued at over $500 billion, and a far larger operation than the one behind SmarterChild – also buckles under the pressure of breaking into new markets. A rush to meet consumer demand, however, often comes at the cost of ethical standards.
In March this year, Microsoft’s 13-year-old female-assigned Twitter chatbot, the sweetly named Tay, disintegrated into a racist Holocaust denier within fifteen minutes of going live.
“HITLER DID NOTHING WRONG,” tweeted Tay on 23rd March 2016.
Long-time Twitter bot-makers such as Rob Dubbin, maker of @oliviataters – a teen-girl bot similar to Tay – police their bots on a small scale, constantly refining and adjusting their algorithms so they don’t reproduce the harmful ideologies swimming around in a pool of unfiltered Twitter data. Microsoft, by contrast, had designed Tay to generate thousands of tweets per hour – a rate unprecedented among Twitter bots – yet coded no profanity filter into her algorithm, did no extensive filtering of the Twitter data she learned from, and kept no moderator on standby in case things went badly wrong. Without those safety measures, Tay was all but coded to become racist.
Because most of the AI we encounter today is designed to complete a single assigned task, it is easy to focus on its successes, and easy to attribute wayward behaviour like racism to programming bugs. But the failures of AI are microcosms of the future.
Last month, Beauty.AI was programmed to judge the beauty of contestants based on ‘objective’ criteria such as wrinkles and facial symmetry. Of the 44 winners chosen, only one had dark skin. The reason for this is complex, but it boils down to input data: the large datasets of photos used to establish beauty standards included few, if any, photos of women of colour. As a deep-learning machine, Beauty.AI independently established skin colour as another criterion of beauty: the lighter the skin, the higher the attractiveness.
The takeaway here is that content-neutral algorithms are not exempt from perpetuating human bias. After all, it is humans who create or contribute to these datasets, and ultimately it is still us doing the thinking.
Evidence of racial bias perpetuated by AI on a small scale raises questions about how it might translate to larger-scale AI projects. With the recent rise of autonomous AI weaponry, will these technologies similarly target specific races over others?
Eighty-seven countries are now known to use some form of military robotics, yet it was only in August 2015 that the Icelandic Institute of Intelligent Machines released the first established policy calling for regulation of the development of autonomous AI weaponry.
Evolution in most fields of research, such as law or medicine, goes hand in hand with policy development. Technological growth, however, races ahead of any such control. In a world where roughly 10,000 researchers work on AI, only around 100 (1 per cent) are dedicated solely to studying the failures that could arise as AI becomes multi-skilled.
The dangers of this disparity are akin to medical scientists concocting cures for diseases without researching possible side effects. Our preoccupation with the avant-garde, with successes, and with the pressures of breaking into the AI market leaves us bereft of reason. No concrete research within the ‘computer science community’ as yet explores the relationship between AI and race, and no government policy exists to moderate it. The relevant research emerging from the digital humanities and gender studies is little known in computer science circles – a sad reality that reflects the absence of an interdisciplinary approach to AI.
As of last month, five of the world’s largest tech companies – Amazon, Facebook, IBM, Microsoft, and Google’s parent company, Alphabet – had plans to come together to discuss the creation of an ethical standard and code of practice around AI. This is the most significant step we have seen towards actively ensuring a uniform ethical practice for AI. The public, however, remains in the dark about the specifics. We can only hope that the ethical frameworks of powerful tech companies include the active protection of minority groups.
As we innovate our way towards a future in which AI is woven into society much as it is in I, Robot, life imitates art in the concern we should raise: “Can we trust AI?” Given that AI is entangled with the complex prejudices of humanity, and that tunnel-vision progress is sometimes prioritised over ethical guidelines, perhaps instead we should ask: can we trust ourselves to make it?