The race for artificial intelligence

In this unfettered rush towards developing AI, what do universities need to stop and think about?

Artwork by Victor Lee. Modified by Alison Xiao.

In April this year, over 50 academics from around the world sent a letter to KAIST, a top South Korean science and engineering research university. The letter announced a boycott: the signatories were refusing to work with KAIST and its partner, defence manufacturer Hanwha Systems. This was not a campaign undertaken lightly. Professor Toby Walsh, an artificial intelligence researcher at the University of New South Wales, says it was the first time researchers had taken direct action.

Led by Walsh, the brief but effective campaign had one focus: to prevent KAIST and Hanwha, a known developer of internationally banned cluster munitions, from developing an AI weapons lab and autonomous weapons. The creation of autonomous weapons is a “Pandora’s Box that will be hard to close if it is opened”, the letter warned. Hours after the campaign began, the president of KAIST announced that the university had no intention of conducting such research. Four days later, the boycott was lifted.

***

Over the past few years, public and private interest in AI has surged. AI research concerns itself with making programs ‘smarter’. There is particularly strong interest in the subfield of machine learning, whose algorithms allow programs to teach themselves: from large datasets, they extract useful patterns, allowing them to make increasingly accurate predictions and classifications.
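To make that loop concrete, here is a minimal, generic sketch of supervised machine learning, written with the open-source scikit-learn library and a classic bundled dataset. It is an illustration of the general technique only, not a depiction of any system discussed in this article.

```python
# A minimal supervised machine-learning example: the model learns a decision
# rule from labelled examples, then predicts labels for data it has not seen.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # a small, classic labelled dataset

# Hold some data back so we can check predictions on unseen examples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000)  # a simple classifier
model.fit(X_train, y_train)                # extract patterns from the training data

# Accuracy on held-out data: the "increasingly accurate predictions
# and classifications" described above.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```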

AI has already been used in university settings: Dr. Danny Liu, a senior lecturer at USyd, has implemented the Student Relationship Engagement System (SRES), an analytics system that applies machine learning algorithms to help teachers “uncover patterns that may be difficult or impossible for a human to see”. For example, analysis of one class revealed that the more students engaged with online discussions, the worse their grade outcomes were. Mechanisms like this allow education to be personalised for each student.
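As a toy illustration of the kind of pattern such a system might surface, a few lines of Python are enough to quantify how a measure of engagement relates to grades. The numbers below are invented for the example; they are not drawn from SRES or Liu’s data.

```python
import numpy as np

# Hypothetical class data (invented for illustration): discussion-board
# posts per student, and the final grade each student received.
posts  = np.array([ 2,  5,  8, 12, 15, 20, 25, 30])
grades = np.array([78, 74, 70, 66, 65, 60, 58, 55])

# A negative correlation would reproduce the counter-intuitive pattern
# described above: more discussion activity going with lower grades.
r = np.corrcoef(posts, grades)[0, 1]
print(f"correlation between posts and grades: {r:.2f}")
```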

This mirrors a wider, global trend of incorporating AI use into university teaching. Take the 2017 Georgia Tech AI class, for instance. Unbeknownst to students, their favourite teaching assistant, Jill Watson, was a software bot created by Professor Ashok Goel, based on IBM’s Watson supercomputer.

But as Walsh says, “AI, like most [things], is dual use.” Walsh is a member of the national Artificial Intelligence Ethics Committee, and his AI-centred book, 2062, is being published this Monday. He is hyper-aware of how AI has the power to help and to harm. The negative consequences of technological progress were apparent earlier this year, for instance, when USyd saw outcry over possible privacy breaches on the learning management system Canvas. Students discovered that Canvas-based class surveys, which were described as anonymous, could easily be de-anonymised, allowing lecturers to see the names of students who had left potentially critical feedback.

***

Aware of both the benefits and risks of AI, the Australian government has allocated $30 million to AI research in its most recent budget. Part of this sum is earmarked for building a national Ethics and Standards Framework that “will guide the responsible development of these technologies”. The difficulty lies in agreeing where to locate the ethical focus: on the short term or the long term.

The short-term focus is on how automation will disrupt—and already is disrupting—traditional labour dynamics. For some, like New Scientist journalist Alice Klein, it’s not all doom and gloom: “As intelligent machines take jobs, they’re also going to create them—and potentially more interesting ones,” she said at last Tuesday’s Outside the Box: Ethics in AI panel. But for others, the prospects are bleak: USyd Professor of Philosophy David Braddon-Mitchell believes that automation will launch us into an era of “neo-feudalism” where control over goods and services rests solely in the grasp of future “robobarons”.

One widely discussed long-term risk is a distant but chilling scenario: a world where AI can create better AI programs than human programmers can, and, by creating ever-improving iterations of itself, generates an explosion in intelligence. Think tanks like the Centre for the Study of Existential Risk, led by USyd alumnus Huw Price, focus on preventing the existential threat a superintelligent AI might pose to the human race. But Professor Marcus Hutter of the Research School of Computer Science at the Australian National University argues that this focus is misplaced. “Established philosophers seem to be mostly out of touch… [with] assessing realistically what is possible,” he said in an interview earlier this year.

While philosophers and computer scientists diverge on the realisation of long-term risks, they seem to agree on the importance of a regulatory framework in the short-term—something to ensure that AI is developed with purpose and control.

“AI is not an actual existential threat,” Walsh agrees. “We should worry about stupid AI.”

***

We don’t need to imagine social cataclysm to see ethical questions in AI research—the question of who does the research and how it is done raises problems in itself. As Braddon-Mitchell notes, AI research is no longer centralised in universities. Previously, “you could keep good track on what was happening, who was doing it, and if you wanted it deregulated or controlled in some way.” Today, however, “the vast bulk of AI research is now in the private sector, and it’s being done by the giant and incredibly wealthy tech companies”. Many of these companies, like Google and Facebook, have their own internal ethics guidelines, but Walsh argues the scale of their revenues means “there’s immense temptation for them to behave badly”.

Even when research is done in universities, it is often sponsored by industry groups, and so it is vulnerable to their interests and aims. USyd’s AI research hub is named after UBTECH, an AI and humanoid robotics company, which funded the centre with a $7.5 million donation last year. Google has also established a presence, setting up four fellowships for computer science PhDs at different Australian universities.

Walsh suggests there is significance in how AI is applied, beyond how its funding is sourced. Meanwhile, Professor Manuel Graeber, of USyd’s Brain and Mind Centre, does not believe the source of AI funding can be divorced from its application. “It should not be left to the mercy of companies,” he says. Instead, he criticises decreases in public tertiary education spending, arguing that “funding for universities… should primarily come from governments” to ensure autonomy.

Both Walsh and Graeber agree that AI research must be rooted in public interest scholarship. The pair decried the current lack of research focusing on the social impacts of AI. This has ramifications for risk management and policy, as it’s difficult to pinpoint what to regulate when the risks of AI remain uncertain.

But the emphasis might be shifting—Walsh has recently received a government grant to set up the Centre for the Impact of AI and Robotics at UNSW, which will be tasked with studying AI impact and promoting its benefits. This pattern is playing out worldwide: last year, the UCLA School of Law established an AI legal course after receiving a $1.5 million non-profit grant to study “disruptive societal and legal changes stemming from artificial intelligence.”

Hutter remains a dissident voice, arguing that funding is too heavily skewed towards “useful AI systems in the short to mid-term”. He believes instead that “we should pour more money in long-term…fundamental, basic research.”

It’s difficult to convince donors to sponsor this kind of research, Hutter explains, because the immediate benefits are hard to demonstrate. “Even with government funding, you have to add a paragraph about the social benefits of your research,” he says. “It’s very hard to get funding.”


***


As automation proliferates, the study of AI is changing from a standalone discipline into a multi-industry field. Its applications now extend from helping retail companies target their advertising to improving leak detection in the water industry.

With this development, the importance of AI-literate graduates is increasing, as is the role of universities in producing them. Perhaps recognising this shift, part of the Australian government’s AI budgetary allocation goes towards PhD scholarships, to increase knowledge and develop the skills needed for AI and machine learning.

Currently, Australian students who want to specialise in AI have three options: undertaking a Bachelor of Computing (Data Science and Artificial Intelligence) at Griffith University, a Master of Computer Science (Machine Learning and Big Data) at the University of Wollongong (UoW), or specialising in artificial intelligence through a Master of Computing at the Australian National University (ANU).

These degree programs share a commitment to teaching fundamental skills in intelligent systems, perceptual computing, data mining and robotics. What they also share, however, is the absence of any mandatory study of AI ethics.

Salah Sukkarieh, Professor of Robotics and Intelligent Systems at USyd, noted in the Ethics in AI panel that a national ethical framework must go hand in hand with education reform.

“Engineers perceive AI as a set of tools we want to advance, and see how far we can get,” Sukkarieh argues. “There is no value system around [AI]—we’re not trained to think about that,” he says, stressing the importance of teaching AI ethics and human rights.

In the United States, Carnegie Mellon University is one of the few institutions following Sukkarieh’s approach. In the next few months, the university will launch the first undergraduate AI degree in the US. Students enrolled in the new course will be required to complete at least one elective in AI ethics, chosen from three on offer: ‘Artificial Intelligence and Humanity’, ‘Ethics and Policy Issues in Computing’ and ‘AI, Society and Humanity’. While the range of ethics subjects is not particularly broad, Carnegie Mellon is still outdoing ANU, UoW and Griffith, which so far have no courses in AI ethics at all.

For Graeber and Walsh, incorporating ethical frameworks into the study of AI is rooted in a basic principle: engineering and science cannot be learnt without engaging with the ethics of their application.

“These [engineering and science fields create] technologies that change our society, and you need to worry about that,” says Walsh.

That is particularly the case, he notes, as the rate of change has been exponential—far greater now than even 20 years ago. With the growth and spread of technology in our everyday lives, the potential for drastic impact is high.

Carnegie Mellon’s School of Computer Science appears to agree with Walsh’s belief in the importance of studying science’s social impacts: the course guidelines for its AI major emphasise “AI for social good”. To apply that philosophy, students enrolled in the degree can take part in “independent study projects that change the world for the better—in areas like healthcare, transport, and education.”

***

It’s difficult to evaluate whether studying ethics at university will make students better employees when they enter the workforce. Can individual programmers and developers nestled within large corporations really pursue responsible programming? Can they really refuse to work on projects that contravene their ethical codes—even when their employment depends on it?

This question came up at the Ethics, Safety, Industry and Governance panel held at an Artificial Intelligence / Human Possibilities event in Melbourne last year. AI researcher Peter Cheeseman echoed Braddon-Mitchell’s argument that control over AI will be concentrated in the hands of a few: the owners of the technology make the decisions. He pointed to the Manhattan Project, where the US government deployed the atomic bomb in spite of scientists’ pleas. “What the scientists think doesn’t really matter,” Cheeseman said.

SingularityNET CEO Ben Goertzel agreed with Cheeseman, saying that one person quitting an objectionable project does little to stop the work going ahead, especially if it’s successful. “It’ll have to be a big team effort,” Goertzel said, almost as an afterthought.

Goertzel’s comments were prophetic. In June this year, Google announced it would not be renewing Project Maven—a research partnership with the Pentagon focusing on AI that could recognise faces and objects in drone footage. The announcement came after thousands of Google staff, following in the footsteps of Walsh’s campaign only a month prior, protested against the project, citing fears the technology could be deployed for warfare. Over 4,000 Google employees signed a petition calling for a clear policy that “neither Google nor its contractors will ever build warfare technology”. A handful of employees went a step further and resigned in protest at what they saw as a breach of ethical codes.

***

If AI ethics is so important, the question remains: why have Australian universities been slow to include it in their overhaul of science education? Of course, general AI courses are new, and anything new will always have teething problems. But part of the failure to embrace AI ethics may be attributed to the context in which universities operate today. In a market-oriented education system, universities are places where students go to become the best possible option for the labour market in their chosen field of study. Therefore, engineering and science studies are skewed towards what is profitable rather than what is ethical, reflecting industry metrics for ‘employability’.

There’s no clearer example of this paradigm than Griffith University’s new drone-focused engineering major. The Unmanned Aerial Vehicle (UAV) major, offered for the first time this year, produces graduates who are not only qualified electronics engineers, but also certified drone pilots.

In its press release announcing the new major, Griffith stresses the variety of industries that have started using UAV technology: UAV graduates will possess unique, in-demand skills, the argument seems to run—they will be profitable, attractive employees. There is no mention of the ethical problems surrounding drones, not least their potential to invade privacy and wage war.

The Griffith experience suggests that universities often behave opportunistically, treating sudden changes in industry as marketing angles for cash-cow degrees. Unless there is a shift in approach, further expansion of AI education risks continuing down the same path.

***

In a society increasingly reliant on data and algorithms, it is clear that ethics have become non-negotiable. As Sukkarieh notes, AI is a tool: powerful, but morally neutral; whether it does good or evil depends entirely on the user’s mindset and motivations. The future of AI, according to Walsh, is a function of the choices, right or wrong, that we make today.

“It’s a mistake that people tend to think you just have to adapt to the future somehow,” says Walsh. “Depending on the choices we make, there are many good futures we can wake up to.”

Perhaps there is a reason why so much academic attention surrounds the AI doomsday: fear may well be the only way to jolt ourselves awake, to make choices now rather than let the future just happen to us. Braddon-Mitchell agrees: “Talk to your friends, scare the crap out of them about the future.”

“Exaggerate if you have to.”