No matter how much effort you put into your assignment, a tutor only marks the final product, regardless of whether you used AI to develop ideas, or to check, improve or write sections of your paper. All those hours of work you put in, including the hours of sleep you sacrificed to meet the deadline, produce something an online bot can write faster than you can snap your fingers, and sometimes your tutor can’t tell the difference. Because generative AI relies on the instructions it is given, it is generally viewed as acceptable to use it for brainstorming, as long as the bot’s original idea is adapted enough to count as the student’s genuine work. Where exactly this line sits is hazy, and with AI developing at a galloping pace, academic institutions have not had time to decide or define what is and isn’t acceptable.
The recent frenzy around this issue was triggered by the release of ChatGPT in late November 2022 (following the release of GPT-3 in 2020), and escalated with the release of GPT-4 in March 2023, which, unlike the original ChatGPT, can browse the internet, extending its reach beyond a fixed training dataset. The technology is developing too fast for universities to commit to long-term responses or solutions, and announcements by various tertiary institutions can contradict each other or be quickly revised. The University’s Academic Integrity Policy 2022 states that “it is an academic integrity breach to inappropriately generate content using artificial intelligence to complete an assessment task.” What counts as “inappropriate” is far from concrete: rules vary between units, with some forbidding AI entirely and others permitting its use to generate ideas.
On the other hand, generators are increasingly being integrated into scholarly writing, to the extent that sections or even entire papers are being written by tools such as ChatGPT. While AI detection systems like Turnitin’s have become widespread, it is harder to check whether an academic paper about AI was itself written by a bot, and whether it still counts as an academic paper if a substantial portion of it wasn’t written by its human authors. The responses AI generates are based on vast amounts of data drawn from across the internet, which it cannot distinguish as reliable or unreliable.
To respond to the challenges for assessments, the University encourages teachers to try other formats, such as oral or multimodal exams, as well as running assessments in multiple stages. Its page on “How AI can be used meaningfully by teachers and students in 2023” also suggested that teachers could use ChatGPT to create lesson plans, quiz questions or exemplars for critique, in much the same way that students are allowed to use it to develop and refine ideas in assignments. Ultimately, the University concedes that “it is not possible to design an unsecured assessment that is completely ‘AI-proof’”.
Honi spoke to Jose-Miguel Bello y Villarino, a University of Sydney professor who specialises in artificial intelligence, about how we should respond to the technology and the way it is spreading through our education system. Bello y Villarino stated that “the issue was that when ChatGPT in the current version was released… people discovered what it could do in a way that was much more interactive and for free.” He added that “if you want students to use generative AI, or other types of AI, whatever it is, and be able to develop a skillset to use it in the future, then assessments have to change much more substantially… instead of starting with a blank space, you would say, ‘What is the common knowledge that generative AI can give me?’”
While AI remains controversial and unpredictable where academic integrity is concerned, it is rapidly developing into a popular field of study: census data from 2021 shows that 630 people held a qualification in AI, a 200% increase on the 2016 census. Those figures predate the rise of generative AI and the variety of concerns it raises, so the 2026 census is likely to show another significant increase. Adjacent fields have grown too: Information Technology counted 470,000 graduates in 2021, a 36% increase from 2016. At USyd, there are units like COMP3308 (Introduction to Artificial Intelligence), while over at UTS there are entire Bachelor’s and Master’s degrees in AI. These new courses reflect AI’s growing potential as a pathway to entire careers, including “AI analyst, machine learning engineer, AI specialist,” and so on.
Bello y Villarino used the example of a paper he reviewed, which appeared to have been written with generative AI, to explain his views on its ethical implications for academia. “The people, given the type of literature they used, they were probably not native English speakers, they probably didn’t have access to editorial services, all these kinds of things. But clearly, the underlying research was their data and their work… But it made me reflect: these people could have gotten assistance the same way that they could have gotten an editor… It would be less noticeable, because it was a human.” He called ChatGPT “the great equaliser… now everybody is on the same playing field. The problem is, if the knowledge you’re trying to generate, it’s coming from what ChatGPT is doing… creating the appearance that you’re generating new knowledge… I think people should be transparent about generative AI. I think the blanket banning makes no sense. I think you should be clear about what is the research behind it and where you got the assistance from.”