In my first history lecture of the semester, a professor stepped up to discuss “the elephant in the room”: generative Artificial Intelligence (AI). However, before giving us the accurate but predictable set of reasons AI was flawed, he made an important concession: he had never used it.
Technology has always outpaced education, but his admission acknowledged that despite a massive top-down effort to prepare the University of Sydney for AI, the results are inconsistent and scattered. At the classroom level, unit coordinators and tutors adapt without consistent guidance and within a student body unlikely to move away from its growing reliance on AI chatbots.
The most obvious shift across the University is the wholesale reintroduction of in-person exams. Popular majors in the humanities like Philosophy, International Relations, and Ancient History, as well as the Law school, have quickly shifted away from take-home tasks and essays.
Exams are the easiest solution, ensuring students cannot use AI tools like ChatGPT in the first place. In African American Literature this semester, a third-year course now filled with exams, the lecturer repeatedly bolded the section of her emails to students declaring herself anti-AI.
This solution, while seemingly simple, is imperfect. For many older students, this may be their first pen-and-paper exam since Year 12. Most of the students I talked to are understandably worried that they have been trained to write traditional assignments and are now being thrown into the deep end without much support.
Beyond that, exams, as one third-year law student told me, are “not very similar to a workplace environment.” There still needs to be room for longer-form assessments that train research and drafting skills. In some courses, an overcorrection toward exams threatens that balance.
With that said, the traditional essay may be dead. Almost every tutor I spoke to had their fair share of AI horror stories, but also said there were just as many assignments where AI use slipped by before the University shifted its policy to allow it.
The University has tried to get students to use AI ‘well’, but none of its approaches seem to pass the pub test. There is a lot of information on its AI in Education site, but no one I asked has read it. Studiosity, a tutoring service that offers personalised feedback on writing, structure, and citations, was introduced last year to redirect students away from using AI to rewrite their work. As of 2025, it has been discontinued, presumably because not enough people used it.
Currently, the University’s position is to have students acknowledge AI like any other source. It’s bold to assume no one has done this, but I would feel silly telling my tutor a paragraph was developed with ChatGPT. The alternative to exams, therefore, is an increasingly bizarre mix of new assessment types where AI is allowed but supposed to be less helpful.
At their worst, the changes are lazy, like allowing AI to be used for online quizzes with predictable results. One tutor told me that either all students receive near-perfect scores, or the questions must be artificially tweaked to trick the bots. If supervised well, however, more quizzes can encourage incremental learning in ways essays don’t. Offering a couple of percent here and there is the easiest way to get people to work.
On the more creative side, posters, vlogs, and oral assessments where students sit with a tutor are increasingly common. Interviews are unsustainable for larger courses, but forcing unit coordinators to come up with new assessments may be the silver lining of AI. After two years of an arts degree, I have only learned how to write essays. Using a few other parts of my brain could only be beneficial. Not all of these experiments will work, but admirable staff are putting in the effort under a lot of managerial pressure.
When I asked how the University should correctly balance integrity and education, a second-year politics student said she wanted them to “require more evidence of process for written work.”
Take the third-year Shakespeare course. Instead of sitting a closed-book exam or turning in an essay after a week or two, each assessment is broken into small chunks designed to be partially completed in tutorials. Arguing with the University’s Cogniti robot about metaphors in Hamlet, which I did two weeks ago, felt satirical and, as one student told me, “distracts from the plays themselves,” but those more cringe elements are often optional.
I would not go as far as the Shakespeare lecturer did when he argued in a Sydney Morning Herald op-ed that “chatbots can revive the university essay” (he conceded in class it took him ten minutes to program). However, an assessment style that dangles AI in front of you while requiring you to demonstrate you can do the work yourself is a clever form of positive reinforcement. Theoretically, students can still use AI at every step of the process, but the incremental nature of the assessment lowers the incentive to generate something towards the end.
AI is inevitable, and the University is correct that attempts at a ban will fail. We are in a non-negotiable period of trial and error where new ideas are thrown at the wall to see what sticks. Hopefully the University won’t give up and simply shuffle the next generation of students into exam halls.