If you’re a first- or second-year student (or, like me, an embarrassingly old undergraduate still trudging through compulsory units), the phrase ‘early feedback task’ (EFT) may ring a bell. It might have been a quiz you completed on the bus home from uni one day, or the subject of a nagging and more than slightly spammy Canvas announcement from a unit coordinator. Chances are it was automatically marked. If you didn’t do very well, or didn’t bother to sit it at all, you will have received an email linking you to academic and support services.
EFTs are part of USyd’s Support for Students Policy, which Australian universities are required to maintain under the Higher Education Support Amendment Bill 2023, passed in response to the interim report of the Australian Universities Accord. The tasks, designed to identify and support “students at risk of not successfully completing their units of study”, were integrated into 1000- and 2000-level units at the beginning of 2025. The university plans to roll the tasks out across all undergraduate and postgraduate units by 2027.
The idea is sound in principle: a low-weight task early in the semester allows students to assess their understanding and progress through course material without the stress and pressure to perform that comes with more substantial assessments. However, Honi Soit spoke to several students who were underwhelmed by the quality and rigour of the early feedback tasks they had been assigned.
While students understood that the broad purpose of the tasks was to let them assess their own ability to complete a given unit of study, they doubted the capacity of early feedback tasks, often delivered as multiple-choice quizzes that can be retaken an unlimited number of times, to facilitate this.
“None of them have actually been helpful for that [purpose]”, one first year student told Honi Soit. “You can do [the task] a million times and get 100 per cent.”
Others have reported that the tasks don’t test substantive course material. Honi Soit has seen early feedback tasks that quiz students on a course’s attendance requirements, where to find the unit of study outline on university websites, or how to use the library.
Professor Adam Bridgeman, Pro-Vice-Chancellor (Educational Innovation), agrees that students prefer early feedback tasks centred on course content: “Some [unit coordinators in designing early feedback tasks] did opt for making sure the students are orientated around the first assignment and maybe the Canvas site. I think the evidence we’ve got is that those are less successful than the ones with the course content.”
However, he maintains that the diversity of tasks was necessary to ensure university compliance with the changing legislative requirements: “This legislation came as a bit of a shock in November, and we had to get something up and running for February.”
He also notes that flexibility allows unit coordinators to tailor the tasks to their specific discipline or unit.
“Ultimately, we don’t like to give coordinators too many rules, because the best engagement from our side, when we’re working with coordinators, is if they have the agency to pick what suits their unit.”
As to the tendency of students to retake the tasks until they get perfect scores, Bridgeman argues this might actually be a good thing: “This is ultimately a feedback task. We want students to engage with the feedback and understand whether they’re going to struggle or not. Retaking a quiz multiple times could show real engagement and wanting to learn from the feedback”.
There is also a risk that the tasks’ purpose of identifying at-risk students may be undermined if students use artificial intelligence (AI) to complete them. Under the university’s “two-lane approach” to assessments in the age of generative AI, early feedback tasks would fall into the second ‘lane’ of assessments, which allow for “human-AI collaboration”.
“I think it’s, again, assessment for learning, assessment for feedback”, says Bridgeman. “If a student were to do that, they might get the five marks, but they haven’t got the feedback. And the evidence that we have is that the students who read the feedback, act on the feedback and go to support services [if they do not perform satisfactorily] improve their marks by 20 per cent.”
While these figures are impressive, time will tell whether the use of generative AI undermines genuine engagement with early feedback tasks, and, in turn, the process that connects students with academic support services.
SRC Education Officer Luke Mešterović is critical of the capacity of both early feedback tasks and the Support for Students Policy more broadly to ameliorate the difficulties students face in their studies.
“Asking if students are capable of navigating a Canvas page or comprehending a unit’s attendance requirements is not an adequate measure of whether or not they are ready for ‘academic success’ in the unit. The policy is a band-aid solution to fundamental problems that lie at the heart of our neoliberal university sector”.
One does wonder if early feedback tasks are being asked to do too much. There are numerous forces working to diminish the quality of education at Australian universities, from routine job cuts across the higher education sector to cost-of-living pressures resulting in students having to work more and study less. The ability of a single assessment to test for, identify, and resolve an at-risk student’s learning difficulties in this environment is questionable.
The roll-out of early feedback tasks is still in its early stages, and it’s clear the university is taking student feedback into account as it designs these assessments going forward. There is also student enthusiasm for this model of assessment, with many telling Honi Soit that the tasks could be “really helpful” if designed well. Early data on student marks and learning hub attendance also suggest the tasks are identifying and reaching at least some at-risk students.
Nonetheless, student disquiet about the efficacy of these assessments, and the risk of AI undermining meaningful engagement with the feedback provided, cannot be ignored. While early feedback tasks may themselves be assessed automatically, we may have to wait a little while longer for the full impact of this policy to become apparent.