
How the University Was Hacked

Chris Howell assesses cyber security and the ORSEE hack.

Image: The 'hidden web' allows communication between users which is not traceable to an IP address.

It’s generally accepted within the information security community that security through obscurity is, in many cases, akin to having no real security at all. What this means is that systems should be secure because of their design, not because attackers don’t know how they work. It’s a principle exemplified by the story of the German WW2 Enigma code, which was thought to be unbreakable until it was cracked by Benedict Cumberbatch and Keira Knightley. Closer to home, a reliance on security through obscurity seems to be partially responsible for the February 2015 breach of the University’s Online Recruitment System for Economic Experiments (ORSEE), which disclosed the personal information of 5684 students and an unknown number of staff to an as yet unidentified attacker. The incident has led to concerning revelations about the University’s information security policies, serving as yet another reminder that large organisations are failing to do all that they can to secure our data.

The ORSEE breach occurred because the system contained a fundamental security flaw: an SQL Injection vulnerability. An examination of the ORSEE source code reveals that the bug existed unpatched in the system from its first main release in 2004 until August 2014; for a little over a decade, ORSEE was trivially easy to break into. The moment the School of Economics deployed the system in 2008, it was risking the disclosure of users’ private information.
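The mechanics of SQL injection are simple to illustrate. The sketch below (Python with an in-memory SQLite database; the table, column names and queries are invented for illustration, not ORSEE’s actual schema) shows how a query built by pasting user input into the SQL lets an attacker dump every row, while a parameterised query treats the same input as harmless data:

```python
import sqlite3

# A hypothetical participants table standing in for ORSEE's real schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE participants (email TEXT, phone TEXT)")
db.executemany("INSERT INTO participants VALUES (?, ?)",
               [("alice@uni.edu", "0400111222"), ("bob@uni.edu", "0400333444")])

def lookup_unsafe(email):
    # Vulnerable: the user-supplied value is spliced directly into the SQL.
    query = "SELECT email, phone FROM participants WHERE email = '%s'" % email
    return db.execute(query).fetchall()

def lookup_safe(email):
    # Parameterised: the database driver keeps the value separate from the SQL.
    return db.execute(
        "SELECT email, phone FROM participants WHERE email = ?", (email,)
    ).fetchall()

payload = "' OR '1'='1"        # a classic injection string
print(lookup_unsafe(payload))  # the WHERE clause becomes always-true: every row leaks
print(lookup_safe(payload))    # no rows: the payload is treated as a literal email
```

The injected string closes the quoted value and appends `OR '1'='1'`, turning the condition into one that matches every record — the same class of trick that can expose an entire user table through a single vulnerable form field.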

Honi has obtained a copy of a report detailing the findings of the University’s internal investigation into the breach. The report indicates that ORSEE was initially deployed by the Faculty of Arts and Social Sciences without a security audit. In 2013, five years after its deployment, the University’s ICT group identified the vulnerability as part of a security review and developed a patch for it, seven months prior to the official patch distributed by the ORSEE developers. Shockingly, the University didn’t bother to deploy its security fix “as further development work was being done”, seemingly waiting until a planned upgrade to use the Unikey authentication system was complete. For over a year, the University knew about the vulnerability, but relied on security through obscurity and, utterly unsurprisingly, it was as if they had relied on no security at all.

SQL Injection vulnerabilities are a common, well-known source of insecurity. Why, then, did nobody catch such an elementary bug? The answer lies in the nature of ORSEE’s development. Because ORSEE is an Open Source software project, anyone is permitted to use the software and to access its source code. This access is a double-edged sword: it makes it easy for volunteers to contribute features and improve the system, while also giving malicious actors the access required to identify vulnerabilities. The Open Source Software (OSS) model is often considered attractive from a security standpoint, as a system that allows anyone and everyone to attempt to break it is considered more trustworthy than something Closed Source, inspected only by the manufacturer. In the case of ORSEE, however, the project’s niche application meant that it escaped the level of scrutiny afforded to larger, more widely deployed OSS projects. While the developers of ORSEE clearly bear some responsibility for writing and distributing insecure code, they are explicit in making no guarantees about the performance of their software. Ultimately, the University’s failure to adequately evaluate the security of its software is the most troubling aspect of the saga.

The University’s failure sees it join the ranks of Sony and Walmart: large, wealthy organisations that have compromised the data of their users through inadequate security practices. As always, it is the people who used the system, and trusted the University to manage it, who will bear the consequences of the breach. The student data disclosed varied depending on what information each student had entered. At most, the breach revealed a student’s first and last names, gender, email address and phone number. For staff members, the breach also disclosed password hashes, a source of additional concern. Password hashes cryptographically obscure the actual text of a user’s password, but weak passwords can often still be cracked. Staff who used the same weak password for ORSEE as they did for other services (like their Unikey) face the very real possibility that their accounts could be compromised as a result of the breach. While students do not face this level of risk, and information such as their name and email is generally available from places like Facebook and Twitter, the disclosure of their private information is still a matter of concern. Data dumps from similar breaches usually make their way online, to be stored and cross-referenced with data from other attacks, building files that can later be used for identity theft. Meanwhile, “doxing” is a common practice in internet disputes that involves revealing information like a person’s address or phone number and inciting mass harassment directed along those channels.
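Why a leaked hash of a weak password offers so little protection can be sketched in a few lines. The example below is illustrative only: the password, the wordlist, and the choice of an unsalted SHA-256 hash are all assumptions for the sake of the sketch, not details of ORSEE’s actual scheme. A dictionary attack simply hashes each candidate password and compares it against the leaked hash:

```python
import hashlib

def hash_password(password):
    # A bare, unsalted hash -- a stand-in for whatever scheme ORSEE used.
    return hashlib.sha256(password.encode()).hexdigest()

# What the attacker actually obtains from the breach: the hash, not the password.
leaked_hash = hash_password("password1")

# A toy wordlist; real attacks use lists with millions of common passwords.
wordlist = ["letmein", "qwerty", "password1", "hunter2"]

def crack(leaked, candidates):
    # Hash each candidate and compare it against the leaked value.
    for candidate in candidates:
        if hash_password(candidate) == leaked:
            return candidate
    return None

print(crack(leaked_hash, wordlist))  # a common password falls immediately
```

A strong, unique password would not appear in any wordlist and so survives this attack, which is exactly why reusing a weak password across ORSEE and other services (like a Unikey account) is the real danger here.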

Ultimately, users and organisations often fail to appreciate the potential impact data leaks can have, and this underestimation is reflected in the way we as a society combat cybercrime. The ORSEE breach was reported to the NSW Police, but whether or not they successfully identify the perpetrator (Honi has previously revealed that a 16-year-old hacker has claimed responsibility for the attack), the breach will still have occurred. The challenge we face is in ensuring that those who keep our data keep it safely. Currently, the only legislative requirement the University breached in failing to deploy its ORSEE patch as soon as it was ready was a section of the Privacy Act that mandates information be secured “by taking such security safeguards as are reasonable in the circumstances”. It may be that the Privacy Commissioner needs to offer firmer guidance, mandating what specific technologies and practices are actually “reasonable”. Without some form of intervention, organisations are likely to carry on down the path of least resistance, even if that path is one that sees your information shared beyond your control.