Every millennial’s father has, at one point, disparaged autotune. This is a fact of life. Articles decrying or defending the ubiquitous audio processor have been common in music publications for years—but very few people seek to understand the tool, or its place in modern music.
The term itself has become genericised: a byword for any sort of vocal pitch correction and, by extension, for lazy production. Autotune is often seen as a crutch that contemporary vocalists lean on instead of ‘actually being able to sing’. But this is a narrow conception that belies how the tool has revolutionised modern music.
‘Auto-Tune’ is actually the brand name of a piece of pitch-correction software that was first thrust into the spotlight a year after its 1997 launch, with the releases of Eiffel 65’s ‘Blue (Da Ba Dee)’ and Cher’s ‘Believe’ (two absolute bangers that also cop unfair flak).
At typical settings, autotune gently shifts the pitch of a voice toward the nearest semitone, smoothing out minor variations. When it’s used well—or rather, when used by a talented vocalist—you can hardly tell. In fact, the studio versions of most vocal performances are autotuned, and that’s precisely the point of the software: not to be a substitute for talent, but to amplify it.
But what Cher and Eiffel 65 realised early on is that if Auto-Tune is turned up to its most aggressive settings, it makes voices sound highly synthetic, by hyper-correcting vocal input. Ever since, artists have deliberately used the software to acquire that famous mechanical sound.
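For the technically curious, the core idea of pitch quantisation is simple enough to sketch. The snippet below is illustrative Python, not Auto-Tune’s actual signal processing; the function names and the `strength` parameter are my own shorthand. It snaps a detected frequency to the nearest equal-tempered semitone, with a strength setting that covers both the gentle, transparent correction described above and the hard, zero-transition setting behind the robotic Cher and T-Pain sound.

```python
import math

A4 = 440.0  # reference pitch in Hz (concert A)

def nearest_semitone(freq_hz: float) -> float:
    """Snap a frequency to the nearest equal-tempered semitone."""
    # Convert the frequency to a (fractional) MIDI note number.
    note = 69 + 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to Hz.
    return A4 * 2 ** ((round(note) - 69) / 12)

def correct(freq_hz: float, strength: float = 1.0) -> float:
    """Pull a sung pitch toward the nearest semitone.

    strength=1.0 is the 'hard' setting that produces the robotic
    effect; lower values nudge the pitch gently, which is the
    transparent use most listeners never notice.
    """
    target = nearest_semitone(freq_hz)
    # Interpolate in log-frequency space so the correction is
    # musically even rather than linear in Hz.
    return freq_hz * (target / freq_hz) ** strength
```

A slightly flat note at 450 Hz, for instance, gets pulled towards A4 at 440 Hz: fully with `strength=1.0`, partway with lower values. Real pitch correctors also track pitch over time and control how *fast* the correction is applied, which is where much of the character comes from.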
Autotune is different to other vocal effects such as talk boxes (used in 2Pac’s ‘California Love’ or Bon Jovi’s ‘Livin’ on a Prayer’) and vocoders (think Stevie Wonder’s ‘Race Babbling’ or Daft Punk’s ‘Harder Better Faster Stronger’) in that it corrects the pitch of the vocal rather than resynthesising its sound. If you sing badly into a vocoder or talk box, you’ll sound like a tone-deaf robot. If you sing badly and autotune it, it won’t magically make you a better singer – but you’ll be on key.
This sort of harsh Auto-Tuning was pervasive in the early 00s and contributed to autotune’s negative reputation today. Too many artists were earnestly and egregiously using full-blown autotune. It became an expectation within the industry that artists of all genres would use the software, much as hit singles a decade later made dubstep ‘drops’ all but mandatory.
It is within this context of software over-saturation that a new use of autotune was born. Hip-hop artists realised that by singing in a particular way—varying your cadence to deliberately manipulate the software—you could produce a highly choral yet alien sound. Artists like Lil Wayne, T-Pain and Kanye West pioneered the use of autotune as an instrument rather than a vocal aid. Rolling Stone music critic Jody Rosen wrote that “autotune doesn’t just sharpen flat notes: It’s a painterly device for enhancing vocal expressiveness, and upping the pathos.” Kanye’s ‘808s and Heartbreak’ is perhaps the best example of this use of autotune: through it, Kanye makes his vocals drip with emotion yet feel uncannily inhuman, the perfect palette for the album’s themes of sorrow, dissociation and drug use. Hip-hop has gifted the music world another conception of autotune, one that challenges its role as a ‘musical safety net’.
Autotune is today best understood as a digital tool every bit as valuable as a physical instrument. The guitar is so valuable an instrument because it reaps equal rewards for amateurs and auteurs alike. Keith Richards famously quipped that the guitar takes only “five strings, three notes, two fingers and one asshole”… and autotune is the same. Whether subtle, blatant or deliberately outlandish, it is a tool with endless potential.
Autotune and vocal synthesis are generally derided by music critics as cheapening music as an art form. But the technology is misunderstood. It is time to demystify autotune and challenge the popular conception of what it takes from (or adds to) music.