“Articulate” comes from the Latin “articulus”, meaning small connecting part. To articulate something is to divide it into small joints that together compose a whole. Over time, this took on a metaphorical meaning: to break an idea into parts for ease of communication. Now, “articulate” has a positive value judgement imbued in it. Being articulate means being able to use language well.
I acquired language, I expect, in the way that every child does. Babies start babbling at around six months, testing the sounds that they can make with the parts of their nascent mouths. They don’t know it, of course, but the parts of their mouths that they are getting used to — their larynx, their soft palate, the alveolar ridge and the gums from which their teeth will start to grow — are called articulators. At one and a half years old, babies typically start to utter words with meaning. These words are often short, simple patterns of consonants and vowels. By virtue of a baby’s still-developing articulators, the first consonant sounds they are able to utter are usually bilabial, where the point of articulation is the baby’s lips: [m], [p], [b]. It is no coincidence that the most common first word for babies is “mama”. “Mama” is designed to be said by newborn mouths.
The exact origin of language is unclear. Linguist Noam Chomsky proposed the idea of a Language Acquisition Device (LAD): an organ possessed by humans and no other animal, which makes the development of language possible. It explains, he argues, how humans are able to acquire language so easily, and why no other species has developed something equivalent to human language. Animals do, of course, have communication systems, and these systems can be quite complex. There is evidence that dolphins have systems of individual reference, and that Campbell’s monkeys can develop protosyntax. Human language, however, is vastly more complex and advanced. It is self-referential — we can talk about prior conversations. It can describe the past, the future, and other abstract ideas. It provides the means to use metaphor. We are, so far, alone in this capacity. Chomsky’s LAD is widely contested. Academics like JF Stein suggest instead that humans were able to evolve language thanks to several physical changes: our left hemispheres developed, our larynxes dropped, we began walking upright, and together these changes gave us greater control over our vocalisations. Additionally, with the advent of society came a relaxation of the pressures of natural selection. As we formed communities, built houses, cared for each other, learned to farm, it was no longer true that everything we did was driven by survival. In these communities, the language that we now know — the gossipy, poetic, metaphor-rich way we speak — was born.
It seems obvious to say that language has always been important to me. It is a tool that enables us to ask for things, to apologise when we have hurt our friends, and to learn more about the world around us. From the moment we start putting concepts to sounds or signs, language is a part of what it means to be a human. I have always loved language, though, in a particularly nerdy way. I prided myself on being able to spell words correctly, and still remember bitterly losing a class spelling contest because I was tricked with a homophone. I read voraciously. I wrote my own stories, too.
Words, to me, were comforting, and not just for the images and emotions they could conjure up. Words were little keys to ideas, to arguments, and the more I had in my hands, the more I was able to learn, to say, to communicate. I collected words like a bowerbird collects shiny things, keen to adorn my homework and speeches and dinner-table conversations with them. I did this for myself, for my own insatiable appetite for learning. Just like a bowerbird, though, I knew my adornments made me stand out. Teachers told me I was smart. Adults praised me for being eloquent. To be articulate, it seemed, was to be esteemed.
The first evidence of written language comes from Mesopotamia. Tablets inscribed with Ancient Sumerian, over five thousand years old, are some of the earliest records of writing we have. Before that, there is evidence of proto-writing — depictions of objects or people, painted or etched in public places to tell stories. “True writing”, though, is the correspondence of symbols to sounds. Writing is thought to have spread from Sumer, adopted as a useful tool for record-keeping, forming contracts, and recording the law. For much of history, though, written language was exclusively the domain of the elite — those who had the capital to barter contracts or the prestige to have their words etched into stone. Before the invention of printing, it was laborious to replicate written words. The vast majority of people had no reason to encounter orthography; they learned through spoken words and the stained-glass Biblical scenes in church windows. The printing press helped democratise writing, but even then, literacy remained exclusive to those who could afford schooling: both the cost of tuition and the time taken away from work. Today, despite ever-increasing efforts to make literacy more accessible, there are still over 770 million illiterate adults in the world. Most of them are women.
Not having access to orthography, though, does not mean that people cannot speak a language, nor that they cannot speak it well. Of the more than 7000 languages spoken around the world, just over 4000 have a written component. Writing can be useful, especially when communicating with large groups across long stretches of time or geographic distance. It is not an essential component of language. In some ways, it can even slow the evolution of language: meticulously kept records make any deviation from past forms conspicuous. Writing makes things like grammar, punctuation, and spelling more central to language expression. It hinges “good” communication upon having the fortune to be educated. On top of this, dominant languages are more likely to be taught, further tying what we understand “articulate” to mean to membership of a social or ethnic majority. This, of course, is arbitrary. Some of the world’s greatest poets may never even have picked up a pen.
Being known as a grammar nerd meant that, even in primary school, my friends would ask me for help with their English homework. I did it gladly — part of it was fun, and another part of it was a chance to prove how well I knew words, how strictly I could follow the rules of the English language. Every comma I added, split infinitive I repaired, hanging preposition I cushioned in noun phrases was proof that I was articulate. Articulate was the last few marks on my English assignments, the judge’s feedback from the debates I won, the glowing words printed on my report card. To be articulate was to be intelligent. To be worth listening to.
Even if humans do not have a literal LAD, Chomsky’s vision of language as an organ has some merit. Spoken and signed language lives and grows, shaped by the mouths and hands of those who use it. With time and new speakers, words change their meanings, grammar slips and slides, vowel sounds dance around our mouths. Linguistic change is organic and important. It reflects new concepts, as in the coining of words like “unfriend”; new attitudes, as pejorative terms for certain people or concepts ameliorate over time; and it embeds metaphorical meaning into words that, when we use them now, we don’t realise ever used to be literal (like “articulate”). Divergence from a written or formalised standard of language is not a sign of incompetence but a natural byproduct of language as a living, growing thing.
The speakers most likely to innovate a language are those most likely to be excluded from the bodies that regulate it. Young people, women, racial and ethnic minorities, and members of subcultures with their own jargon are consistently at the fore of linguistic innovation. This is true for a number of reasons. In some cases, in-group language is a necessity to create a community, especially a covert one. Participants in Ballroom culture in the US — a queer subculture dominated by Black and Latinx speakers — produced coded language that could be used to identify other members of the same group without outing oneself. Many of the words they innovated are used widely today: “throwing shade”, “realness”, “yas”. The same is true of Polari, a dialect created by queer people in the UK — the name itself comes from “parlare”, the Italian word meaning “to speak”. Linguistic innovation brings safety and community.
Beyond this, innovation is an act of reclamation and ownership. Dialects like African American English and Aboriginal English were born of colonial contact between English speakers and the people they enslaved or stole land from. Colonisers stripped the people they subjugated of their languages, separating speech communities and forcing them to speak English. To take the coloniser’s tongue, imbuing it with Yoruba or Wiradjuri grammar and subverting the arbitrary conventions it inherits from generations of elitist academia, is to make it your own. This reclamation is evident in AAE’s zero copula, habitual aspect, and double negation. Contact between speech communities produces innovation, especially when that innovation fortifies the identity of those communities.
Linguistic innovation occurs when language is spoken freely, without judgement, in groups not concerned with prestige. The frontier of linguistic innovation has historically been women’s meeting places, the kitchens and laundries that they gathered in to chat. The act of idle conversation, unconcerned with proper syntax or pronunciation, in groups which span wide networks of people, breeds linguistic change. The same is true of young people on playgrounds. These conversations reset norms around what is and isn’t understandable, swirl together outside influences, weed out redundancies or complications. If language is an organ, linguistic innovation — the very kind championed by those most excluded from academia — is the heartbeat that keeps its blood pumping.
One of the factors that sets human language apart from animal communication is its capacity to be self-reflexive. We can use language to talk about language. In the last five years, I have done an awful lot of talking about language, both in my coursework and in the articles that I write. I began university clutching the descriptor “articulate” to my chest, holding it tightly for fear of losing the value it had brought me. Learning more about language has caused my grip to slacken. The label I had prized was a blunt and arbitrary tool, used to diminish the opinions of those who didn’t speak the way I did. Perhaps there was no pride in knowing what an Oxford comma was. Of course there wasn’t. The value of an idea should not be contingent on the words and punctuation used to express it.
Beyond that, though, I was confronted with another jarring revelation: I’m actually not that articulate. In written word, I make grammatical mistakes all the time. I don’t know the difference between “which” and “that”. I begin sentences with conjunctions and end them with prepositions. My spoken language is even worse. Listening to recordings of myself interviewing others or making an argument in a debate, I recoil at my half-sentences, my mismatched subject and verb agreement, the filler words I plug silences with. I’m not concise at all; I overuse specific turns of phrase like “that is to say”. I am no better a language user than anyone else.
And that’s perfectly fine. In fact, it’s kind of beautiful.
When a child learns language, they are not unlocking some part of their brain which has vocabulary and syntax ready to go. Everything they learn, they work out from the sounds they hear. This inductive process extends beyond initial language acquisition. The lexical ornaments my young self adorned her writing with were plucked from the books I devoured, the conversations I had, the movies I adored. When I debate, I steal turns of phrase from speakers who I admire for their eloquence. These words and expressions enter our mental lexicons both consciously and not. The tools with which we express ourselves are fashioned from years of hearing others’ self-expression, from learning and loving and making things anew. I would like to think that, with thought and revision, I write things coherently. When I speak aloud, though, my bumbling mess of phrases is nothing but a string of all the words and syntaxes I have heard said to me.
Before it gained the metaphoric sense of eloquence, to be articulate was to be composed of a sum of small parts. The way we speak is, fundamentally, a sum of small parts. The dropped larynx of a Neanderthal eons ago. The first “mama” from a baby’s lips. A fragment of an engraved tablet. Quips traded around a communal oven. A defiantly absent copula. A phrase with a covert meaning drenched in both love and fear. A slip of my tongue as I make a point, echoing the friends I admire. To speak is to be human. To be human is to be articulate.