AI is generally not intelligent. If it is scoring well on human intelligence tests, then our examination, definition and cultivation of intelligence must change. Otherwise, the exponential evolution of computing may outpace humanity’s intellectual growth, leaving us unfit to remain on our cushioned thrones atop the food chain.
Humanity (yes, you) can be incredibly intelligent. And if cultivating intelligence is a primary objective guiding our systems of education, the embryonic stage of (at least) a symbiotic relationship between human intelligence and AI begins with a question: What is university for? Seriously, think about it. Should it really be for nothing more than memorising facts for exams and walking out with a degree?
I flipped a coin and it landed tails (no), so unfortunately for both of us we can’t take the heads’ way out. Instead, we must uncover how our schools and universities can evolve into the birthplaces of novelty rather than pipelines for processors.
Defining Intelligence Beyond the Barbie House
An intelligent agent surprises you by extrapolating creations for novel problems based on established knowledge. Also, as with many traits reserved as ‘humans-only’, the language around intelligence is anthropocentric, so humbling ourselves is half the challenge here. Intelligence is not a toolkit of skills; it’s what Google Senior Staff Engineer François Chollet calls “skill-acquisition efficiency”: the capacity needed to build the next toolkit.
AI is skilful at interpolation, whilst struggling with extrapolation (excuse me while I unpack crucial and boring definitions around this). Mostly, the AI models dazzling students are Large Language Models (LLMs) — don’t be fooled by their wand-and-sparkles icons; these are statistical powerhouses, not magic. LLMs learn statistical relationships between words from their enormous training datasets, then spit out the most probable text to follow whatever you type. Artificial Neural Networks (ANNs) — AI’s computerised copy of our brains’ neuronal webs — are the machinery underneath, stacking layer upon layer of this pattern-matching. For instance, a simple autocorrect model would show the three words most probably linked to the letters in “Creatn” — “Create”, “Creatine” and “Creating”. Deeper ANNs make for image recognition models, which can identify an input image (e.g. Peter Griffin) by analysing graphic features (eyes, ears, nose, and suggestively-shaped chin, layer by layer) to determine the highest-probability match. That’s all AI boils down to; and though it may be a dwarfing data-cruncher next to your two-minute-input group assignment member, it cannot — must not — stand in for human innovation.
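To make “most probable associated text” concrete, here is a toy prefix-completion sketch — a few lines of frequency counting, nothing like a real language model, and skipping the fuzzy matching that lets actual autocorrect recover from the missing letter in “Creatn”. The corpus and its word counts are invented purely for illustration.

```python
from collections import Counter

# A tiny stand-in for a model's training data (purely illustrative).
corpus = (
    "create creating create creatine create creating "
    "creature create creating creatine"
).split()

counts = Counter(corpus)

def complete(prefix, k=3):
    """Return the k most frequent corpus words starting with the prefix.

    This is pure interpolation: every answer is a word the model
    has already seen in its training data, ranked by raw frequency.
    """
    candidates = [(word, n) for word, n in counts.items()
                  if word.startswith(prefix)]
    return [word for word, _ in sorted(candidates, key=lambda wn: -wn[1])[:k]]

print(complete("creat"))  # ['create', 'creating', 'creatine']
```

Swap the word counts for billions of learned parameters over whole sentences and you have, in caricature, the statistical engine behind the sparkles icon.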
To look further through the facade, let’s scale down to ‘Small Language Models’ (‘Slam’ sounds cooler, too): develop an AI model to play chess, and forget about its LLM-ANN superpowers — it will get smoked in Battleships. Or train it to do a puzzle, and it will just place the LEGO pieces side by side when tasked with building a child’s (or adult’s, not judging) Barbie house. The AI acquires all the bias without the intelligence, and prior training just means more clambering out of its self-dug rabbit-holes before it can unearth new, unrelated skills.
However, you can also train it on custom data to endlessly generate proposals, find collaborations, revamp marketing… and take the human-sown seeds of an idea to a full-grown hard maple in seconds. AI is an undeniably incredible interpolation machine.
But read those last two lines again. What do you notice? I like wasting your time, so I won’t tell you what’s crucial here until later on. The point remains, though: while LLMs can infer links within the enmeshing “convex hull” (Chollet) defined by training data, they are currently unable to transcend it; incapable of dreaming; inadequate at extrapolation.
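Chollet’s “convex hull” point can be caricatured in a few lines. The model below is a deliberately crude stand-in (a nearest-neighbour lookup, not an LLM), but it makes the failure mode visible: every answer it gives is drawn from inside its training data, so a query far outside that data gets an absurd answer.

```python
# A 1-nearest-neighbour "model": the purest possible interpolator.
# Its training data secretly follows y = 2x, but the model only
# ever returns outputs it has literally seen before.
train = {0.0: 0.0, 1.0: 2.0, 2.0: 4.0, 3.0: 6.0}

def predict(x):
    """Answer with the output of the closest training example."""
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

print(predict(1.4))    # 2.0  -> inside the training range: plausible
print(predict(100.0))  # 6.0  -> far outside it (true answer: 200.0)
```

Real LLMs interpolate in vastly richer ways than this, of course — but the boundary is the same in kind: the hull of the training data.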
I’m not denying being humbled, like any student, by AI’s ability to outpace my 86-billion-neuron brain in producing extraordinarily efficient writing. But this misses the larger point; in fact, harnessing this interpolative ability is key to driving a new revolution in intellect and education — a synergy of human intelligence and AI.
The Parrot Problem vs. Standing on Stars
Let’s dig ourselves out of this technical rabbit-hole and look skywards, at the big picture. I have hopefully convinced you that AI, by design, cannot extrapolate. By our definition of intelligence, we thus should not seek competition with its undeniable skills. But these skills have come into question recently, with Gemini and ChatGPT infamously struggling to answer questions including “How many R’s are there in ‘strawberry’?” and “If President Trump enters a room does his brain (generous) enter also?”.
Does the model even understand its output, or are we faced with an age where a computerised parrot supersedes human intellect?
Now, we shall see that humanity can extrapolate — but may not be able to for much longer.
To truly extrapolate intelligently is to generalise, not memorise. Our renowned artists, scientists, politicians and many others have taken pre-existing knowledge and completely transgressed its purblind paradigms. They absorb the most conservative, status-quo ideals, internalising and truly understanding them — only to break them down mercilessly, shattering them beyond recognition, leaving us questioning how we ever believed that was the fundamental truth.
It isn’t the technology itself, but this revolving door of innovators throughout history that has propelled the beautifully evolving process that is human innovation — though crucially, we’ve carried technology as a tool, from one movement to the next. Each and every one of us should not only strive to stand on these stars, but propel even further out; each of us must be taught how to think novelly. We must take the responsibility for evolution in any field, no matter how small.
Education: From Repetition to Revolution
The superficial writer may end there. The scientist in me must continue down a different rabbit-hole, however; how can we impact, encourage, and even cause such innovation? The answer lies in the lines I left uncovered before: the ideas come from humans. But further back, it lies in education.
Innovation and novelty have always driven intellectual human evolution — and interpolation is the antithesis of such creation. If we continue to simply sit slumped at our desks, having our professors prompt us with 20-year-old questions as we output (average, in my case) answers, not only will the creative muscle within our minds weaken, but our efforts will be misplaced towards running an interpolation race with AI … one that I think you can guess by now has an ominously obvious winner.
What I propose is the projection of our learning curves onto an entirely new dimension — shifting our attention away from the planes of pure memorisation and towards fields of full mastery, proper understanding and ultimately, intelligence. The problem is that pure memorisation can sound like intelligence. But when an LLM like ChatGPT is presented with an extremely difficult problem that scholars once turned over for decades, it is merely spitting out the modified regurgitation (disgusting, I know) of data it has been fed; data derived from human innovation, the extrapolation of previous theories.
Under our current definitions, this should worry us: our jobs, our self-worth, and our special place in the intellectual food chain are certainly at risk. Yet if we toss and turn at thoughts of substitution by superior machines, it is clearly high time to redefine the ends which justify the harsh-enough means of education.
This is to say: human intelligence, innovative extrapolation, truly valuable work — all of these lie in our ability to create, not compute. Yet the vast majority of us cannot actualise our creations without learning how — and however hard some deny it, whether in art, engineering, coding or finance, we all strive to be creators. And so our teachers must be the original prompters, creators and innovators.
Eventually, we must leave jobs of replicating what has been done before to the super-interpolators and memorisers — Artificial Unintelligence — in favour of our own creative intelligence and novelty. Teachers need to provide new problems which students can solve with AI at their disposal, so that solutions can be interpolated from the creative ideas of students.
This will not be easy. I appreciate that teachers were students once, and their teachers weren’t themselves taught to create, as they had teachers who weren’t taught to create … you see the problem here. But once we make a clean break and commit to teaching novelty — not even the novel ideas of others, but how to think novelly — students and the society we are pathfinders for will forever be unrestricted by the data with which we are trained.