AI: the unequal amplifier
Knowledge, skills and the widening agency gap
Discussions of AI tend to follow two dominant narratives. First, the utopian, transformative story in which AI is the great leveller: a tool that democratises intelligence, gives everyone access to expertise and raises the potential to improve living standards across the globe. On the other hand, we have the dystopian threat storyline, with warnings of mass displacement, institutional breakdown and existential risk. The tone swings between idealistic and apocalyptic, sometimes within the same article. The reality will play out somewhere in between, and I am sure (hope) there will be plenty of good to counter the negatives.
The stats are not particularly encouraging for many. This Fortune article is pretty sobering - 1.2 million UK graduate applications for 17,000 graduate-level jobs. While the dystopian narrative of AI replacing human work is often sensationalised, these figures suggest that AI is already acting as a disruptor in entry-level recruitment, and the Center for Global Development suggests global inequality may widen. If employers can get higher output from fewer, more capable individuals using AI, demand for entry-level labour plummets further. The signal then becomes not just qualifications but the ability to use these tools better than everyone else. That has clear implications for graduate markets already under pressure, and therefore for schools.
AI isn’t new any more and has blended almost seamlessly into normal life for many. Since the mainstream arrival of ChatGPT at the end of 2022, people have had access to tools like this for nearly four years. It is now more surprising to hear that someone isn’t using it in some capacity.
We know it is disruptive and, with that in mind, a more convincing take is that AI is not the flattening or equalising force some had hoped it would be. I see it better understood as an amplifier, particularly of leverage. And leverage, in economic terms, rarely benefits everyone equitably.
In economics, human capital is the accumulated knowledge, habits and skills that make people productive, which in turn determines their value to employers through their marginal revenue product. Human capital builds slowly but compounds over time. Education and training raise it, and governments invest in it to improve productivity and, in theory, benefit the wider economy. Governments therefore look to schools to deliver a curriculum that builds that capital for the future. The knowledge, habits and expectations they establish determine whether students can make effective use of tools like AI, or whether they remain dependent on them.
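The economics here can be sketched with the standard textbook relationship between human capital and a worker's value to employers. This is a simplification I am adding for illustration, not something drawn from the article itself:

```latex
% Marginal revenue product of labour: the extra revenue one additional
% unit of labour generates for the employer.
MRP_L = MP_L \times MR
% Human capital (knowledge, habits, skills) raises the marginal product
% of labour, MP_L, and therefore raises MRP_L. In a competitive labour
% market the wage tends towards this value: w = MRP_L.
```

On this view, a tool that multiplies the marginal product of already-skilled workers raises their marginal revenue product disproportionately, which is the amplification argument restated in economic terms.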
It is unlikely that AI will fully replace that human capital any time soon. What is becoming clear, however, is that people interact with it unevenly: it enhances productivity for some far more than for others.
Rather than democratisation, what we are seeing is a rise in the returns to prior knowledge and investment, with social mobility coming under further pressure.
AI and learning: the great amplifier
I see clear parallels between AI and how memory and schemas work. The science of learning tells us that the more prior knowledge you hold, the easier and faster it becomes to learn more, because new information has something to attach to. This is already starting to play out very clearly with students where AI is concerned.
A student with a strong internal mental model of a subject can use AI to probe their thinking more deeply. They can ask it to critique their reasoning, generate counterarguments, surface blind spots and understand what comes back. When they return to their own work, they can use it to dismantle an essay and expose weak logic. Crucially, they can sense when it is wrong, because they have something to compare it against, and their thinking is strengthened by the speed and depth that AI gives them.
Another student can use the same AI tool, even starting from the same prompt, to generate something that reads well. It may even sound coherent. But without the underlying schema, they cannot properly interrogate what is produced. Sloppy definitions are accepted, thin examples pass and they cannot iterate or refine the output because, to them, it already appears to do the job. The work gets done. A good grade might even follow, particularly if it is handwritten and submitted cleanly the next day, reinforcing the behaviour. But the thinking has barely moved on. Same tool, very different experience.
In the short term, the outcomes may even look similar. One student could produce an essay independently. The other arrives at something comparable through AI. On paper, they may sit at the same grade, but the knowledge gap widens, as only one of them can replicate this independently.
This begins to look a lot like increasing returns to knowledge. The more you know, the more effectively you can use the tool, and the more you get back from it. Those starting further behind do not just gain less; they struggle to convert access into understanding at all.
Prior knowledge and agency are the dividing line here.
The learning misconception - knowledge is still essential
People, teachers included, still talk about the importance of skills and how, in a world shaped by AI, students need creativity, communication and critical thinking more than ever. With information instantly accessible, the argument goes that knowing things matters less than being able to use them well. That is right, to an extent. But what is often missed is that all of these are underpinned by knowledge. You cannot just learn to “think critically” in the abstract.
We now know that generic skills are not transferable in the way we once assumed. You do not learn to think critically in general; critical thinking is rooted in domain-specific knowledge. A student may be able to think critically about a text they understand well, but that does not mean they can do the same in a different subject or context where their knowledge is weaker. Knowledge is the raw material of thought. It is not possible to think critically about something you know nothing about.
Analysis and evaluation, two of the most valued higher-order skills, rest on knowledge. You cannot evaluate a scientific claim without understanding the rules of scientific evidence. To analyse, you need the components. Imagine trying to analyse a historical event without an understanding of context, causes, consequences or chronology. Scientists and historians think differently, and that thinking is shaped by the knowledge and methods of their disciplines. Critical thinking in one domain does not easily transfer to another.
Daniel Willingham’s line that “memory is the residue of thought” makes this point more directly. If students do not have knowledge stored in long-term memory, there is very little for them to think with in the first place.
In that sense, knowledge is not in opposition to skills; it is what makes them possible. In the context of AI, the same principle applies. Like most IT systems, it is GIGO - garbage in, garbage out. The idea that “you can just look it up” and therefore do not need to know anything has not aged well in light of what we now understand about how learning works, particularly the evidence that novices struggle without sufficient prior knowledge.
Agency and judgement
A similar pattern is emerging with adults. It is now very easy to outsource the difficult first draft of a message, to ask AI to frame a decision or soften tone. There is nothing inherently wrong with that. I do it regularly.
But there is a difference between using AI as a thinking partner and using it as a substitute for judgement. If the habit becomes letting the machine produce what you think, cognitive ability begins to weaken. Over time, the muscle of judgement diminishes.
Used differently, it can do the opposite. You can use AI to tear apart an argument, to surface objections not yet considered, or to highlight where you might sound self-righteous or unnecessarily combative. In that sense, it introduces friction, and that friction tends to improve the quality of thinking.
This looks very much like skill-biased technological change: technology that disproportionately rewards those who already possess certain forms of capital, or the understanding and access needed to use new tools well, potentially leading to structural unemployment and job displacement. We are seeing AI increase the returns to being well educated, rather than reduce the gap as many had hoped.
There is increasing recognition that education systems will need to adapt to an AI-driven economy. Demand for technical skills will not translate neatly into more jobs, particularly as many of those tasks are being automated by the same technology. The advantage will sit with those who can think with these tools, not just use them. Those who cannot risk displacement and continual retraining.
That has to be taken seriously. We know that students with disciplined home environments, strong vocabularies and established habits of reading benefit most from our current education system. Are those same students, the ones with libraries in their homes, now about to move even further ahead through AI?
Will those who already have the habits and knowledge to engage deeply extract disproportionate value? Will AI widen the agency gap rather than narrow it?
The upstream problem
When AI first emerged, the initial response in schools seemed to default to the plagiarism conversation. Detection, containment and handwritten essays were the main focus. There is still an important place for all of that, but these are starting to feel like surface concerns. The real issues sit further upstream.
As discussed, knowledge matters, and probably more than ever. Without a strong mental framework within the subject domains, you cannot properly critique what you are given or judge whether something is accurate. You will not spot hallucinations if you do not understand the terrain.
Information asymmetry grows as a result. The person who understands the domain can see where AI overreaches. The person who does not may experience it as authority and stop there. The gap is already visible. If we are not seeing it, it may be because we are inside it.
Deepfakes and distortion
There is another layer to this which schools cannot ignore, and it connects directly to student agency.
Deepfakes are no longer theoretical. Convincing audio and video can be fabricated with relative ease. You do not need particularly high levels of technical expertise. A teacher’s voice can be cloned or a student’s face mapped onto something explicit. Fabricated material can circulate long before anyone has time to verify it. The safeguarding implications are obvious, and most schools are far from equipped to deal with them.
Guidance is beginning to respond, with awareness of digital harm no longer limited to social media, screen time or cyberbullying. AI-generated content is now firmly within that conversation. But the pace of change remains the issue. By the time guidance is updated, the tools have already moved on.
Jonathan Haidt has accelerated a broader shift in how we think about phones, attention and childhood. Schools and parents are increasingly questioning constant connectivity and the effects of distraction. Going device-free is no longer radical for a school, and as a parent, choosing not to give my kids a phone has been met with growing support over the last year or two. A few years ago, they were often the only ones without one, which left us feeling like outliers.
AI arrives into that already fragile space. It is no longer just about distraction, but distortion.
Agency, in that context, is not simply the ability to act. It is the ability to discern, to verify, to pause and to consider the consequences before deciding whether to believe, engage with or pass something on. This is where laws and government guidance can have the greatest impact, in much the same way they have with smoking, alcohol and other demerit goods, reducing the need for individual judgement in the most obvious cases while learning has time to take place.
Understanding the system
The arrival of AI has called into question the need to learn to code. Not long ago, every child was being told they needed to learn it. Coding was framed as the new literacy. That position shifted quickly. With no-code tools and increasingly abstracted interfaces, many began to argue that coding would become unnecessary.
Balance has returned to this debate. Not everyone needs to code fluently. But understanding how systems are structured, how models generate outputs and where they fail is increasingly important. Not just to produce software, but to avoid becoming passive in relation to these tools.
What matters is less syntax and more how you think. The ability to break complex problems into smaller steps, to diagnose where something has gone wrong and to iterate towards a solution. Those habits extend well beyond programming. They shape how people approach work, decisions and uncertainty more broadly.
There is also a shift towards what might be described as a hybrid approach. The advantage will sit with those who can combine human judgement with the speed of AI, using it to accelerate output while retaining control over logic, structure and purpose.
Underlying all of this is a more basic requirement. Technology will continue to change, and quickly. The people who benefit will be those who can adapt, learn new tools and operate without complete certainty. In other words, those who can learn well and exercise agency.
It is the difference between using a tool and being shaped by it. I see it as servant versus master.
Four years on, AI is already everywhere, or at least it feels that way, particularly in writing. The frictionless tone, the consistent cadence and the polished structure that bear little resemblance to how that person has ever written before. Occasionally, a stray "here's another draft…" is left at the bottom of an email by mistake. Either people do not realise how visible it is, or they no longer care. That, in itself, says something about authorship, and about whether the goal is thinking or simply producing.
I had a passing thought that AI is a bit like alcohol. It does not change who you are so much as expose it. For some, curiosity sharpens. For others, laziness becomes easier. Those with discipline find it compounds, while others settle quickly into dependence. What was already there is simply amplified.
AI is not levelling the field. It is widening the gap between those who can think with it and those who cannot.
What happens next will determine how far that gap grows.


