AI Is Thinking For Our Kids: Why Critical Thinking (and Curious Parents) Matter

Hallucination isn’t the most dangerous thing AI can do – it’s thinking for you before you’ve learned how to think for yourself. 

I’ve been making the case for critical thinking in my keynotes and Q&A sessions for several years now. Audiences are engaged, they nod along, and then they go back to their organisations and carry on doing exactly what they were doing before, because critical thinking feels abstract, difficult to measure, and somehow less urgent than learning the next AI tool.

The research is now making it impossible to look away.

Timothy Cook, an international educator and Psychology Today columnist whose work focuses on the psychological and developmental impacts of AI on human cognition, has spent the past year building an evidence‑based case that is, quite simply, one of the most important arguments in education today. His work sits at the intersection of cognitive neuroscience, developmental psychology, and pedagogy, and it reframes the AI‑in‑education debate in ways that should concern every leader, educator, and parent.

This piece draws on Cook’s research and other emerging evidence to argue three things: that critical thinking must be formally taught in schools and universities as a core competency; that the current pattern of cognitive outsourcing in early careers is creating a generation of professionals who look capable but cannot think independently; and that being Digitally Curious, my own framework for thriving in an AI‑shaped world, is now a survival skill, not a nice‑to‑have.

The Atrophy vs Foreclosure Distinction

The most important insight I’ve taken from Cook’s writing is a distinction he makes in his article “Adults Lose Skills to AI. Children Never Build Them.”

When an adult uses AI to outsource a task they already know how to perform, they are experiencing atrophy. The cognitive muscle weakens from disuse, but it exists. With effort, it can be rebuilt. An experienced consultant who lets AI draft their strategy decks is making a trade‑off between efficiency and sharpness. The capacity to think strategically remains; it just goes unexercised.

But when a student or early‑career professional uses AI to complete a task they have never actually learned to perform, something categorically different happens. There is no muscle to atrophy. The neural pathways for evaluating evidence, constructing arguments, and interrogating sources were never formed. Cook calls this cognitive foreclosure, and he argues it may be permanent.

This distinction matters because our entire societal conversation about AI and education tends to conflate these two phenomena. We talk about cognitive offloading as a single issue, applying adult‑framed solutions to a fundamentally different developmental problem. Telling a 17‑year‑old to “use AI responsibly” assumes they have the metacognitive baseline to know what responsible looks like. Auditing an AI’s output requires the very domain expertise the student is still supposed to be developing.

Cook cites “How AI Impacts Skill Formation” by Judy Hanwen Shen and Alex Tamkin, which illustrates this powerfully. Software developers (adults with existing programming expertise) who delegated coding tasks to AI produced working code but performed worse on conceptual understanding quizzes afterward. They had the output without the understanding. Now consider a student encountering programming for the first time, with no expertise to compare AI output against. The substitution becomes foreclosure.

The Brain on ChatGPT

The empirical picture is becoming increasingly difficult to ignore.

A 2025 MIT study titled “Your Brain on ChatGPT” tracked students over four months as they wrote essays using one of three approaches: unaided thinking, a search engine, or ChatGPT. ChatGPT users exhibited the weakest brain connectivity, with low executive control and attention, suggesting minimal cognitive effort was taking place. When asked to recall or quote their own work, 83% of AI‑assisted users could not do so accurately, compared with just 11% of those who wrote unaided.

Most concerning of all, even after students stopped using ChatGPT, their brain activity remained subdued. The researchers introduced the concept of cognitive debt: the long‑term cognitive decline that accumulates through habitual AI dependence.

A 2025 study by Michael Gerlich, examining 666 participants across age groups, found a significant negative correlation between frequent AI tool usage and critical thinking abilities. Participants over 46 showed higher critical thinking scores alongside lower AI reliance, while participants aged 17–25 showed the inverse. The most likely explanation, Cook argues, is not just preference but biology: older participants offloaded tasks they already knew how to perform; younger participants offloaded tasks they had never learned.

A 2026 Nature study extended this to higher education, identifying what researchers called a “reliance route”: higher perceived AI intelligence leads to focused immersion, which increases dependency, which in turn is associated with lower critical thinking. The attentional route (beneficial immersion in AI as a tool) and the reliance route (detrimental dependency) are distinct, and telling them apart requires exactly the kind of metacognitive skill that is rarely taught explicitly.

Cognitive Colonisation: When AI Shapes How You Think

Cook’s most unsettling argument is about what happens not to our outputs, but to our thinking itself.

In “AI Is Quietly Colonising How You Think”, he introduces the concept of internalised homogenisation: the gradual convergence of a person’s own reasoning with the statistical patterns of the AI models they use, without their conscious awareness. When an AI mirrors your rough ideas back to you in polished, structured form, you feel understood, you approve the output, you call it yours. But the sequencing of ideas, the emphasis, the way the argument resolves: none of those micro‑decisions were yours. They were the AI’s.

Over months and years of daily use, your own sense of “what sounds right” is gradually trained by these interactions. Put 50 consultants who all use AI for strategy in a room and give them the same business problem. Two decades ago you would get something approaching 50 different approaches. Today, with everyone processing through the same underlying models, you get convergence. Individual quality may rise; collective variance collapses.
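That variance‑collapse claim is easy to make concrete with a toy simulation. The sketch below is an illustration of the mechanism, not evidence: it assumes, purely for demonstration, that the quality of an approach can be reduced to a single number and that heavy AI use pulls each consultant’s output towards the model’s centre of gravity.

```python
# Toy simulation of internalised homogenisation: 50 consultants each start with
# their own approach, modelled (unrealistically but usefully) as one number.
# Heavy use of a shared model pulls every approach toward the model's centre.
import numpy as np

rng = np.random.default_rng(0)

own_ideas = rng.normal(loc=5.0, scale=2.0, size=50)  # 50 diverse starting points
model_centre = 7.0   # the model's "house style": polished, but singular
pull = 0.8           # how strongly daily AI use anchors each person to it

with_ai = (1 - pull) * own_ideas + pull * model_centre

print(f"mean quality: {own_ideas.mean():.2f} -> {with_ai.mean():.2f}")
print(f"variance:     {own_ideas.var():.2f} -> {with_ai.var():.2f}")
# Mean rises toward 7.0, while variance shrinks by (1 - pull)**2, i.e. 25x:
# everyone's work gets a little better and a lot more alike.
```

Each individual’s score rises; the spread across the room shrinks by a factor of (1 − pull)². That is homogenisation in miniature: better on average, harder to tell apart.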

For children, this is even more profound. An adult using AI mostly just ends up sounding generic. For a child who has never formed independent reasoning patterns, cognitive colonisation does not compete with their own thinking; it becomes their thinking. As Cook writes in a Connected Classroom essay on “Cognitive Colonisation”, while adults face the atrophy of skills they already possess, children face cognitive foreclosure: critical thinking capacities that simply never develop in a frictionless world.

His observation about writing is worth sitting with. The friction of composition forced you to confront gaps in your reasoning: struggling with a sentence that will not come together, realising a paragraph does not follow, hitting a transition you cannot earn. The writing was hard because the thinking was incomplete. That friction is now often gone.

Friction, in other words, is not a bug. It is the mechanism of learning.

The Inverted Taxonomy

Education researchers speak of Bloom’s Taxonomy, the hierarchy of cognitive tasks from basic remembering and understanding up through analysis, evaluation, and creation at the top. The assumption has always been that education builds students upward through this hierarchy, developing higher‑order thinking over time.

Cook’s reading of Anthropic’s 2025 education report reveals something deeply troubling. Students have inverted this taxonomy. Nearly 40% use AI for creation and 30% for analysis, the highest‑order tasks. Only 2% use it for simple factual recall, the lowest‑order task. STEM fields show particularly high rates of this higher‑order cognitive outsourcing.

Students are delegating the hardest cognitive work to AI and retaining only the most basic prompting tasks for themselves. This inversion goes directly against how expertise develops. When the highest‑order thinking is consistently outsourced, the cognitive muscles needed for those tasks atrophy, or, for students encountering a discipline for the first time, never develop at all.

Cook is also withering about institutional hypocrisy. According to Anthropic’s data, 48.9% of professors automate their grading with AI while penalising students for identical behaviour. The message this sends to students is clear: AI use is a professional necessity when we do it, and academic dishonesty when you do it. This does not teach integrity. It teaches concealment.

The Apprenticeship Void

Nowhere is the consequence of cognitive outsourcing more urgent than in the early years of a career.

Historically, junior employees did the unglamorous foundational work: summarising, drafting, researching, analysing first‑pass data. In exchange, they received mentorship, correction, and the gradual development of professional judgment. The work was inefficient. It was also educational.

AI is automating precisely these tasks. Research cited in Forbes shows that US job listings for entry‑level positions fell substantially between January 2023 and 2025, particularly in AI‑exposed roles. Work by labour economists suggests a clear decline in employment among 22–25‑year‑olds in sectors like research, entry‑level coding, and design. CNBC reporting paints the human story: the junior analyst or junior banker no longer gets the learning work, because those roles have come to be seen as dispensable.

The GFoundry analysis of this phenomenon introduces the concept of the Apprenticeship Void. If a junior associate never struggles through the basics, because an AI delivers the output, they fail to build the mental models required for senior‑level decision‑making. They can produce a document that looks indistinguishable from one written by a 20‑year veteran, but they cannot defend it when the context shifts, cannot debug it when it is wrong, and cannot adapt it when reality fails to cooperate with the model.

The result is what GFoundry calls the Empty Suit phenomenon: employees whose perceived competence is high but whose actual competence is low, dependent on AI not just for speed but for the underlying reasoning itself. The mentorship dynamic erodes at the same time. Senior employees who once corrected junior drafts, transferring tacit knowledge in the process, now simply regenerate the draft with AI themselves, bypassing the feedback loop entirely.

The World Economic Forum has identified analytical and creative thinking as the most important skills for workers in the coming decade, and these are precisely the skills most at risk.

What the Education Sector Must Do

The path forward is not to ban AI. That argument has already been lost, and it misses the point. AI is here. The question is whether we teach people to think with it, through it, and against it, rather than letting it stand in as a passive substitute for cognition.

Cook’s framework of dialogic AI engagement is instructive. Rather than treating AI as a deposit machine (prompt in, answer out), students should be taught to treat every AI interaction as a sustained interrogation. This means asking what the AI missed, what assumptions it made, what a dissenting expert would say, and where the confidence of the language exceeds the strength of the evidence. This is not anti‑AI pedagogy. It is expert‑level AI use, and it requires precisely the critical thinking skills that need to be taught.
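To make that concrete, here is a minimal sketch of what dialogic engagement might look like wired into a script, so the interrogation becomes a habit rather than an afterthought. It assumes the OpenAI Python client; the model name and the interrogate helper are illustrative choices of mine, not part of Cook’s framework, and the challenge prompts are the four questions listed above.

```python
# A minimal sketch of dialogic AI engagement: rather than accepting the first
# answer, the same conversation is pushed through a fixed set of challenges.
# Assumes the OpenAI Python client (pip install openai) with an API key in the
# environment; the model name is illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The interrogation prompts, taken directly from the dialogic framing above.
CHALLENGES = [
    "What did your answer miss?",
    "What assumptions did you make that I should verify?",
    "What would a dissenting expert say?",
    "Where does the confidence of your language exceed the strength of the evidence?",
]

def interrogate(question: str, model: str = "gpt-4o-mini") -> list[str]:
    """Ask a question, then force a round of structured self-interrogation.

    Returns every reply so the reader can weigh the first answer against its
    own caveats; the judgment step deliberately stays with the human.
    """
    messages = [{"role": "user", "content": question}]
    replies: list[str] = []
    for challenge in [None, *CHALLENGES]:
        if challenge is not None:
            messages.append({"role": "user", "content": challenge})
        response = client.chat.completions.create(model=model, messages=messages)
        answer = response.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        replies.append(answer)
    return replies
```

The loop is deliberately simple. The point is not automation; it is that the follow‑up questions become a non‑negotiable part of every interaction, and a human still has to read the transcript and decide which caveats actually matter.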

Several concrete interventions are supported by the research.

Explicit critical thinking curricula, at every level

A 2026 review found that students in more than half of observed classrooms were operating only at the two lowest cognitive levels: remembering and understanding. In an era when AI handles those levels trivially, education that does not systematically develop analysis, evaluation, and synthesis is not preparing students for anything. Critical thinking should not be a single elective module. It should be the thread running through every discipline, from primary school onwards.

Families as the first critical thinking classroom

It is easy to point at schools and universities and say “just add critical thinking to the curriculum”, but that lets the rest of us off the hook too easily. Long before a child writes their first essay with or without AI, they are learning how to think from the questions they hear at home. In Digitally Curious, I describe how my father constantly asked me questions when I was young, not to catch me out, but to help me see patterns, weigh options, and justify my answers. That early habit of being gently challenged is exactly what today’s students need when a fluent AI answer is always just a prompt away. Families cannot teach the technicalities of every new tool, but they can normalise curiosity, scepticism, and follow‑up questions. Those habits are what allow young people to use AI without surrendering their own judgment.

AI‑free zones as diagnostic environments

Designating specific assessments and exercises to be completed without AI tools serves a dual purpose. It gives students the deliberate practice that strengthens analytical neural pathways, and it allows educators to diagnose actual cognitive capability rather than AI‑augmented capability. The goal is not deprivation. It is calibration: helping students understand what they can genuinely do, and building from there.

Process‑based assessment over product‑based assessment

Current assessment systems evaluate final outputs, which makes them structurally blind to AI assistance. A student who spends hours interrogating competing evidence and a student who crafts an effective prompt may produce work that looks identical, but the cognitive development involved is entirely different. Assessment must shift to evaluate the journey: showing reasoning, defending conclusions in real time, identifying the limitations of an argument.

Modelling transparency

Educators who use AI in their own practice (for grading, for content creation, for feedback) should make their process visible. Show the questions asked, the outputs rejected, the human expertise applied. The goal is not to establish that AI is bad, but to demonstrate that expert use requires expertise. The Deloitte case is instructive here: a major consulting firm submitted a government report full of fabricated citations because professionals treated AI as an answer machine rather than a thinking partner. That is not an AI failure. It is a critical thinking failure.

Restructuring early careers around intentional cognitive friction

Organisations must resist the temptation to deploy AI across all junior tasks in the name of efficiency. The productivity gains are real. The capability debt they create is equally real. L&D programmes should reintroduce structured environments where foundational work is done by hand, not out of Luddism, but to ensure that the human infrastructure of judgment and expertise is genuinely being built.

Being Digitally Curious in an Age of Cognitive Outsourcing

The concept that sits at the heart of my own work – being Digitally Curious – has never been more relevant, or more under threat.

Digital curiosity is about asking better questions. It is about staying comfortable with the discomfort of not knowing. It is about leaning into the complexity of a problem rather than outsourcing your way past it. And it is, I would argue, the exact cognitive disposition that all of this research suggests we are systematically failing to cultivate.

Curiosity cannot be automated. ASCD identifies it as a core skill of the future that fuels learning, creativity, problem-solving, and adaptability. Forbes notes that the curiosity mindset enables people to integrate AI thoughtfully rather than passively accepting whatever it generates. In a world flooded with AI‑generated content that is structurally sound, professionally formatted, and cognitively empty, the premium is on people who can generate genuinely original insight — people who have put in the cognitive work, who have struggled with the problem, who have the intellectual infrastructure to know when something is wrong.

Cook makes an observation that has stayed with me: the flaws are the fingerprint. The overreach is resistance. The parts of your thinking that refuse to conform to the training distribution are exactly the parts worth reading.

The productive friction of genuinely hard thinking (the messy first draft, the argument that keeps collapsing and has to be rebuilt, the moment when you realise you do not actually understand something you thought you did) is not inefficiency. It is the mechanism by which expertise is built. It is how curiosity deepens into knowledge, and knowledge deepens into judgment.

We are at risk of engineering that friction out of education, out of early careers, and out of professional life in the name of productivity. The cost of doing so will not show up in this quarter’s efficiency metrics. It will show up a decade from now, when we discover that the cohort of professionals now in mid‑career cannot think their way through a problem that falls outside what the model was trained on.

Being Digitally Curious is the antidote. Not technophobia. Not AI prohibition. But the insistence that we remain active participants in the cognitive work, that we ask rather than accept, interrogate rather than approve, and build the mental infrastructure that allows us to use AI as a lever rather than a crutch.

The question is not whether your students or employees are using AI. They are.

The real question is whether they are building the thinking that makes them worth listening to when the AI is wrong.

Andrew Grill, Global AI Keynote Speaker, Leading Futurist, International Bestselling Author, Brand Ambassador
Andrew Grill is the AI expert who speaks your business language and helps executives navigate AI without getting lost in the complexity.