I teach elementary school. For most of the past year I've written about enterprise AI risk through an adult lens: analysts who can't verify their own tools, consultants who submit fabricated citations. Those arguments still hold. What I've been slower to say clearly, and what I'm saying now, is that none of them transfer intact when the user is a child.
Every preventable cognitive harm in the last century has followed the same pattern. The damage shows up in children first. Evidence of harm accumulates while the governance response arrives years later, after the harm is irreversible. Children are the canaries in the coal mine. They were with lead. They were with social media. And they are, right now, with AI.
Table of Contents
- A Pattern of Harm
- A New Attack Vector
- Foreseeable Risk Is Already Present
- Where Does Institutional Responsibility Lie?
- Responding to AI's Cognitive Impact
- Are Developing Minds Worth Protecting?
A Pattern of Harm
If you authorize AI tools in schools, you need to understand this argument. Not because the law requires it yet (though the legal trajectory is unmistakable), but because the moral responsibility exists regardless of whether the law has caught up.
Assessing the cognitive impact of AI tools on children will become standard practice. Will you have acted before the negative consequences are fully realized?
For most of the twentieth century, children were exposed to lead through paint, gasoline and drinking water. The research on lead's effect on developing brains was not hidden. It was accumulating steadily. What lagged was institutional action. The evidence was considered, debated, relitigated and deferred while a generation of children absorbed a neurodevelopmental toxin that would shape their cognitive development permanently. When the regulatory response finally arrived, it arrived after the harm.
Social media followed the same arc on a compressed timeline. By the time the US Surgeon General issued an advisory on youth mental health and social media use in 2023, the evidence linking heavy use to rising rates of anxiety and depression in adolescents had been accumulating for years. Children were becoming addicted to their phones. The research was available. The response was late.
The pattern is consistent. The earliest and most severe consequences of a harmful deployment appear in children. The adult population surfaces the same damage later, more muted, harder to attribute. By the time adults are the story, the developmental window for the children who absorbed the first wave has closed.
A New Attack Vector
AI in education is the next instance of this pattern. The difference this time is the attack vector. Lead damaged the physical structures of the developing brain. Social media altered the emotional and attentional environment children developed within. AI does something neither precedent could: it inserts itself into the cognitive process itself, at the exact developmental window when that process is being built.
The research on AI and adult cognition is where the pattern of harm first becomes visible. When adults use AI in their writing, their writing begins to reflect the model's positions. More concerning, the users' own stated beliefs shift to match the model's framing. The influence outlives the session. Their beliefs change, and they do not notice.
Users rarely push back on the model's output. They select among the options it offers and accept the one that feels close enough. A user who relies constantly on that shortcut forgets the exact details of how things should be, and without those details the mind can no longer tell the difference between "close enough" and "correct."
These findings describe adults. Adults had a baseline before AI arrived. Their convergence toward the model's distribution is a drift from a known position, which makes the change easier to observe. Remove the tool, and the position is still there, partially compromised, probably recoverable.
A child has no baseline. A child using an AI tutor is not drifting from a position. A child is assembling the position, in real time, from what the model offers. If the adult research describes influence, the child's situation is formation. The homogenization is not a drift from their standard. It is their standard.
Foreseeable Risk Is Already Present
This is the canary. When a thirteen-year-old uses an AI tutor to write an argumentative essay, the tool is not extending a skill she possesses. The model is producing the structure of the argument, the sequencing of ideas, the emphasis, the resolution. She is selecting among its options and calling the result her reasoning.
The adult studies describe what happens to a reasoner whose sense of "what sounds right" drifts toward the model. The child is building that sense in the first place, from the model, without a pre-existing standard to anchor her. This is a developmental issue, not a learning one.
Foreseeability is how institutions become responsible for harms they could have prevented. The standard is simple. It does not require an institution to have identified the exact injury in advance. It requires that the category of risk was foreseeable given the information available. That threshold has been crossed.
Every element of the foreseeability test is present in AI in education. The research describing cognitive homogenization in adults has been published in peer-reviewed journals. The research on cognitive development in adolescents has been established for decades. The combination of the two, which is to say the foreseeable consequence of deploying cognitive-influence technology on populations whose cognition is still forming, is not a speculative risk. It is a predictable application of two established bodies of evidence to each other.
Where Does Institutional Responsibility Lie?
The school leader reading this shouldn’t need to imagine what unaddressed foreseeable harm in education looks like. It looks like lead in school drinking water in districts where the pipes were known to be a risk and the testing had not been done. It looks like one-to-one device programs deployed into middle schools while the research on adolescent social media use was already in circulation. In both cases, the institutions that eventually faced liability did not cause the harm by deploying a tool. They faced liability because the harm was foreseeable and they had not assessed it.
This is the position every school district now occupies with respect to AI.
The research exists. The regulatory signals exist. The vendor data on student usage patterns confirms that the research applies to the actual deployment. What does not yet exist, in most districts, is a governance process that evaluates cognitive impact before authorization. That gap is where foreseeable harm becomes institutional responsibility.
Foreseeability, in other words, has already done its work. The threshold has been crossed. What remains is the institutional response.
Responding to AI's Cognitive Impact
Every major category of impact assessment in modern regulatory history followed the same path. Environmental review, data protection, child rights: each began as documented harm, became legislation, became judicially enforceable and eventually became a standard of care whose absence was itself evidence of negligence.
Cognitive impact assessment for AI in education is on the same path. The early stages have begun. The mandates will follow.
The institutional question is where on that path you choose to act. The version of this argument I could make is the legal version. I could describe the cases that are moving. I could describe the verdicts that have come down when institutions were found to have deployed technology without assessing its effect on students. I could describe the specific moment at which a school board's failure to act becomes the thing that makes the board liable rather than the vendor.
But this essay is not the legal version. It’s the one that has to come before the legal version, because the law is the last institution to recognize harm, not the first.
The people responsible for what shows up on children's screens in a school are, in order: the school board, the superintendent, the curriculum director, the IT director, the principal and the teacher. At no point in that chain is there a person whose role description includes evaluating whether an AI tool supports or prevents the cognitive development it claims to serve. Data privacy has a reviewer. Accessibility has a reviewer. Cost has a reviewer. Cognitive impact has no one. The institution carries the risk, and no individual inside it has the expertise, the authority and the mandate to act on it.
Are Developing Minds Worth Protecting?
Every preventable cognitive harm in the last century has left behind the same question, asked after the fact, by the same kinds of institutions. What did we know, and when did we know it?
For lead, the honest answer is that institutions knew for decades and deferred. For social media, the answer is that institutions were still deferring when the advisory came down. For AI in education, the answer is being written right now, in the procurement decisions being made this spring, by people who do not yet know that they are writing it.
The generation of students currently learning with AI will not ask you whether their tools were legal. They will ask you whether the people responsible for their education treated their developing minds as worth protecting before the law required it. That is the question every school leader should be prepared to answer.
The deployment gap is not a technical problem. It is a moral one that the law will eventually formalize. The institutions that move before the formalization will not need to explain themselves later. The ones that wait will.