Editorial

Agentic AI in Education: Ready to Move to the Next Level?

By Nick Jackson
Generative AI grabbed the headlines. Agentic AI could change the stakes entirely.

There is a reasonable argument that education has only just begun to grapple with generative AI. Schools have updated policies, some universities have redesigned assessments and some teachers have figured out what to do with AI tools that, for many, still feel unfamiliar. Yet while that work continues, something considerably more significant is already taking shape.

Agentic AI is not simply a better version of what we have now. It represents a different kind of system altogether, with different ways of thinking and operating: one that does not wait to be prompted but instead pursues goals, makes decisions across multiple steps and takes action on a user's behalf. Where generative AI responds, agentic AI acts. That distinction matters enormously, and education has barely begun to reckon with it.


What Agentic AI Actually Means

The term can sound abstract, so it is worth being direct about what it involves in practice. Agentic AI systems are capable of breaking down complex tasks, using tools and external services, adapting to new information and operating with a degree of autonomy that goes well beyond answering a question or generating a piece of text.
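For readers who find even that description abstract, the sketch below illustrates the pattern in miniature: a goal, a plan broken into steps, tool calls, and adaptation to what each step returns. Everything in it (the tools, the goal, the loop itself) is invented for illustration and is not any vendor's actual API.

```python
# A minimal sketch of an "agentic" loop, using stand-in tools.
# Nothing here is a real product interface; it only illustrates the
# pattern: goal -> plan -> tool calls -> adapt -> act again.

def search_files(topic: str) -> list[str]:
    """Stand-in tool: pretend to locate documents relevant to a topic."""
    return [f"notes_on_{topic}.txt"]

def summarize(path: str) -> str:
    """Stand-in tool: pretend to condense one document."""
    return f"summary of {path}"

TOOLS = {"search_files": search_files, "summarize": summarize}

def run_agent(goal: str) -> list[str]:
    """Pursue a goal across multiple steps without per-step prompting."""
    # Step 1: the agent decides it needs source material for the goal.
    found = TOOLS["search_files"](goal)
    # Step 2: it adapts to whatever was found and acts on each item,
    # with no human initiating the individual steps.
    return [TOOLS["summarize"](doc) for doc in found]
```

The point of the sketch is the shape, not the content: the user supplies an outcome, and the system chains its own tool use to reach it.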

Products that demonstrate this are already available and in use. OpenAI's Codex, Google's tools such as Gemini CLI and Antigravity, and Claude Code by Anthropic give AI direct access to a user's codebase and file system, allowing it to read, write, execute and debug across complex workflows autonomously.

Claude Cowork, released more recently and now available to all paid subscribers, extends that same agentic architecture to non-technical users, allowing someone to point the system at a folder of files, describe what they need done and let it plan and execute the work from start to finish. Sorting documents, compiling research, generating reports, processing data from multiple sources, scheduling recurring tasks — these are not things Cowork helps a user do. They are things Cowork does, with relative simplicity and reliability.

What is significant here is not just the capability but the speed at which agentic AI arrived. Claude Cowork was reportedly built in roughly a week and a half, largely using Claude Code itself. That kind of recursive development loop, AI used to build AI tools, points to a pace of change that education institutions or systems are not currently equipped to match (not to mention an entrepreneurial mindset that is typically lacking in education). But this also points to something else: the barrier to building capable, custom agentic tools has dropped dramatically. Such a lowering of barriers has direct implications for how education institutions might choose to engage with the technology, which is explored further below.

Agentic AI Operates Across Departments

It is tempting to think about agentic AI primarily through the lens of teaching and learning. But an education institution is not just a classroom. It is an organization with administrative functions, communications, finance, human resources, enrollment, facilities management and compliance obligations. Agentic AI is already being positioned to operate across all of these areas, not just the ones that involve students and teachers directly.

Consider what that looks like outside the classroom.

  • In enrollments, an agentic system could handle inquiries, process applications, cross-reference eligibility criteria, request missing documentation and update records, all without a staff member initiating each step.
  • In HR, agents could manage onboarding workflows, monitor compliance training and draft communications.
  • In finance, they could process invoices, reconcile accounts and flag anomalies.

Tools like Cowork are already being adopted in enterprise settings across exactly these functions, in marketing, legal and finance teams, which means the pathway into institutional education contexts is not a distant prospect. It is forming now. And some institutions are already playing in this space.

Then there is teaching and learning itself, where the implications are equally substantial. An agentic system in an educational context does not just help a student draft an essay. It might research the topic, organize sources, identify gaps in the argument and submit a finished version to a learning platform, all without being asked to do each of those things individually. A system monitoring student progress could adjust the resources it surfaces, flag concerns to a teacher and update records accordingly, all operating in the background between human interactions.

The breadth of that picture, operational and academic, administrative and pedagogical, is what makes agentic AI genuinely different from what has come before.

Related Article: Students Speak Out: AI Is Changing School, and No One's in Charge

Build, Buy or Both?

Up to this point, the conversation about AI tools in education has largely been framed in the tried and tested manner of edtech, as a procurement question: which platforms should institutions adopt, which vendors should they trust and what do contracts need to say? Agentic AI complicates that framing in an important way, because, for the first time, it is genuinely realistic for education institutions to build their own tools rather than buy them.

Claude Code, in particular, makes this more accessible than it has ever been. A school or university with even modest technical capacity could use it to develop bespoke agentic workflows tailored precisely to institutional needs, a tool that manages a specific enrollment process the way that institution actually runs it, or monitors student engagement data in ways that align with how a particular faculty operates. And some are doing just that right now. Tools like these do not need to compromise on fit because they are designed by an external vendor for a generic market. They can also be developed and iterated quickly, without lengthy procurement cycles or vendor dependencies.

The cost implications are worth considering too. Many institutions are paying significant licensing fees for platforms that deliver a fraction of what they actually need, or that require expensive customization to function as intended. An agentic system built internally, drawing on a capable foundation model through an API, can, in some cases, replace multiple purchased tools at substantially lower ongoing cost. This is not hypothetical. It is already happening in other sectors where organizations are building targeted internal agents rather than purchasing off-the-shelf solutions that only partially fit.

For schools and universities, the appeal is obvious. Greater control over data, tools designed around actual workflows, no dependency on a vendor's roadmap and the ability to modify or retire a tool when needs change. But the appeal comes with a substantial set of obligations that are different in character from those that apply when deploying someone else's product, and in some respects more demanding.

The Edtech Market Question

Whether institutions build, buy or pursue some combination of both, the edtech market is facing genuine disruption. But to understand why that disruption cuts deeper in education than in many other sectors, it is worth considering how schools and universities have historically approached technology investment, and why those patterns are now a liability as much as a habit.

Most schools, and particularly those without large central IT functions, have relied on a relatively small ecosystem of trusted vendors. Relationships with established edtech providers are often longstanding, built on familiarity, peer recommendation and the kind of ongoing support that a small in-house technology team depends on. Procurement decisions in education are rarely purely rational assessments of capability. They are shaped by data security, privacy and sovereignty laws, by trust, by what neighboring institutions are doing, by what a regional authority recommends and by the practical reality that a tool someone already knows how to use is less disruptive than one that requires retraining.

Agentic AI does not slot neatly into that ecosystem. The providers best positioned to deliver capable agentic tools at scale are not necessarily the established edtech vendors schools have relationships with. Many are large AI companies, cloud infrastructure providers and, in some cases, no one at all, because the tool is built in-house.

For schools accustomed to buying from people they know, within procurement frameworks designed for a different era of software, this represents a meaningful shift in how decisions need to be made and who needs to be involved in making them.

A New Edtech Development Lifecycle

The knock-on effects of that shift are worth naming directly. If schools begin replacing existing platforms with agentic alternatives, whether purchased from new providers or built internally, the vendors currently holding those contracts lose revenue and, in some cases, may not survive. That might matter because many of those vendors also provide the professional development, implementation support and ongoing training that some schools rely on when adopting new technology. Disrupting the vendor ecosystem may not just change what tools are available. It could also change the support structures that sit around them, and what this will mean for some institutions is hard to judge.

There is also the question of what gets lost in the transition, though the answer is less straightforward than it might appear. Some established edtech tools were built with educational contexts specifically in mind, considering safeguarding requirements, age-appropriate design and accessibility standards. But it would be equally fair to say that many are poorly designed, inflexible and ill-suited to the actual workflow realities of the teachers and administrators who use them daily.

Edtech has a long history of tools that were sold to decision-makers rather than built for practitioners, creating friction rather than reducing it, and locking institutions into ways of working that serve the platform more than the people. For those institutions, the arrival of agentic tools that can be shaped around how a school actually operates, rather than how a vendor assumed it would, may represent an improvement rather than a loss. The more honest question is not what gets lost but what was genuinely worth keeping in the first place.

Some of the tools schools and universities have adopted in the last two years are built around generative AI in its current form: assistants, content generators, feedback tools. These have some genuine uses. But agentic systems, whether purchased from large providers or built in-house, are likely to make a significant portion of today's edtech landscape look like a transitional phase rather than an endpoint. For the developers of those tools, the challenge is whether they can evolve fast enough to remain relevant. The fact that a capable agentic tool can be built from scratch in under two weeks should give any edtech provider pause. Smaller, bespoke developers, often where genuine classroom innovation happens, are unlikely to match that pace or that resourcing, and some will find their market position undercut not by a competitor but by the institutions they used to sell to.

Data and Privacy in a Different League

Generative AI already raised legitimate concerns about data. What information are students sharing? Where does it go? Who can access it? These questions have not been fully resolved, and agentic AI makes them considerably more complex, regardless of whether an institution is using an off-the-shelf tool or one it built itself.


For purchased tools, the concerns are already visible. Anthropic's own documentation notes that Cowork activity is not captured in audit logs, that conversation history is stored locally on the user's device and that the product is not recommended for regulated workloads. That is a significant caveat for institutions operating in contexts where the privacy of children and young people is subject to legal protection.

For institutions building their own agentic tools, the picture is different but not simpler. When you build a tool that accesses student records, learning histories, staff data and operational systems, you are also taking on full responsibility for how that data is handled. There is no vendor to hold accountable for a breach or a misuse.

Data governance, retention policies, access controls and consent frameworks all need to be designed in from the start rather than inherited from a platform's existing structure. This is work that requires expertise many institutions do not currently have in-house, and it needs to happen before a tool is deployed, not after. Providers of agentic tools such as Anthropic, OpenAI, Google and Microsoft offer reassurances, including education accounts that do not share data, but at this stage many in education remain cautious about fully trusting what they have been told, or sold.

And the long-standing trust dimension really does matter here. Schools and universities have developed data sharing agreements, privacy notices and consent processes around the vendors they already work with. Those arrangements took time to build and are understood, however imperfectly, by parents, students and staff. Shifting to new vendors or to internally built tools could mean rebuilding that trust infrastructure from scratch, updating agreements, rewriting notices and re-establishing the understanding that personal data is being handled appropriately. That is not a small undertaking, and it is one that may require genuine legal and communications expertise, not just a technology team working quickly.

Cybersecurity: A Concern That Cannot Wait

Cybersecurity in education has historically been under-prioritized. Schools and universities hold significant amounts of sensitive data but often lack the resources of better-funded sectors such as healthcare. Agentic AI increases that exposure considerably, and the nature of that exposure changes depending on how the technology is being used.

For deployed tools like Cowork, the risk of prompt injection, where an agent encounters malicious instructions embedded in content it is processing and acts on those instead, is explicitly acknowledged by Anthropic in its own product guidance. The company notes it has built defenses but is clear that agent safety remains an active area of development across the industry. That is an honest position from a responsible provider. It is also a direct signal to institutions that deploying these systems without adequate security consideration carries real risk.

For institutions building their own agentic tools, the cybersecurity obligations are more demanding still. A bespoke AI agent built on an API and integrated with institutional systems is not protected by whatever security infrastructure a commercial provider maintains around its own product. The institution is responsible for securing the agent itself, the data it accesses, the integrations it uses and the actions it is permitted to take. If a custom-built agent is compromised, the consequences could extend to financial records, staff data, student information and operational systems simultaneously.

There is a further dimension specific to schools that is easy to overlook. The shift away from established vendors towards new providers or internal builds also means moving away from the security track records and certifications those vendors carried. A school that has relied on a vendor's certifications or child safety compliance record as part of its own due diligence now needs to establish those assurances independently, either by evaluating new vendors with the same rigor or by meeting those standards itself. That requires capability that most schools likely do not currently have, and it does not arrive automatically along with the decision to build.

Governance, Accountability, Sustainability and Project Management

Perhaps the most significant challenge across all of this is structural. Agentic AI, however it arrives in an institution, requires a level of governance, clear accountability and ongoing management capacity that most education systems are not yet equipped to provide. That challenge intensifies significantly when institutions are building their own tools rather than relying on a vendor's infrastructure.

Governance

When an institution deploys a commercial product, governance at least involves a set of external controls: the vendor's own policies, enterprise admin settings and contractual obligations. Tools like Cowork include features that allow administrators to disable functionality, manage access by role and set usage parameters at an organizational level. Those controls only work if someone with the knowledge and authority to use them is actually doing so, but they exist. For schools used to relying on a trusted vendor to carry some of that governance burden, the shift to building internally means taking on the full weight of it.

When an institution builds its own agentic tool, governance is entirely self-constructed. There are no default safeguards inherited from a commercial platform. Every decision about what the system can access, what actions it can take, who can use it and under what conditions is the institution's to make and maintain. That requires policies, processes and people, ideally before a line of code is written, not after the tool is already running.
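One concrete form that self-constructed governance can take is a deny-by-default policy gating every action the agent attempts. The sketch below is purely illustrative, assuming invented roles, actions and a hypothetical `agent_step` wrapper; it is not drawn from any real deployment.

```python
# Illustrative sketch only: one way an institution might encode its own
# guardrails for a home-built agent. Roles, actions and policy are
# invented for the example.

# Policy: which roles may ask the agent to perform which actions.
PERMITTED_ACTIONS = {
    "registrar": {"read_record", "update_record"},
    "teacher": {"read_record"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: anything not explicitly permitted is refused."""
    return action in PERMITTED_ACTIONS.get(role, set())

def agent_step(role: str, action: str) -> str:
    """Gate every action the agent wants to take before it runs."""
    if not authorize(role, action):
        # Refusals should be logged and surfaced, not silently swallowed.
        return f"refused: {role} may not {action}"
    return f"executed: {action}"
```

The design choice worth noting is the default: because nothing runs unless the policy names it, every new capability the agent gains is a deliberate institutional decision rather than an accident of integration.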

Accountability

Accountability follows the same logic. When a purchased tool makes a decision or takes an action that affects a student or a staff member, there is at least a question to ask of the vendor. When an institution-built agent does the same, the accountability sits entirely with the institution. That is not necessarily a problem. It may in fact be preferable to accountability diffused across a complex vendor relationship, but it requires the institution to be genuinely ready to own the outcomes of what its systems do.

Sustainability

Sustainability is a challenge in both contexts but takes a particular form when institutions are building their own tools. Commercial products are maintained by the provider. Internal tools need to be maintained by someone inside the institution, updated as the underlying models change, monitored for performance, reviewed as operational needs evolve and eventually retired when they are no longer fit for purpose.

That ongoing commitment is often underestimated at the point of building, particularly in organizations where the person who built the tool may not be the person responsible for it two years later.

Project Management

Project management, finally, is where good intentions most commonly break down. Building an agentic tool, even a relatively focused one, is a significant undertaking. It requires clear scope, defined ownership, realistic resourcing, iterative testing and structured review. The fact that Claude Code can produce something that looks functional in a matter of days does not mean the work of safely integrating that tool into institutional systems, training the people who will use it and maintaining it over time can be done at the same pace. Moving quickly to build is possible. Moving quickly to govern is not.

Related Article: What Every President & Provost Should Demand Before Buying 'AI-Enabled' Anything


Agentic AI is not on the horizon. Tools like Claude Code and Claude Cowork are already in use, and their reach into institutional workflows, academic and operational alike, will only grow. For education institutions, the question is not whether to engage with this technology but how, on what terms and with what strategic planning for working in very different ways.

For those considering what to build and what to buy, the calculus is genuinely complex, and the familiar reference points are shifting. The vendors schools have trusted, the procurement frameworks they have relied on and the support structures they have taken for granted are all under pressure. That does not make the old relationships worthless. Established vendors who understand educational contexts, safeguarding requirements and the realities of classroom use carry knowledge that a capable but generic agentic tool does not automatically replace. But it does mean that those relationships can no longer be the primary basis for technology decisions.

What matters most, whether building or buying, is that decisions are made deliberately, with clear governance frameworks, genuine cybersecurity capability, defined accountability and a realistic commitment to ongoing management. The institutions that will navigate this well are those that treat agentic AI not as a tool to deploy but as a capability to govern. That distinction, in the months or years ahead, will make a very significant difference.


About the Author
Nick Jackson

Nick Jackson is the leader of digital technologies at Scotch College in Adelaide, Australia and founder of Now Future Learning, providing help to educational institutions and businesses on the integration and use of generative AI. Jackson is also the co-author of the book “The Next Word: AI & Teachers.” He holds a Ph.D. and two master's-level degrees.

Main image: Maksym Yemelyanov | Adobe Stock