Interview

2025 VKTR Contributor of the Year: Emily Barnes

By Michelle Hawley
A profile of AI researcher and higher education executive Emily Barnes, one of VKTR's top contributing authors in 2025.

VKTR contributor Emily Barnes is not afraid to drop some hard truths on the world of AI, covering topics like AI’s massive resource appetite, exploitative AI models and exactly how bias gets into our technology — and what we can do about it.

In a recent interview, she breaks down what’s currently on her radar and the issues she thinks will test leaders navigating AI in 2026. 

An Interview With Emily Barnes, 2025 VKTR Contributor of the Year

Tell us about your professional background — what keeps you busy today?

My professional path sits at the intersection of research, leadership and real-world systems work. I began in academia, teaching and researching computer science and technical literacy, then moved into senior executive roles in higher education, including Dean, Provost, Chief Digital Learning Officer and Chief AI and Innovation Officer.

Those roles forced a shift from theory to consequence. I spent years inside the systems where technology decisions land, including learning platforms, student data systems, enrollment operations, analytics and AI-enabled workflows, and I saw firsthand how good intentions often collapse under operational pressure.

The throughline has never been tools; it has always been architecture, policy and law, joined with questions of who holds power, who bears risk and who is left exposed when systems scale faster than ethics.

Why do you like to contribute to VKTR?

I write for VKTR because the platform allows honesty without dilution. Too much AI commentary either celebrates innovation uncritically or buries risk in abstract language. VKTR values clarity, accountability and action, which aligns with how I think leadership should function. Writing here creates space to interrogate regulatory gaps, ethical failures and institutional blind spots without turning them into marketing narratives.

VKTR reaches people who make decisions (e.g., executives, policymakers, educators and operators) and that matters. The goal has never been commentary for commentary’s sake. The goal has always been sensemaking that leads to responsibility. VKTR provides a rare forum where difficult truths are not softened to preserve comfort, and that makes the work feel necessary rather than performative.

What's your favorite article you’ve published and why?

Two articles stand out because they demanded different kinds of honesty.
 
"Digital Innocence Lost: How AI and Deepfakes Are Fueling the Next Generation of Child Exploitation" remains unforgettable because the research itself was deeply disturbing.
 
While working on that piece, I encountered constant platform warnings, content flags and safeguards, all of which signaled that the material crossed lines most systems were never designed to handle at scale. The harm was impossible to abstract. Writing that article felt less like analysis and more like witnessing a failure of law, policy and platform responsibility unfolding in real time. That weight never really leaves.
 
"To the Dreamers: AI Was Never for Us" matters for a different reason. That piece was the most honest reckoning I have written.
 
After years of teaching, advocating and believing AI could be steered toward equity from within existing power structures, I had to confront a harder truth: many ethical narratives around AI function as branding for profit, not restraint. Writing that article required letting go of neutrality and naming disillusionment openly. Together, those two pieces represent the full scope of my work: protecting those being harmed now and naming the illusions that allowed that harm to become normalized.

What AI-related challenges will leaders face in 2026?

This is one of those small yet mighty questions. Of course I have a lot to say. There are seven primary AI-related challenges leaders will face in 2026 and beyond.

Governance Will Eclipse Model Performance

The central challenge in 2026 will have nothing to do with models or the tech itself; it will be whether governance exists at all. AI-assisted decision-making has introduced discovery risk, unclear data provenance and accountability gaps that most institutions were never designed to manage. Retained prompts, inferred intent and automated recommendations now sit inside everyday workflows, quietly expanding legal exposure like a dough starter on your kitchen counter that you forgot to vent.

Many organizations still underestimate how quickly experimental AI use becomes evidentiary record. It’s already a collaborative, company-wide doomsday project most leaders have no idea exists.

Executive Orders Are Accelerating Adoption Without Clarifying Compliance

Federal executive orders and agency guidance have created momentum without operational precision. While framed as innovation and competitiveness mandates, these actions have produced widespread confusion about what is permitted versus what remains regulated.

In education, healthcare and the public sector, some leaders have mistakenly interpreted AI directives as permission to bypass existing law. Executive orders do not override FERPA, privacy statutes, civil rights protections, procurement rules or due-process obligations, yet AI is often deployed as if they do.

The Compliance Free-for-All Is Creating Institutional Risk

Absent clear statutory guardrails, organizations are filling gaps with inconsistent policies, vendor assurances or informal “pilot” logic that collapses under scrutiny.

AI use is frequently treated as an exception rather than an extension of existing compliance regimes. This has created a patchwork of practices where similar organizations operate under radically different risk profiles. When enforcement or litigation arrives, inconsistency will matter more than intent.

Regulation Is Being Shaped by Power, Not Public Risk or Harm

AI governance is increasingly driven through intellectual-property enforcement, litigation and private licensing agreements rather than democratic consensus. Oversight is moving quietly from legislatures into courts, contracts and compliance regimes designed to protect capital first and people second.

Access to data, content and model capability is becoming a function of who can afford licensing, legal insulation and compliance infrastructure, not of who serves the public interest.

Talent Gaps Are Becoming Governance Failures

Most startups and technology organizations can deploy AI; very few can audit, secure, explain or govern it. The shortage is not technical talent but governance talent: professionals who combine deep subject-matter and industry expertise with architectural and systems-level precision.

This is a significant gap in the market. Without professionals who understand the industry, law, data, risk and systems together, institutions are outsourcing judgment to vendors and hoping brand reputation substitutes for accountability. This approach is a massive failure, like buying a new car without liability insurance.

Trust Erosion Will Accelerate

Employees, students, customers and citizens are becoming aware that opaque systems are shaping outcomes without clear paths for explanation or appeal. Child safety and harm prevention will remain the public-facing justification for new rules, but ownership, liability and control are the real enforcement drivers.

Leaders who equate regulation with protection, or ethical language with ethical outcomes, will be unprepared for how quickly trust collapses once harm becomes visible.

What Will Separate Survivors From Casualties

The organizations that endure will treat restraint as a leadership capability, not a weakness. They will redesign governance intentionally rather than waiting for it to be imposed through litigation or enforcement.

In 2026, leadership will be measured less by how fast AI was deployed and more by whether governance structures were built early, before rules hardened, power consolidated and options narrowed.

About the Author
Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As editorial director at Simpler Media Group, she oversees the day-to-day operations of VKTR, covering the world of enterprise AI and managing a network of contributing writers. She's also the host of CMSWire's CMO Circle and co-host of CMSWire's CX Decoded. With an MFA in creative writing and a background in both news and marketing, she offers unique insights on the topics of tech disruption, corporate responsibility, changing AI legislation and more. She currently resides in Pennsylvania with her husband and two dogs.

Main image: Simpler Media Group