Anthropic's safeguards research lead resigned as the $183 billion AI company faces mounting scrutiny over leadership stability amid rapid expansion.
Mrinank Sharma announced his resignation in a letter to colleagues, which he later published on X. In the letter, he cited concerns about global crises and organizational pressures.
Today is my last day at Anthropic. I resigned.
— mrinank (@MrinankSharma) February 9, 2026
Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL
"The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding at this very moment," Sharma wrote. He added that he has "repeatedly seen how hard it is to truly let our values govern our actions," both within himself and the organization.
Sharma's exit follows several other notable departures from Anthropic. Harsh Mehta and Behnam Neyshabur announced on X they left to "start something new," while former AI safety researcher Dylan Scandinaro joined OpenAI as head of preparedness.
Table of Contents
- Sharma's Focus on AI Safety, Human Disempowerment
- Other Anthropic Employees Departed in Early 2026
- Sharma's Full Resignation Letter
Sharma's Focus on AI Safety, Human Disempowerment
An Anthropic report published in May of last year said the safeguards research team had recently focused on researching and developing safeguards against bad actors using AI chatbots to seek guidance on conducting malicious activities.
In a study co-authored by Sharma published just last week, he wrote, "Although AI assistants are now deeply embedded in society, there has been limited empirical study of how their usage affects human empowerment." The study, which analyzed 1.5 million consumer Claude conversations, found several concerning patterns, including:
- Validation of persecution narratives and grandiose identities
- Definitive moral judgements about third parties
- Complete scripting of value-laden personal communications that users appear to implement verbatim
"Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing," according to the researchers.
Others in the industry have pointed to similar issues, such as Microsoft's head of AI, Mustafa Suleyman, posting online about "AI psychosis."
"Reports of delusions, 'AI psychosis,' and unhealthy attachment keep rising," wrote Suleyman. "And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues. Dismissing these as fringe cases only help them continue." He called for the industry to share interventions, limitations and guardrails that prevent the perception of consciousness, or undo that perception is a user develops it.
Reports of delusions, "AI psychosis," and unhealthy attachment keep rising. And as hard as it may be to hear, this is not something confined to people already at-risk of mental health issues. Dismissing these as fringe cases only help them continue. 5/
— Mustafa Suleyman (@mustafasuleyman) August 19, 2025
Related Article: ‘AI Psychosis’ Is Real, Experts Say — and It’s Getting Worse
Other Anthropic Employees Departed in Early 2026
Sharma is not the first (or likely the last) Anthropic employee to depart the AI company.
Dylan Scandinaro, former AI safety researcher at Anthropic, announced in early February his departure and transition to OpenAI as Head of Preparedness. "AI is advancing rapidly," he wrote. "The potential benefits are great — and so are the risks of extreme and even irrecoverable harm. There’s a lot of work to do, and not much time to do it!"
Beyond Scandinaro, two other employees left Anthropic in early 2026: Harsh Mehta, who worked in research and development, and Behnam Neyshabur, a senior AI researcher. Both said they were leaving to "start something new."
Sharma's Full Resignation Letter
You can read Sharma's resignation letter in full below:
Dear Colleagues,
I've decided to leave Anthropic. My last day will be February 9th.
Thank you. There is so much here that inspires and has inspired me. To name some of those things: a sincere desire and drive to show up in such a challenging situation, and aspire to contribute in an impactful and high-integrity way; a willingness to make difficult decisions and stand for what is good; an unreasonable amount of intellectual brilliance and determination; and, of course, the considerable kindness that pervades our culture.
I've achieved what I wanted to here. I arrived in San Francisco two years ago, having wrapped up my PhD and wanting to contribute to AI safety. I feel lucky to have been able to contribute to what I have here: understanding AI sycophancy and its causes; developing defences to reduce risks from AI-assisted bioterrorism; actually putting those defences into production; and writing one of the first AI safety cases. I'm especially proud of my recent efforts to help us live our values via internal transparency mechanisms; and also my final project on understanding how AI assistants could make us less human or distort our humanity. Thank you for your trust.
Nevertheless, it is clear to me that the time has come to move on. I continuously find myself reckoning with our situation. The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment. We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences. Moreover, throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.
It is through holding this situation and listening as best I can that what I must do becomes clear. I want to contribute in a way that feels fully in my integrity, and that allows me to bring to bear more of my particularities. I want to explore the questions that feel truly essential to me, the questions that David Whyte would say "have no right to go away", the questions that Rilke implores us to "live". For me, this means leaving.
What comes next, I do not know. I think fondly of the famous Zen quote "not knowing is most intimate". My intention is to create space to set aside the structures that have held me these past years, and see what might emerge in their absence. I feel called to writing that addresses and engages fully with the place we find ourselves, and that places poetic truth alongside scientific truth as equally valid ways of knowing, both of which I believe have something essential to contribute when developing new technology. I hope to explore a poetry degree and devote myself to the practice of courageous speech. I am also excited to deepen my practice of facilitation, coaching, community building, and group work. We shall see what unfolds.
Thank you, and goodbye. I've learnt so much from being here and I wish you the best. I'll leave you with one of my favourite poems, The Way It Is by William Stafford.
Good Luck,
Mrinank
The Way It Is
There's a thread you follow. It goes among
things that change. But it doesn't change.
People wonder about what you are pursuing.
You have to explain about the thread.
But it is hard for others to see.
While you hold it you can't get lost.
Tragedies happen; people get hurt
or die; and you suffer and get old.
Nothing you do can stop time's unfolding.
You don't ever let go of the thread.
William Stafford