Anthropic’s AI safety lead Mrinank Sharma resigns with a chilling warning that the "world is in peril," leaving the tech giant to study poetry and escape the AI arms race.

In a departure that reads like the opening of a dystopian novel, the head of AI safety at Anthropic has resigned with a dire warning for humanity. Mrinank Sharma, the man tasked with preventing artificial intelligence from destroying the world, has quit, declaring that "the world is in peril" and opting to study poetry instead.
Sharma’s resignation is not just a career move; it is a protest. As the lead of the Safeguards Research Team at the $350 billion AI giant, he was on the frontlines of the battle to align superintelligent systems with human values. His exit letter, published on X (formerly Twitter), exposes the terrifying reality inside the labs building our future: the pressure to prioritize profit and speed over safety is becoming insurmountable.
"I continuously find myself reckoning with our situation," Sharma wrote, his words stripped of corporate jargon. "The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment." His decision to pivot to poetry—a pursuit of raw, human meaning—stands in stark contrast to the cold, algorithmic future he helped architect.
This departure comes hot on the heels of the release of "Opus 4.6," a more powerful version of Anthropic’s Claude chatbot. The timing suggests a loss of faith in the company's ability to control its own creations. Sharma revealed he had faced "pressures to set aside what matters most," a damning admission for a company that brands itself as the "safety-first" alternative to OpenAI.
There is a profound irony in Sharma’s choice to study poetry. In the face of existential risk from machines that can write code, diagnose diseases, and potentially deceive humans, he is retreating to the one thing AI can mimic but never truly understand: the human soul. His resignation is a signal flare.
For the tech world, this is a PR disaster. For the rest of us, it is a warning. When the safety chief decides that the only rational response to the state of AI is to step away and read verse, we must ask ourselves: what did he see in those systems that unsettled him so deeply?