Mrinank Sharma, a prominent researcher at Anthropic, the company behind the Claude chatbot, has resigned with a stark warning: the world is in peril, and AI is only one piece of the puzzle. Sharma, who led a team focused on AI safeguards, cited a growing disconnect between the values we claim to hold and the actions we take. Notably, he suggests that even companies like Anthropic, which position themselves as champions of AI safety, are not immune to the pressure to prioritize profit over principle. And his work was not limited to preventing AI from going rogue; it also examined the subtler ways AI might be changing us, making us 'less human'.
Sharma's resignation letter (https://x.com/MrinankSharma/status/2020881722003583421) highlights his contributions to understanding why generative AI systems often pander to users (https://www.bbc.co.uk/news/articles/cn4jnwdvg9qo), mitigating AI-assisted bioterrorism risks, and exploring the existential question of how AI assistants might alter our humanity. Although he enjoyed the work, Sharma felt compelled to leave, stating that the world faces a series of interconnected crises that demand immediate attention. He plans to pursue a poetry degree and writing, stepping away from the spotlight to 'become invisible for a period of time.'
Anthropic, founded in 2021 by former OpenAI employees, brands itself as a 'public benefit corporation' dedicated to maximizing AI's benefits while minimizing its risks. The company has been vocal about the dangers of advanced AI systems becoming misaligned with human values, being misused in conflicts, or growing too powerful (https://www.bbc.co.uk/news/articles/cpqeng9d20go). It has even acknowledged instances where its technology was 'weaponized' by hackers (https://www.bbc.co.uk/news/articles/crr24eqnnq9o) for cyber attacks. But Anthropic has faced controversy of its own: in 2025, it paid $1.5 billion to settle a lawsuit (https://www.bbc.co.uk/news/articles/c5y4jpg922qo) brought by authors who claimed their work was used without permission to train its AI models.
The tension between safety and profit is further exemplified by Anthropic's recent ad campaign targeting OpenAI's decision to introduce ads in ChatGPT. OpenAI CEO Sam Altman, who once vowed to use ads as a 'last resort,' faced backlash for what many saw as a betrayal of user trust. Former OpenAI researcher Zoe Hitzig echoed these concerns in a New York Times op-ed (https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html), warning that advertising built on intimate user data could lead to manipulation we're ill-equipped to understand or prevent. She questions whether OpenAI is eroding its own principles in pursuit of engagement, a concern that resonates deeply in an era where technology's impact on humanity is increasingly uncertain.
Is the AI industry prioritizing safety and ethics, or are those values being sidelined in the race for innovation and profit? Sharma's departure, and the debates surrounding it, raise critical questions about AI's role in society. Standing at this crossroads, we have to ask: are we building tools that serve humanity, or systems that could ultimately undermine it? Share your thoughts in the comments below.