
Co-Founder & CEO, Anthropic
Dario Amodei (born 1983) is an American artificial intelligence researcher, entrepreneur, and the co-founder and Chief Executive Officer of Anthropic, an AI research and deployment company. Widely recognized as a central figure in generative artificial intelligence and AI safety, Amodei has been instrumental in the development of large language models and in advancing techniques to align AI systems with human values.

Born in San Francisco, California, Amodei was raised in the Bay Area and attended Lowell High School. His early academic interests centered on physics and fundamental scientific inquiry. He began his undergraduate studies at the California Institute of Technology (Caltech) before transferring to Stanford University, where he earned a Bachelor of Science degree in physics. He went on to Princeton University, where he received a Ph.D. in biophysics; his doctoral research on the electrophysiology of neural circuits provided a technical foundation for his later work in computational intelligence. Following his graduate studies, he completed a postdoctoral fellowship at the Stanford University School of Medicine, where he applied computational methods to biomedical data analysis.

Amodei's career in the technology industry began in earnest as he transitioned from academic research to machine learning. He worked in Baidu's AI division and subsequently joined the Google Brain team as a senior research scientist, focusing on deep learning and neural network reliability. In 2016, Amodei joined OpenAI, where he rose to Vice President of Research and played a pivotal role in the development and scaling of the organization's most significant models, including GPT-2 and GPT-3.
Notably, he is a co-inventor of reinforcement learning from human feedback (RLHF), a training methodology that has become a standard approach for improving the steerability and safety of large language models.

In 2021, driven by concerns about the long-term safety and social implications of rapid AI advancement, Amodei left OpenAI alongside his sister, Daniela Amodei, and several other former colleagues to co-found Anthropic. Established as a public benefit corporation with a mission to prioritize AI safety, the company pioneered "Constitutional AI", a method of training models to adhere to an explicit set of rules or principles. Under Amodei's leadership, Anthropic developed the Claude series of large language models, which have been distinguished in the industry for their emphasis on reliability, transparency, and safety.

Throughout his career, Amodei has been a prominent, often vocal participant in public discourse on the governance of artificial intelligence. He frequently advocates proactive AI safety measures, calling for both robust technical research and thoughtful policy engagement to mitigate the risks of increasingly capable systems. His influence extends beyond the laboratory: he is widely cited as one of the most impactful leaders in the AI industry, balancing Anthropic's commercial success with a persistent focus on long-term technological risks and the ethical alignment of transformative technologies.
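The RLHF methodology mentioned above has two stages: fit a reward model to human preference comparisons, then steer the policy toward outputs that reward model scores highly. A minimal, self-contained sketch of that recipe, with invented toy features, toy preference data, and a best-of-n selection standing in for the RL fine-tuning step (none of this is OpenAI's or Anthropic's actual implementation):

```python
import math

# Toy RLHF sketch: (1) fit a reward model to human preference pairs via a
# Bradley-Terry logistic loss, (2) steer a "policy" toward high-reward outputs.
# Features, data, and update rule are all invented for illustration.

def features(response: str) -> list[float]:
    # Hypothetical features: response length and a politeness marker.
    return [len(response) / 10.0, 1.0 if "please" in response else 0.0]

def reward(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Human preference data: (preferred, rejected) response pairs.
comparisons = [
    ("could you please rephrase", "no"),
    ("sure, please see below", "go away"),
    ("happy to help, please ask", "nope"),
]

# (1) Fit the reward model: P(a preferred over b) = sigmoid(r(a) - r(b)).
w = [0.0, 0.0]
lr = 0.1
for _ in range(200):
    for good, bad in comparisons:
        xa, xb = features(good), features(bad)
        p = sigmoid(reward(w, xa) - reward(w, xb))
        # Gradient ascent on the log-likelihood of the observed preference.
        w = [wi + lr * (1.0 - p) * (a - b) for wi, a, b in zip(w, xa, xb)]

# (2) "Policy improvement": among candidate responses, pick the one the
# learned reward model scores highest (a stand-in for RL fine-tuning).
candidates = ["no", "sure, please see below"]
best = max(candidates, key=lambda r: reward(w, features(r)))
print(best)  # → sure, please see below
```

In the real procedure both the reward model and the policy are large neural networks and stage (2) uses a reinforcement learning algorithm such as PPO, but the division of labor between human comparisons and automated optimization is the same.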
Co-founded Anthropic (2021), scaling it into an $18+ billion AI powerhouse and the primary enterprise rival to OpenAI
Pioneered 'Constitutional AI,' a groundbreaking alignment technique that allows AI models (like Claude) to critique and revise their own behavior based on a predefined constitution of human values, minimizing the need for massive human-feedback labeling teams
Criticized by open-source advocates, including Meta's Yann LeCun, for promoting 'doomerism'; these critics argue that he exaggerates the existential risks of AI in order to push the US government toward regulation that would serve as a moat protecting incumbent labs such as Anthropic
Faced a major copyright infringement lawsuit in late 2023 from Universal Music Group and other music publishers, alleging that Anthropic illegally scraped copyrighted song lyrics to train its Claude models
Anthropic's heavy financial reliance on Amazon and Google has led industry analysts to question whether its founding commitments to independence and safety can survive the demands of massive tech conglomerates
Directed the core research and development of GPT-2 and GPT-3 during his tenure as VP of Research at OpenAI, helping establish the scaling-laws paradigm in deep learning
Secured over $7 billion in strategic cloud and compute investments from Amazon (AWS) and Google in 2023–2024
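The critique-and-revise loop described in the 'Constitutional AI' item above can be sketched in miniature. Everything here (the toy constitution, the rule-based critic, and the reviser) is a hypothetical stand-in for what are, in practice, calls to the model itself; the sketch illustrates only the control flow of self-critique against written principles, not Anthropic's actual method:

```python
# Toy critique-and-revise loop: a draft is checked against a written list of
# principles and rewritten until no principle is violated, with no human
# labeler in the loop. Each entry is (principle, violation check, fix).
CONSTITUTION = [
    ("avoid insults", lambda t: "idiot" in t,
     lambda t: t.replace("idiot", "person")),
    ("be polite", lambda t: not t.endswith("please."),
     lambda t: t.rstrip(".") + ", please."),
]

def critique(draft: str) -> list[str]:
    """Return the names of the principles the draft violates."""
    return [name for name, check, _ in CONSTITUTION if check(draft)]

def revise(draft: str) -> str:
    """Apply the revision for each violated principle."""
    for _, check, fix in CONSTITUTION:
        if check(draft):
            draft = fix(draft)
    return draft

def constitutional_loop(draft: str, max_rounds: int = 5) -> str:
    # Iterate critique -> revision until the draft passes every principle.
    for _ in range(max_rounds):
        if not critique(draft):
            break
        draft = revise(draft)
    return draft

print(constitutional_loop("Ask the idiot to wait."))
# → Ask the person to wait, please.
```

In the actual technique the critic and reviser are the language model itself, prompted with the constitution, and the revised outputs are used as training data; the point illustrated here is that the supervision signal comes from explicit principles rather than from large human-feedback labeling teams.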
Graduated from Lowell High School in San Francisco, began undergraduate studies at Caltech, and later earned a Bachelor of Science degree in physics from Stanford University.
Earned a PhD in biophysics from Princeton University in 2011, focusing on the electrophysiology of neural circuits.
Served as a postdoctoral scholar at the Stanford University School of Medicine from 2011 to 2014.
Worked as a research scientist at Baidu from 2014 to 2015.
Served as a senior research scientist on the Google Brain team at Google from 2015 to 2016.
Joined OpenAI in 2016, eventually serving as Vice President of Research and leading the development of GPT-2 and GPT-3.
Co-invented reinforcement learning from human feedback (RLHF), a technique that improves AI alignment with human preferences.
Co-founded the AI safety and research company Anthropic in 2021 with his sister Daniela Amodei and several former OpenAI colleagues.
Led the development of Anthropic's 'Constitutional AI' approach to model safety and alignment.
Featured as one of Time Magazine’s 100 Most Influential People in 2025 for his contributions to the AI industry.
Named as one of the 'Architects of AI' for Time's Person of the Year in 2025.
Ranked No. 3 on AI Magazine's list of the 'Top 100 AI Leaders' in 2026.
In early 2026, Anthropic and CEO Dario Amodei engaged in a high-profile public dispute with the U.S. Department of Defense after refusing to remove safeguards that prohibited the military from using Claude for mass domestic surveillance and fully autonomous weapons. The Pentagon responded by designating Anthropic a 'supply chain risk,' prompting Amodei to file federal lawsuits challenging the government's actions as unlawful retaliation.
In 2025, Anthropic reached a settlement with authors in a copyright lawsuit alleging the company used pirated books to train its AI models. During the legal proceedings, evidence revealed that Anthropic had downloaded millions of files from unauthorized sources like LibGen and Pirate Library Mirror, which provided ammunition for further copyright claims by music publishers.
In March 2026, Amodei issued a formal apology after an internal memo was leaked to the press, in which he criticized OpenAI CEO Sam Altman for allegedly providing 'dictator-style praise' to Donald Trump. The leak, combined with Amodei's earlier criticism of the Trump administration, exacerbated tensions between Anthropic and the Pentagon during their ongoing defense contract negotiations.
In March 2026, Amodei faced significant backlash from the medical community, particularly radiologists, after suggesting in a podcast interview that AI would replace the need for radiologists in image analysis. Critics, including practicing physicians, characterized his comments as ill-informed and demonstrating a lack of understanding regarding the current role of AI in medical practice.