
Ilya Sutskever, Founder of Safe Superintelligence Inc. (SSI)
Ilya Sutskever (born 1986) is a prominent Israeli-Canadian computer scientist and researcher widely recognized for his foundational contributions to the field of artificial intelligence, particularly in deep learning and large language models. He is a co-founder and the chief executive officer of Safe Superintelligence Inc. (SSI), a research organization dedicated to the development of safe superintelligent AI systems.

Born in Nizhny Novgorod, Soviet Union (now Russia), Sutskever emigrated with his family to Israel at the age of five. He spent his formative years in Jerusalem before moving to Canada at age 16. He studied at the University of Toronto, where he demonstrated early promise in mathematics and computer science, earning a Bachelor of Science in mathematics in 2005 and a Master of Science in computer science in 2007. In 2013, he completed his PhD in computer science under the supervision of Geoffrey Hinton, a pioneer in neural networks and a Nobel laureate.

Sutskever's early career was marked by groundbreaking research that catalyzed the modern deep learning era. In 2012, while working in Hinton's lab, he co-authored the influential paper on AlexNet with Alex Krizhevsky and Geoffrey Hinton. This convolutional neural network significantly outperformed existing models in image recognition, serving as a primary catalyst for the subsequent surge of interest in deep learning. Following a brief postdoctoral tenure at Stanford University with Andrew Ng, Sutskever co-founded DNNResearch. Shortly after its founding, the company was acquired by Google in 2013, leading him to join the Google Brain team. During his tenure at Google, he worked on sequence-to-sequence learning—an algorithm that became central to machine translation and natural language processing—and contributed to the development of the TensorFlow framework and the AlphaGo project.
In 2015, Sutskever co-founded OpenAI, a non-profit organization (which later restructured) aimed at ensuring that artificial general intelligence benefits all of humanity. As the company's chief scientist, he played a leadership role in guiding research that led to the development of the GPT series of large language models. His work at OpenAI emphasized the scaling hypothesis—the idea that increasing computing power and data size would consistently improve model performance—and eventually shifted toward addressing the risks of superintelligent systems.

In 2023, he was a key figure in the board's temporary removal of CEO Sam Altman, a decision that sparked internal and external scrutiny regarding OpenAI's governance and direction. Sutskever subsequently resigned from the OpenAI board and departed the company in May 2024.

In June 2024, Sutskever co-founded Safe Superintelligence Inc. (SSI) alongside Daniel Gross and Daniel Levy. The company distinguishes itself through a singular, concentrated focus on the safety and development of superintelligence, operating outside the constraints of immediate commercial product cycles. Under his leadership, SSI has attracted significant capital and industry attention, positioning itself as a central player in the global pursuit of AGI alignment and safety.

Throughout his career, Sutskever has received numerous accolades, including being elected a Fellow of the Royal Society in 2022 and receiving an honorary doctorate from the University of Toronto in 2025. He is frequently cited as one of the most influential figures in the development of contemporary AI, noted for his technical acumen and his vocal advocacy for the safe, responsible development of powerful artificial intelligence.
Co-founded Safe Superintelligence Inc. (SSI) in 2024, raising over $1 billion in initial funding from investors including a16z and Sequoia Capital to build a research lab dedicated solely to safe superintelligence
Served as Chief Scientist and Co-Founder of OpenAI (2015–2024), a driving force behind the scaling strategy that produced the GPT series of models
Played a central role in the abrupt firing of Sam Altman in November 2023, which set off turmoil across the tech industry; he later publicly regretted his participation in the board's actions, though the episode lastingly strained his relationship with OpenAI leadership
His departure from OpenAI in 2024 was followed by the departure of much of the 'Superalignment' team, prompting serious industry concern that OpenAI had deprioritized its safety commitments in favor of shipping products
Criticized by some tech accelerationists for what they describe as a quasi-religious devotion to 'safety' that risks stalling open innovation
Co-invented AlexNet (2012) alongside Alex Krizhevsky and Geoffrey Hinton, a breakthrough convolutional neural network that won the ImageNet competition and is widely credited with triggering the deep learning revolution
Co-inventor of the sequence-to-sequence learning algorithm (2014), a foundational architecture for modern machine translation
Received a Bachelor of Science degree in mathematics from the University of Toronto in 2005.
Earned a Master of Science degree in computer science from the University of Toronto in 2007.
Co-invented AlexNet, a groundbreaking convolutional neural network, with Geoffrey Hinton and Alex Krizhevsky in 2012.
Completed a PhD in computer science from the University of Toronto in 2013 under the supervision of Geoffrey Hinton.
Joined Google Brain as a research scientist in 2013 following Google's acquisition of DNNResearch.
Co-developed the sequence-to-sequence learning algorithm while working at Google Brain in 2014.
Co-founded the artificial intelligence research organization OpenAI in 2015.
Named to MIT Technology Review's '35 Innovators Under 35' list in 2015.
Served as a keynote speaker at Nvidia Ntech and the AI Frontiers Conference in 2018.
Elected as a Fellow of the Royal Society (FRS) in 2022.
Awarded the NeurIPS Test of Time award in 2022, 2023, and 2024 for contributions that significantly shaped the AI field.
Recognized on TIME's list of the 100 most influential people in AI in 2023 and 2024.
Co-founded Safe Superintelligence Inc. (SSI) in June 2024, serving as its CEO.
Received an honorary Doctor of Science degree from the University of Toronto in 2025.
Awarded the National Academy of Sciences Award for the Industrial Application of Science in 2026.
In November 2023, as a member of the OpenAI board of directors, Sutskever played a central role in the sudden, temporary ousting of CEO Sam Altman. He later expressed 'deep regret' for his participation in the decision after an internal employee revolt and intense public pressure led to Altman's reinstatement and a restructuring of the board.
Following the November 2023 leadership crisis, court depositions released in 2025 revealed that Sutskever had prepared a 52-page memo alleging a 'consistent pattern of lying' and manipulative behavior by Altman, which contributed to ongoing internal tensions. The revelations provided further insight into the fractures within OpenAI's leadership that ultimately led to his resignation from the company in May 2024.
Sutskever's departure from OpenAI in 2024, occurring shortly after the dissolution of the 'Superalignment' team he co-led, became a focal point for broader criticism regarding OpenAI's perceived shift away from prioritizing AI safety. Observers and departing staff cited these events as evidence of an erosion of safety culture and transparency within the organization.