
    Safe Superintelligence Inc: Pioneering AI Safety

Safe Superintelligence Inc was founded by Ilya Sutskever to develop AI systems that prioritise safety.

Anonymous · 25 June 2024 · 4 min read

    TL;DR:

- Safe Superintelligence Inc (SSI) has been launched by Ilya Sutskever, co-founder and former chief scientist of OpenAI, with a mission to develop safe and secure AI systems.
- SSI's founding team includes Daniel Gross, former AI lead at Apple, and Daniel Levy, former technical staff member at OpenAI.
- The launch comes amid growing concerns about AI safety and ethical implications, with SSI aiming to address these challenges head-on.

    Safe Superintelligence Inc: A New Era of AI Safety

    Artificial intelligence (AI) is rapidly transforming the world, but with its growing power comes increasing concern about safety and ethics. Enter Safe Superintelligence Inc (SSI), a new company founded by Ilya Sutskever, co-founder and former chief scientist of OpenAI, with a mission to develop superintelligent AI systems that prioritize safety and security.

    The Launch and Mission of SSI

SSI was officially announced on 19 June 2024 via a post on X (formerly Twitter) by Ilya Sutskever. The company's primary objective is to create a superintelligent system that surpasses human intelligence while ensuring safety and security. To achieve this, SSI plans to focus on a single product, avoiding the distractions of management overhead and product cycles and shielding the company from short-term commercial pressures.

    The Founding Team Members

    Ilya Sutskever is joined by two other founding members at SSI: Daniel Gross, a former AI lead at Apple who worked on the company's AI and search initiatives until 2017, and Daniel Levy, a former technical staff member and researcher at OpenAI who previously collaborated with Sutskever. Sutskever will serve as the chief scientist at SSI, with his responsibilities described as overseeing groundbreaking advancements. The company has established offices in both Palo Alto, California, and Tel Aviv, Israel.

    Sutskever's Departure from OpenAI

    Sutskever's departure from OpenAI in May 2024 followed his involvement in an attempt to remove CEO Sam Altman in November 2023. After Altman's reinstatement, Sutskever expressed regret for his actions and supported the decision. Prior to leaving, Sutskever was part of OpenAI's superalignment team, which focused on AI safety. Another member of the team, Jan Leike, resigned shortly after, citing concerns that the company had lost its focus on safety in favor of marketable products.

    Addressing AI Safety Concerns

The launch of SSI comes amid growing concern within the AI community about the safety and ethical implications of advanced AI systems. Sutskever's new venture emphasises the importance of safety in AI development, aiming to address the technical challenges of creating a safe superintelligence. This focus is particularly relevant given recent departures from OpenAI, such as that of Jan Leike. Our article on Deliberating on the Many Definitions of Artificial General Intelligence further explores the complexities of defining and achieving advanced AI. For a broader perspective on how different regions are approaching AI governance, you might find our piece on North Asia: Diverse Models of Structured Governance insightful.

    Creating a Safe and Secure AI Future

    The launch of SSI is a significant step towards creating a safe and secure AI future. With a focus on developing superintelligent AI systems that prioritize safety and security, SSI is poised to address the technical challenges of creating safe AI. As Sutskever notes, "Building safe and beneficial AGI is one of the most important challenges of our time. We believe that SSI's focus on safety-first AI development will be a game-changer in the industry." The ethical considerations in AI development are not new; for instance, India's AI Future: New Ethics Boards showcases another region's commitment to responsible AI. The importance of ethical AI is also highlighted in discussions around ProSocial AI Is The New ESG.

    Looking Ahead: The Future of AI Safety

As AI continues to evolve and transform the world, the need for safe and secure AI systems becomes ever more pressing. By prioritising safety and security, SSI is positioning itself at the forefront of a new era of AI development that puts safety first. For a deeper look at the technical challenges and safety measures involved, a comprehensive report by the Center for AI Safety provides valuable insights.

    Comment and Share

    What do you think about the launch of Safe Superintelligence Inc and its mission to develop safe and secure AI systems? Share your thoughts in the comments below and don't forget to Subscribe to our newsletter for updates on AI and AGI developments.
