
Safe Superintelligence
Developing safe superintelligent AI.
Date | Investors | Amount | Round
---|---|---|---
– | – | $1.0b (valuation: $5.0b) | Early VC
– | – | $2.0b (valuation: $32.0b) | Early VC

Total Funding: $3.0b
Safe Superintelligence Inc. (SSI) is an American artificial intelligence (AI) company whose mission is to build a safe superintelligence. The company was co-founded by Ilya Sutskever, former chief scientist of OpenAI; Daniel Gross, former head of AI at Apple; and Daniel Levy.
SSI operates as a research lab with a singular goal: to develop superintelligence that surpasses human capabilities in a safe and ethical manner. The company's approach is to prioritize safety over commercial pressures, allowing its team to focus on scientific and engineering breakthroughs without the distraction of short-term financial goals. This makes SSI the world's first research lab dedicated solely to creating safe superintelligence.
Recently, co-founder and CEO Daniel Gross departed to join Meta, with Ilya Sutskever taking over as CEO. The move highlights the intense competition for AI talent in the industry. Despite the leadership change, SSI remains committed to its long-term mission of solving the challenge of safe superintelligence.
Keywords: artificial intelligence, superintelligence, AI safety, research lab, machine learning, deep learning, AI ethics, long-term research, AI development, neural networks