AI Venture Safe Superintelligence Raises $1 Billion in Funding
Safe Superintelligence (SSI), the new venture of former OpenAI chief scientist Ilya Sutskever, has secured $1 billion in investment just months after his departure from the AI developer, the company announced. SSI said key investors in the round included NFDG, a16z, Sequoia, DST Global, and SV Angel. The raise values the company at $5 billion, as reported by Reuters.
New Beginnings for Safe Superintelligence
In May of this year, Sutskever and Jan Leike bid farewell to OpenAI, following the earlier departure of Andrej Karpathy. In a post on X (formerly Twitter), Leike cited a lack of resources and a diminished focus on safety at the ChatGPT developer as his reasons for leaving. Sutskever's exit came after a period of leadership turmoil that saw the ousting and reinstatement of co-founder and CEO Sam Altman.
- Sutskever’s new venture, Safe Superintelligence Inc., was officially launched in June of this year.
- The company’s co-founders include Daniel Gross, who previously led AI efforts at Apple, and Daniel Levy, a former OpenAI researcher.
- SSI’s core team comprises Sutskever as chief scientist, Levy as principal scientist, and Gross overseeing computing power and fundraising.
Focus on AI Safety
As the field of generative AI continues to expand, developers are increasingly prioritizing safety measures in their products to instill confidence among consumers and regulators. In this context, collaborations between AI companies and organizations like the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) are gaining prominence.
- OpenAI and Claude developer Anthropic recently signed agreements with the U.S. AI Safety Institute (AISI), which is housed within NIST.
- These collaborations are meant to give the institute access to major new models from both companies for safety testing before and after public release.
- OpenAI’s Sam Altman emphasized the importance of national-level initiatives in advancing AI safety and innovation.
Hot Take: Embracing AI Safety for Future Growth
As AI development evolves, a sustained focus on safety and responsible innovation is essential for lasting progress and public acceptance. Safe Superintelligence’s funding success and the industry’s collaborations with regulators underscore a growing commitment to building AI that puts safety and ethical considerations first.
By staying attuned to developments in AI safety practices and fostering dialogue with regulatory bodies, the crypto community can contribute to shaping a more secure and sustainable AI landscape for the future.