On Wednesday, Reuters reported that Safe Superintelligence (SSI), a new AI startup cofounded by OpenAI’s former chief scientist Ilya Sutskever, has raised $1 billion in funding. The three-month-old company plans to focus on developing what it calls “safe” AI systems that surpass human capabilities.
The fundraising effort shows that even amid growing skepticism around massive investments in AI tech that so far have failed to be profitable, some backers are still willing to place large bets on high-profile talent in foundational AI research. Venture capital firms including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel participated in the SSI funding round.
SSI aims to use the new funds for computing power and for attracting talent. With only 10 employees at the moment, the company intends to build a larger team of researchers across locations in Palo Alto, California, and Tel Aviv, Reuters reported.
While SSI did not formally disclose its valuation, sources told Reuters it was valued at $5 billion, a stunningly large sum just three months after the company’s founding, with no publicly known products yet developed.
Son of OpenAI
Much like Anthropic before it, SSI formed as a breakaway company founded in part by former OpenAI employees. Sutskever, 37, cofounded SSI with Daniel Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.
Sutskever’s departure from OpenAI followed a rocky period at the company that reportedly included disenchantment over OpenAI management failing to dedicate proper resources to his “superalignment” research team, as well as Sutskever’s involvement in the brief ouster of OpenAI CEO Sam Altman last November. After leaving OpenAI in May, Sutskever said his new company would “pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.”
Superintelligence, as we have noted previously, is a nebulous term for a hypothetical technology that would far surpass human intelligence. There is no guarantee that Sutskever will succeed in his mission (and skeptics abound), but the star power he gained from his academic bona fides and from being a key cofounder of OpenAI has made rapid fundraising for his new company relatively easy.
The company plans to spend a couple of years on research and development before bringing a product to market, and its self-proclaimed focus on “AI safety” stems from the belief that powerful AI systems capable of posing existential risks to humanity are on the horizon.
The “AI safety” topic has sparked debate within the tech industry, with companies and AI experts taking different stances on proposed safety regulations, including California’s controversial SB-1047, which may soon become law. Since the topic of existential risk from AI is still hypothetical and frequently guided by personal opinion rather than science, that particular controversy is unlikely to die down anytime soon.