OpenAI training its next major AI model, forms new safety committee


On Monday, OpenAI announced the formation of a new "Safety and Security Committee" to oversee risk management for its projects and operations. The announcement comes as the company says it has "recently begun" training its next frontier model, which it expects to bring the company closer to its goal of achieving artificial general intelligence (AGI), though some critics say AGI is farther off than we might think. It also comes as a response to two weeks of terrible press for the company.

Whether the aforementioned new frontier model is intended to be GPT-5 or a step beyond that is currently unknown. In the AI industry, "frontier model" is a term for a new AI system designed to push the boundaries of current capabilities. And "AGI" refers to a hypothetical AI system with human-level abilities to perform novel, general tasks beyond its training data (unlike narrow AI, which is trained for specific tasks).

Meanwhile, the new Safety and Security Committee, led by OpenAI directors Bret Taylor (chair), Adam D'Angelo, Nicole Seligman, and Sam Altman (CEO), will be responsible for making recommendations about AI safety to the full company board of directors. In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update related to alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

OpenAI says the committee's first task will be to evaluate and further develop these processes and safeguards over the next 90 days. At the end of this period, the committee will share its recommendations with the full board, and OpenAI will publicly share an update on adopted recommendations.

OpenAI says that a number of technical and policy experts, including Aleksander Madry (head of preparedness), Lilian Weng (head of safety systems), John Schulman (head of alignment science), Matt Knight (head of security), and Jakub Pachocki (chief scientist), will also serve on its new committee.

The announcement is notable in a few ways. First, it's a response to the negative press that came from OpenAI Superalignment team members Ilya Sutskever and Jan Leike resigning two weeks ago. That team was tasked with "steer[ing] and control[ling] AI systems much smarter than us," and their departure has led to criticism from some within the AI community (and Leike himself) that OpenAI lacks a commitment to developing highly capable AI safely. Other critics, like Meta Chief AI Scientist Yann LeCun, think the company is nowhere near developing AGI, so the concern over a lack of safety for superintelligent AI may be overblown.

Second, there have been persistent rumors that progress in large language models (LLMs) has recently plateaued around capabilities similar to GPT-4. Two major competing models, Anthropic's Claude Opus and Google's Gemini 1.5 Pro, are roughly equivalent to the GPT-4 family in capability despite every competitive incentive to surpass it. And recently, when many expected OpenAI to release a new AI model that would clearly surpass GPT-4 Turbo, it instead released GPT-4o, which is roughly equivalent in ability but faster. During that launch, the company leaned on a flashy new conversational interface rather than a major under-the-hood upgrade.

We've previously reported on a rumor of GPT-5 coming this summer, but with this recent announcement, it seems the rumors may have been referring to GPT-4o instead. It's quite possible that OpenAI is nowhere near releasing a model that can significantly surpass GPT-4, but with the company staying quiet on the details, we'll have to wait and see.
