Since the launch of its newest AI language model, GPT-4o, OpenAI has found itself on the defensive over the past week due to a string of bad news, rumors, and mockery circulating on traditional and social media. The negative attention is potentially a sign that OpenAI has entered a new stage of public visibility, and is more prominently receiving pushback to its AI approach beyond tech pundits and government regulators.
OpenAI’s rough week started last Monday when the company previewed a flirty AI assistant with a voice seemingly inspired by Scarlett Johansson from the 2013 film Her. OpenAI CEO Sam Altman alluded to the film himself on X just before the event, and we had previously made that comparison with an earlier voice interface for ChatGPT that launched in September 2023.
While that September update included a voice called “Sky” that some have said sounds like Johansson, it was GPT-4o’s seemingly lifelike new conversational interface, complete with laughing and emotionally charged tonal shifts, that led to a widely circulated Daily Show segment ridiculing the demo for its perceived flirty nature. Next, a Saturday Night Live joke reinforced an implied connection to Johansson’s voice.
That must have spooked OpenAI (or perhaps the company heard from Johansson’s reps; we don’t know), because the next day, OpenAI announced it was pausing use of the “Sky” voice in ChatGPT. The company specifically mentioned Sky in a tweet and addressed Johansson defensively in its blog post: “We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice—Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote.
Superalignment team implodes
The AI research company’s PR woes continued on Tuesday with the high-profile resignations of two key safety researchers: Ilya Sutskever and Jan Leike, who led the “Superalignment” team focused on ensuring that hypothetical, currently non-existent advanced AI systems don’t pose risks to humanity. Following his departure, Leike took to social media to accuse OpenAI of prioritizing “shiny products” over essential safety research.
In a joint statement posted on X, Altman and OpenAI President Greg Brockman addressed Leike’s criticisms, emphasizing their gratitude for his contributions and outlining the company’s strategy for “responsible” AI development. In a separate, earlier post, Altman acknowledged that “we have a lot more to do” regarding OpenAI’s alignment research and safety culture.
Meanwhile, critics like Meta’s Yann LeCun maintained the drama was much ado about nothing. Responding to a tweet where Leike wrote, “we urgently need to figure out how to steer and control AI systems much smarter than us,” LeCun replied, “It seems to me that before ‘urgently figuring out how to control AI systems much smarter than us’ we need to have the beginning of a hint of a design for a system smarter than a house cat.”
LeCun continued: “It’s as if someone had said in 1925 ‘we urgently need to figure out how to control aircrafts [sic] that can transport hundreds of passengers at near the speed of sound over the oceans.’ It would have been difficult to make long-haul passenger jets safe before the turbojet was invented and before any aircraft had crossed the Atlantic non-stop. Yet, we can now fly halfway around the world on twin-engine jets in complete safety. It didn’t require some sort of magical recipe for safety. It took decades of careful engineering and iterative refinements.”