In a recent interview on “The Ted AI Show” podcast, former OpenAI board member Helen Toner said the OpenAI board was unaware of ChatGPT’s existence until members saw it on Twitter. She also revealed details about the company’s internal dynamics and the events surrounding CEO Sam Altman’s surprise firing and subsequent rehiring last November.
OpenAI released ChatGPT publicly on November 30, 2022, and its massive surprise popularity set OpenAI on a new trajectory, shifting its focus from being an AI research lab toward being a more consumer-facing tech company.
“When ChatGPT came out in November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter,” Toner said on the podcast.
Toner’s revelation about ChatGPT appears to highlight a significant disconnect between the board and the company’s day-to-day operations, shedding new light on accusations that Altman was “not consistently candid in his communications with the board” upon his firing on November 17, 2023. Altman and OpenAI’s new board later said that the CEO’s mishandling of attempts to remove Toner from the OpenAI board following her criticism of the company’s release of ChatGPT played a key role in Altman’s firing.
“Sam didn’t inform the board that he owned the OpenAI startup fund, even though he was constantly claiming to be an independent board member with no financial interest in the company on multiple occasions,” she said. “He gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.”
Toner also shed light on the circumstances that led to Altman’s temporary ousting. She mentioned that two OpenAI executives had reported instances of “psychological abuse” to the board, providing screenshots and documentation to support their claims. The allegations made by the former OpenAI executives, as relayed by Toner, suggest that Altman’s leadership style fostered a “toxic atmosphere” at the company:
In October of last year, we had this series of conversations with these executives, where the two of them suddenly started telling us about their own experiences with Sam, which they hadn’t felt comfortable sharing before, but telling us how they couldn’t trust him, about the toxic atmosphere it was creating. They used the phrase “psychological abuse,” telling us they didn’t think he was the right person to lead the company, telling us they had no belief that he could or would change, there’s no point in giving him feedback, no point in trying to work through these issues.
Despite the board’s decision to fire Altman, Altman began the process of returning to his role just five days later, after a letter to the board signed by over 700 OpenAI employees. Toner attributed this swift comeback to employees who believed the company would collapse without him, saying they also feared retaliation from Altman if they didn’t support his return.
“The second thing I think is really important to know, that has really gone underreported, is how scared people are to go against Sam,” Toner said. “They experienced him retaliate against people, retaliating… for past instances of being critical.”
“They were really afraid of what might happen to them,” she continued. “So some employees started to say, wait, I don’t want the company to fall apart. Like, let’s bring back Sam. It was very hard for those people who had had terrible experiences to actually say that… if Sam did stay in power, as he ultimately did, that would make their lives miserable.”
In response to Toner’s statements, current OpenAI board chair Bret Taylor provided a statement to the podcast: “We are disappointed that Ms. Toner continues to revisit these issues… The review concluded that the prior board’s decision was not based on concerns regarding product safety or security, the pace of development, OpenAI’s finances, or its statements to investors, customers, or business partners.”
Even given that review, Toner’s main argument is that OpenAI hasn’t been able to police itself despite claims to the contrary. “The OpenAI saga shows that trying to do good and regulating yourself isn’t enough,” she said.