Study claims ChatGPT is losing capability, but some experts aren’t convinced

Benj Edwards / Getty Images

On Tuesday, researchers from Stanford University and the University of California, Berkeley published a research paper that purports to show changes in GPT-4's outputs over time. The paper fuels a common but unproven belief that the AI language model has grown worse at coding and composition tasks over the past few months. Some experts aren't convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.

In a study titled "How Is ChatGPT's Behavior Changing over Time?" published on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI's large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4's ability to identify prime numbers reportedly plunged dramatically from an accuracy of 97.6 percent in March to just 2.4 percent in June. Curiously, GPT-3.5 showed improved performance over the same period.

Performance of the March 2023 and June 2023 versions of GPT-4 and GPT-3.5 on four tasks, taken from "How Is ChatGPT's Behavior Changing over Time?" (Chen/Zaharia/Zou)
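For readers curious what such a before-and-after comparison looks like in practice, here is a minimal sketch (not the paper's actual test harness; it assumes the openai Python package and the dated "gpt-4-0314" and "gpt-4-0613" snapshots exposed through OpenAI's API) that sends the same primality question to the March and June 2023 versions of GPT-4:

```python
# Minimal sketch: ask the March and June 2023 GPT-4 API snapshots the same
# primality question and compare their replies. Assumes the openai Python
# package (>= 1.0) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

QUESTION = "Is 17077 a prime number? Think step by step and then answer yes or no."

def ask(model: str, question: str) -> str:
    """Query one dated model snapshot, with temperature 0 for repeatability."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
        temperature=0,
    )
    return response.choices[0].message.content

for snapshot in ("gpt-4-0314", "gpt-4-0613"):  # March vs. June 2023 versions
    print(f"--- {snapshot} ---")
    print(ask(snapshot, QUESTION))
```

Running the same prompt against both dated snapshots is the basic mechanism behind the paper's version-to-version comparisons; the study applies it across many prompts per task and scores the answers.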

This study comes on the heels of people frequently complaining that GPT-4 has subjectively declined in performance over the past few months. Popular theories about why include OpenAI "distilling" models to reduce their computational overhead in a quest to speed up output and save GPU resources, fine-tuning (additional training) to reduce harmful outputs that may have unintended effects, and a smattering of unsupported conspiracy theories, such as OpenAI reducing GPT-4's coding capabilities so more people will pay for GitHub Copilot.

Meanwhile, OpenAI has consistently denied any claims that GPT-4 has decreased in capability. As recently as last Thursday, OpenAI VP of Product Peter Welinder tweeted, "No, we haven't made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn't see before."

While this new study may appear to be a smoking gun that confirms the hunches of GPT-4's critics, others say not so fast. Princeton computer science professor Arvind Narayanan thinks its findings don't conclusively prove a decline in GPT-4's performance and are potentially consistent with fine-tuning adjustments made by OpenAI. For example, in measuring code generation capabilities, he criticized the study for checking whether the code is immediately executable rather than whether it is correct.

"The change they report is that the newer GPT-4 adds non-code text to its output. They don't evaluate the correctness of the code (strange)," he tweeted. "They merely check if the code is directly executable. So the newer model's attempt to be more helpful counted against it."
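To illustrate the distinction, here is a minimal sketch in Python (not the study's evaluation code, and the model reply is hypothetical): a response that wraps a correct primality function in a conversational preamble fails a strict "directly executable" check, even though the code inside is valid once the surrounding prose is stripped.

```python
def is_directly_executable(response: str) -> bool:
    """Run the raw model response as Python, with no cleanup at all."""
    try:
        exec(response, {})
        return True
    except Exception:
        return False

def is_executable_after_cleanup(response: str) -> bool:
    """Drop everything before the first 'def ' line, then try again (crude heuristic)."""
    lines = response.splitlines()
    start = next((i for i, line in enumerate(lines) if line.startswith("def ")), 0)
    try:
        exec("\n".join(lines[start:]), {})
        return True
    except Exception:
        return False

# Hypothetical model reply: correct code, preceded by conversational text.
reply = (
    "Sure! Here is a function that checks whether a number is prime:\n"
    "\n"
    "def is_prime(n):\n"
    "    if n < 2:\n"
    "        return False\n"
    "    return all(n % d for d in range(2, int(n ** 0.5) + 1))\n"
)

print(is_directly_executable(reply))       # False: the chatty preamble is not valid Python
print(is_executable_after_cleanup(reply))  # True: the function itself runs fine
```

Under a strict executability metric, the chatty reply scores zero even though the function it contains is correct, which is the distinction Narayanan is pointing to.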




