Benj Edwards / Getty Images
On Tuesday, researchers from Stanford University and the University of California, Berkeley, released a research paper that purports to show changes in GPT-4’s outputs over time. The paper fuels a common but unproven belief that the AI language model has grown worse at coding and composition tasks over the past few months. Some experts aren’t convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.
In a study titled “How Is ChatGPT’s Behavior Changing over Time?” listed on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI’s large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4’s ability to identify prime numbers reportedly plunged from 97.6 percent accuracy in March to just 2.4 percent in June. Curiously, GPT-3.5 showed improved performance on the same task over the same period.
Chen/Zaharia/Zou
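Longitudinal comparisons like this depend on querying dated model snapshots rather than the bare “gpt-4” alias, which OpenAI updates over time. Below is a minimal sketch of that approach using the current OpenAI Python client; the prompt wording and the specific snapshot names are illustrative assumptions, not the authors’ exact test harness:

```python
# Minimal sketch (not the paper's actual harness) of probing pinned
# GPT-4 snapshots on prime identification via the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_is_prime(model: str, n: int) -> str:
    """Ask a specific model snapshot whether n is prime; return its raw answer."""
    response = client.chat.completions.create(
        model=model,          # a dated snapshot, e.g. "gpt-4-0613"
        temperature=0,        # reduce run-to-run variance
        messages=[{
            "role": "user",
            # Illustrative prompt wording, assumed for this sketch
            "content": f"Is {n} a prime number? Answer Yes or No.",
        }],
    )
    return response.choices[0].message.content.strip()

# Comparing the March and June 2023 snapshots side by side is what makes
# a behavioral-drift claim testable at all.
for snapshot in ("gpt-4-0314", "gpt-4-0613"):
    print(snapshot, ask_is_prime(snapshot, 17077))
```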
This study comes on the heels of people frequently complaining that GPT-4 has subjectively declined in performance over the past few months. Popular theories about why include OpenAI “distilling” models to reduce their computational overhead in a quest to speed up output and save GPU resources, fine-tuning (additional training) to reduce harmful outputs that may have unintended effects, and a smattering of unsupported conspiracy theories, such as OpenAI reducing GPT-4’s coding capabilities so more people will pay for GitHub Copilot.
Meanwhile, OpenAI has consistently denied any claims that GPT-4 has decreased in capability. As recently as last Thursday, OpenAI VP of Product Peter Welinder tweeted, “No, we haven’t made GPT-4 dumber. Quite the opposite: we make each new version smarter than the previous one. Current hypothesis: When you use it more heavily, you start noticing issues you didn’t see before.”
While this new study may seem like a smoking gun confirming the hunches of GPT-4’s critics, others say not so fast. Princeton computer science professor Arvind Narayanan thinks its findings don’t conclusively prove a decline in GPT-4’s performance and are potentially consistent with fine-tuning adjustments made by OpenAI. For example, in measuring code-generation capabilities, he criticized the study for evaluating whether the code is immediately executable rather than whether it is correct.
“The change they report is that the newer GPT-4 adds non-code text to its output. They don’t evaluate the correctness of the code (strange),” he tweeted. “They merely check if the code is directly executable. So the newer model’s attempt to be more helpful counted against it.”
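To make that criticism concrete: a check that only asks whether the raw model output parses as code will fail an answer wrapped in markdown fences or preceded by explanation, even when the code inside is flawless. Here is a minimal sketch of that gap, with an assumed helper name and example strings that are illustrative rather than drawn from the paper:

```python
# Sketch of the evaluation gap Narayanan describes: "directly executable"
# is a stricter and different question from "correct."
import ast

def directly_executable(output: str) -> bool:
    """The strict check: does the raw model output parse as Python?"""
    try:
        ast.parse(output)
        return True
    except SyntaxError:
        return False

# A correct function, but wrapped in a markdown code fence as chat
# models often do when trying to be helpful.
fenced_answer = '''```python
def add(a, b):
    return a + b
```'''

print(directly_executable(fenced_answer))  # False: the backticks break parsing

# Stripping the fence first recovers a perfectly valid program.
inner = fenced_answer.strip("`").removeprefix("python\n")
print(directly_executable(inner))          # True
```

Under a metric like this, a model that returns well-formatted, correct code inside a fence scores worse than one that emits bare code, which is the distortion Narayanan says the study didn’t account for.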