"While the tone and style of ChatGPT summaries were often a good match for human-authored content, "concerns about the factual accuracy in LLM-authored content" were prevalent, the journalists wrote. Even using ChatGPT summaries as a "starting point" for human editing "would require just as much, if not more, effort as drafting summaries themselves from scratch" due to the need for "extensive fact-checking," they added.
These results might not be too surprising given previous studies that have shown AI search engines citing incorrect news sources a full 60 percent of the time. Still, the specific weaknesses are all the more glaring when discussing scientific papers, where accuracy and clarity of communication are paramount.
In the end, the AAAS journalists concluded that ChatGPT "does not meet the style and standards for briefs in the SciPak press package." But the white paper did allow that it might be worth running the experiment again if ChatGPT "experiences a major update." For what it's worth, GPT-5 was introduced to the public in August.
https://arstechnica.com/ai/2025/09/science-journalists-find-chatgpt-is-bad-at-summarizing-scientific-papers/
#AI #GenerativeAI #Science #ChatGPT #LLMs #Chatbots #Journalism #Media #News #ScienceJournalism