Generative AI has undeniably reshaped content creation over the last three years, and nowhere is its influence more pronounced than in writing. Large Language Models (LLMs) like those powering ChatGPT can now generate sophisticated text, leading to a surge in AI-assisted output. However, this proliferation has also introduced a great deal of "AI slop": low-quality, AI-generated content produced with minimal human oversight.
While the implications for education, work, and culture are frequently discussed, the impact on scientific writing remains a critical question: does AI genuinely enhance academic output, or does it simply contribute to scientific "slop"? A recent study by researchers from UC Berkeley and Cornell University, published in Science, suggests the latter might be winning.
The Productivity Paradox in Academia
The study analysed over a million preprint articles posted publicly between 2018 and 2024 to gauge AI's effect on academic productivity, manuscript quality, and literature diversity. The researchers measured productivity by the number of preprints an author produced, and quality by whether an article was eventually published in a peer-reviewed journal.
The findings were striking. After adopting AI, authors saw a significant increase in preprint output, ranging from 36.2% to 59.8% per month depending on the platform. The boost was most pronounced among non-native English speakers, particularly Asian authors, who saw increases of 43% to 89.3%. For authors from English-speaking institutions with typically "Caucasian" names, the increase was more modest, between 23.7% and 46.2%. This suggests AI has become a valuable tool for non-native speakers looking to refine their written English, bridging language gaps in much the same way as the translation tools covered in ChatGPT Translate Launches to compete with Google Translate.
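The study's own pipeline is far more involved, but the core productivity measure, preprints per author per month before and after AI adoption, is simple enough to sketch. Below is a minimal Python illustration with toy data; the column names, adoption date, and figures are hypothetical and not taken from the paper.

```python
# Hypothetical sketch: percentage change in an author's monthly preprint
# output before vs. after an assumed AI-adoption date. Data and column
# names are illustrative only, not drawn from the study.
import pandas as pd

preprints = pd.DataFrame({
    "author_id": ["a1"] * 6,
    "month": pd.to_datetime(
        ["2022-01", "2022-02", "2022-03", "2023-01", "2023-02", "2023-03"]
    ),
    "n_preprints": [1, 1, 2, 2, 3, 3],
})
adoption_date = pd.Timestamp("2022-12-01")  # assumed adoption month

before = preprints.loc[preprints["month"] < adoption_date, "n_preprints"].mean()
after = preprints.loc[preprints["month"] >= adoption_date, "n_preprints"].mean()
print(f"Monthly output change: {100 * (after - before) / before:.1f}%")
```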
Quality Concerns and Language Complexity
Beyond sheer volume, the study also delved into article quality. It observed that AI-assisted articles tended to use more complex language. However, here's where the paradox emerges: for articles written without AI, greater language complexity correlated with a higher likelihood of publication. This implies that sophisticated, high-quality writing is generally perceived as having greater scientific merit.
Conversely, for articles with AI support, this relationship inverted. The more complex the AI-generated language, the less likely the article was to be published. This crucial finding suggests that AI-generated linguistic complexity might, in some cases, be used to mask weaker scholarly contributions, leading to what some might call "scientific AI slop". This raises important questions about how we think with AI, not just ask it questions.
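The paper's exact complexity metric isn't described in this article, but the underlying comparison, a language-complexity score set against eventual publication, can be sketched with off-the-shelf tools. The snippet below uses Flesch reading ease as a stand-in complexity measure and toy abstracts; none of it reflects the study's actual data or method.

```python
# Hypothetical sketch: relate a crude language-complexity score to a
# binary publication outcome. Flesch reading ease is a stand-in; the
# study's real complexity measure is not specified in this article.
import textstat
from scipy.stats import pearsonr

abstracts = [
    "We propose a simple method and test it on two datasets.",
    "Herein we elucidate a multifaceted paradigm leveraging heterogeneous modalities.",
    "Our results show the model improves accuracy by five percent.",
    "This investigation interrogates the epistemological ramifications of automation.",
]
published = [1, 0, 1, 0]  # 1 = later appeared in a peer-reviewed journal

# Lower Flesch scores mean harder-to-read text, so negate to get "complexity".
complexity = [-textstat.flesch_reading_ease(a) for a in abstracts]

r, p = pearsonr(complexity, published)
print(f"complexity vs. publication: r={r:.2f}, p={p:.2f}")
```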
AI's Influence on Information Discovery
The research also explored how AI impacts the diversity of academic sources. By comparing article downloads from Google and Microsoft search platforms, particularly after Bing's integration of its AI-powered Bing Chat feature in February 2023, the researchers found an interesting divergence.
Bing users were exposed to a wider variety of sources and to more recent publications than Google users. This is likely due to retrieval-augmented generation (RAG), the technique Bing Chat uses to feed retrieved search results into the model's prompt before it generates an answer. The finding alleviates earlier concerns that AI search might predominantly recommend older, widely cited sources, and instead showcases its potential to broaden researchers' informational horizons. That capability could be particularly useful for anyone trying to keep abreast of the latest developments, much as daily news roundups like 3 Before 9: January 26, 2026 offer a quick overview of current events.
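The article only names the technique, so here is a minimal, hypothetical sketch of the RAG pattern it describes: retrieve sources for a query, then prompt a model with them. The `web_search` and `generate` callables are placeholders, not Bing's actual API.

```python
# Minimal, hypothetical RAG sketch: retrieve documents for a query,
# then include them in the prompt given to a language model.
# `web_search` and `generate` are placeholders, not a real API.
from typing import Callable

def rag_answer(
    query: str,
    web_search: Callable[[str, int], list[str]],
    generate: Callable[[str], str],
    top_k: int = 5,
) -> str:
    # 1. Retrieval: pull the top-k search snippets for the query.
    snippets = web_search(query, top_k)
    context = "\n".join(f"- {s}" for s in snippets)
    # 2. Augmented generation: prompt the model with the retrieved context.
    prompt = (
        "Answer the question using only the sources below, and cite them.\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)
```

Because the retrieved snippets come from a live search index rather than the model's training data, a RAG-backed assistant can surface newer and more varied sources, which is consistent with the divergence the researchers observed.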
Navigating the Future of Academic Publishing
The findings highlight AI's entrenched role in scientific writing, especially for non-native English speakers. With AI becoming integrated into ubiquitous applications like word processors and email, its use will soon be almost unavoidable.
Crucially, AI is challenging the long-held notion that complex, high-quality language is a definitive indicator of scholarly merit. Relying solely on linguistic quality for quick article screening is becoming increasingly unreliable, which calls for more robust, in-depth evaluation of methodology and contributions during peer review. The academic community may need to consider new tools, perhaps even AI-powered review systems like those being developed by researchers such as Andrew Ng at Stanford, to manage the ever-increasing volume of submissions. Vetting AI usage proactively is becoming just as critical in business, as detailed in The AI Vendor Vetting Checklist: What Asian businesses should check before buying AI in 2026.
The full study is available in Science.
What's your take on AI's impact on academic integrity and productivity? Share your thoughts in the comments below.