AI in ASIA

AI Slop: Low-Quality Research Choking AI Progress

AI research drowns in low-quality papers as Kevin Zhu claims 113 publications in one year, exposing how AI slop is choking academic progress.

Intelligence Desk · 8 min read

AI Snapshot

The TL;DR: what matters, fast.

NeurIPS 2024 received 21,500+ paper submissions, more than double 2020's count

Researcher Kevin Zhu claims authorship on 113 AI papers in a single year

Peer-review systems overwhelmed by AI-generated content and fabricated sources


Academic Publishing Faces Crisis as AI-Generated Papers Flood Top Conferences

The artificial intelligence research community is drowning in its own output. What was once a manageable field of scholarly pursuit has become an overwhelming deluge of questionable papers, many apparently churned out with the help of the very AI tools researchers are studying.

NeurIPS, one of AI's most prestigious conferences, received over 21,500 paper submissions this year compared to fewer than 10,000 in 2020. The explosion isn't driven by breakthrough discoveries but by what critics call "AI slop": low-quality research that's choking genuine innovation.

Professor Hany Farid at UC Berkeley describes the situation as an absolute "frenzy." He's now advising his students to avoid AI research entirely because the field has become virtually unnavigable.

The Kevin Zhu Controversy Exposes Academic Gaming

The debate reached a boiling point when Farid highlighted researcher Kevin Zhu, who claims contributions to 113 AI papers in a single year. Zhu, a recent UC Berkeley graduate, runs Algoverse, a programme charging students £3,325 for 12-week courses that often result in co-authorships on conference submissions.

Eighty-nine of Zhu's papers are being presented at NeurIPS this week alone. Farid called the output a "disaster," likening it to "vibe coding": the practice of using AI tools to generate software quickly, with little meaningful human contribution.

"I can't carefully read 100 technical papers a year, so imagine my surprise when I learned about one author who claims to have participated in the research and writing of over 100 technical papers in a year," Professor Hany Farid, UC Berkeley, told The Guardian.

The situation mirrors broader concerns about AI slop eroding social media experiences, where AI-generated content overwhelms genuine human creativity.

By The Numbers

  • 21,500+ papers submitted to NeurIPS 2024, up from under 10,000 in 2020
  • 113 AI papers claimed by a single researcher in one year
  • 89 papers from one author presented at a single conference
  • 74% of workers experience negative consequences from low-quality AI outputs
  • 58% of workers spend three or more hours weekly correcting AI-generated work

Quality Control Breakdown Threatens Academic Integrity

The peer-review process, academia's traditional quality gatekeeper, is buckling under pressure. PhD students are being recruited to help review submissions, while AI-generated citations and fabricated sources slip through even respected journals.

Some authors are embedding hidden text to manipulate AI-powered review systems, creating what Farid describes as a "digital arms race." The problem extends beyond volume: it's about the fundamental reliability of research foundations.

Year | NeurIPS Submissions | Review Burden          | Quality Concerns
2020 | <10,000             | Manageable             | Standard peer review
2024 | 21,500+             | PhD students recruited | AI-generated papers proliferating

This quality crisis affects regions investing heavily in AI research infrastructure, including Singapore's $1 billion AI research commitment and Hong Kong's new AI research institute.

"You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature. Your signal-to-noise ratio is basically one," Professor Hany Farid, UC Berkeley, explained to The Guardian.

The Automated Research Assembly Line

When questioned about AI usage, Zhu diplomatically stated his teams used "standard productivity tools such as reference managers, spellcheck, and sometimes language models for copy-editing or improving clarity." This careful language reflects how AI tools have become embedded in research workflows.

The challenges mirror those seen in AI-assisted peer reviews across Asia's research landscape, where the line between assistance and automation continues blurring.

Key indicators of AI-generated academic content include:

  • Unusually high publication volumes from individual researchers
  • Generic language patterns and repetitive phrasing across papers
  • Citations to non-existent or fabricated sources
  • Multiple co-authors with minimal subject matter expertise
  • Rapid submission timelines inconsistent with thorough research
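The warning signs above lend themselves to a simple screening heuristic. The sketch below is purely illustrative: the field names and thresholds are invented for this example, and any real screening tool would need verified author metadata and a citation database to check sources against.

```python
# Illustrative only: a toy red-flag counter based on the warning signs
# listed above. All field names and thresholds are invented for this sketch.

def slop_risk_score(paper):
    """Return a count (0-4) of red flags for a paper metadata dict."""
    flags = 0
    # Unusually high publication volume from any individual author
    if max(paper.get("author_papers_this_year", []), default=0) > 30:
        flags += 1
    # Citations that cannot be resolved to real sources
    if paper.get("unresolvable_citations", 0) > 0:
        flags += 1
    # Several co-authors with no prior publications in the field
    if paper.get("coauthors_without_field_publications", 0) >= 3:
        flags += 1
    # Submission turnaround inconsistent with thorough research
    if paper.get("days_from_start_to_submission", 365) < 30:
        flags += 1
    return flags

example = {
    "author_papers_this_year": [113, 2],
    "unresolvable_citations": 4,
    "coauthors_without_field_publications": 5,
    "days_from_start_to_submission": 21,
}
print(slop_risk_score(example))  # prints 4
```

A score like this can only flag papers for closer human review; none of the individual signals is proof of misconduct on its own.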

The situation has broader implications for scientific research automation, raising questions about where legitimate AI assistance ends and problematic automation begins.

What constitutes AI slop in academic research?

AI slop refers to low-quality papers that appear mass-produced using large language models, often featuring fabricated citations, minimal original research, and authors who couldn't have meaningfully contributed to the volume of work they claim.

How can readers identify potentially AI-generated papers?

Warning signs include abnormally high publication rates from single authors, generic writing patterns, non-existent citations, and co-authors with little relevant expertise in the paper's subject matter.

Why are major conferences accepting these papers?

The sheer volume of submissions has overwhelmed traditional peer-review systems. Conferences are recruiting PhD students as reviewers and struggling to maintain quality standards while processing exponentially more papers.

What impact does this have on legitimate researchers?

Genuine breakthrough research gets lost in the noise, making it harder for quality work to gain recognition. Some established researchers are advising students to avoid AI research entirely due to the chaotic state of the field.

How might the academic community address this crisis?

Solutions could include stricter submission limits per author, enhanced AI detection tools, reformed peer-review processes, and clearer guidelines on acceptable AI usage in academic writing and research.

The AIinASIA View: The AI research crisis represents a critical inflection point for academic integrity. While legitimate AI tools can enhance research productivity, we're witnessing systematic gaming of academic publishing systems. Asia's substantial investments in AI research infrastructure risk being undermined if quality standards collapse. The region's emerging AI governance frameworks must address not just AI deployment but also the integrity of the research pipeline itself. Without decisive action, we'll see the field's credibility erode just as Asia positions itself as a global AI leader.

This academic crisis threatens to undermine the very foundation of AI development just as the technology reaches critical mass. If researchers can't trust the literature that informs their work, how can we build reliable AI systems for society?

What's your experience with AI-generated content in your field? Have you noticed quality declining in areas you follow? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (5)

Budi Santoso@budi_s
2 January 2026

all this noise about 100 papers a year. for us, even just finding good, reliable local datasets to train models for our specific market needs is a struggle. the quality of a paper is less important than if it actually solves a problem for the unbanked, which these academic games rarely do.

N.@anon_reader
26 December 2025

This parallels similar observations in other intelligence-gathering domains. Not limited to academia.

Charlotte Davies@charlotted
24 December 2025

Professor Farid's concerns about the sheer volume of papers are valid, but the focus on Kevin Zhu as an individual seems a bit misplaced. This isn't about one person's output; it's about systemic pressures in academic publishing and the incentives pushing for quantity over quality. We should be looking at how regulatory frameworks, perhaps similar to what the UK AI Safety Institute considers for model evaluations, could be applied to research dissemination to ensure rigour.

Somchai Wongsa@somchaiw
18 December 2025

The situation with Kevin Zhu's Algoverse program and its output of co-authored papers raises concerns regarding academic integrity. This rapid generation of papers, even by students, could indeed complicate efforts to establish clear standards for AI R&D within ASEAN digital frameworks. We need to ensure quality benchmarks.

Ahmad Razak@ahmadrazak
14 December 2025

This "AI slop" issue, particularly the Kevin Zhu case, makes one consider the implications for national AI strategies. How do we ensure quality control and prevent dilution for initiatives like Malaysia's AI roadmap, for example?
