
AI in ASIA

AI Slop: Low-Quality Research Choking AI Progress

AI research papers are 'slop', experts claim. Is the field undermining itself with LLM-generated content? Discover why genuine breakthroughs are getting lost.

Anonymous · 5 min read

AI Snapshot

The TL;DR: what matters, fast.

The AI research field is experiencing an influx of low-quality academic papers, many generated with AI assistance, making it difficult for significant work to gain recognition.

Professor Hany Farid highlights the overwhelming volume of AI papers and advises students against entering the field due to the current chaotic environment.

A researcher named Kevin Zhu co-authors over 100 AI papers annually, often with students from his for-profit program, raising concerns about publication ethics.

Who should pay attention: AI researchers | Academics | AI ethicists | Funding bodies

What changes next: Debate is likely to intensify regarding academic standards and research integrity.

Doesn't it feel like AI is eating itself sometimes? We're seeing a really bizarre situation where the very field of artificial intelligence research is being swamped and, frankly, undermined by a flood of academic papers – many of them apparently churned out with the help of large language models. It's making it incredibly tough for genuinely good, groundbreaking work to get noticed.

The AI Paper Avalanche

Picture this: AI has become massively popular, attracting loads of researchers and, inevitably, some opportunists. These folks are seemingly trying to fast-track their academic careers by pumping out dozens, sometimes even hundreds, of papers a year. It's giving the whole academic pursuit a bit of a bad name, and trust me, it's not helping anyone trying to do proper research.

Professor Hany Farid, a computer science expert at UC Berkeley, told The Guardian it's an absolute "frenzy." He's now advising his students not to go into AI research because it's just such a mess. "You can't keep up, you can't publish, you can't do good work, you can't be thoughtful," he lamented. That's a pretty stark warning, isn't it?

The Case of Kevin Zhu

Farid really stirred the pot when he highlighted the output of a researcher called Kevin Zhu. Zhu apparently claims to have contributed to 113 AI papers in a single year. Farid, quite understandably, questioned this on LinkedIn, pointing out, "I can't carefully read 100 technical papers a year, so imagine my surprise when I learned about one author who claims to have participated in the research and writing of over 100 technical papers in a year."

Now, Zhu, who's a recent computer science graduate from UC Berkeley (where Farid teaches, ironically), runs a programme called Algoverse. It's aimed at high school and university students, and they pay a decent chunk of change, £3,325, for a 12-week online course. The kicker? Many of these students end up as co-authors on Zhu's papers, and they're expected to submit their work to big AI conferences.

Conferences Under Siege

Take NeurIPS, for instance. It's one of the top-tier conferences in AI, a field that's gone from obscure to super high-profile with massive investment. In 2020, they received fewer than 10,000 papers. This year? Over 21,500! That's a huge jump, and it's happening across other major AI conferences too. They're so overwhelmed that PhD students are being roped in to help review the sheer volume of submissions.

And who's contributing to this deluge? People like Zhu. Apparently, 89 of his papers are being presented at NeurIPS this week. Farid didn't mince words, calling Zhu's papers a "disaster" and suggesting he "could not have possibly meaningfully contributed" to them. He even reached for the term "vibe coding", recent slang for using AI tools to knock out software quickly without much scrutiny, to describe the attitude. It really highlights the haphazard approach some are taking.

The AI's Role in Academia

When asked if AI was used in his papers, Zhu didn't confirm or deny it directly. He simply said his teams used "standard productivity tools such as reference managers, spellcheck, and sometimes language models for copy-editing or improving clarity." That's a rather diplomatic answer for something that's causing such a stir.

The role of AI in academic research has been a hot topic since tools like ChatGPT first appeared. We've heard stories of AI hallucinating citations and inventing sources, which can sadly slip through the peer-review process, even in respected journals. Remember the peer-reviewed paper that featured an AI-generated diagram of a mouse with ridiculously oversized genitalia? It makes you wonder about the quality control, doesn't it? Some clever authors are even embedding hidden text to trick AI-powered reviewers into giving positive assessments. It's a bit like a digital arms race.

What Does This Mean for the Future?

What's really worrying is how AI research itself is being damaged by the very technology it's studying. How long can the field sustain this? And what does it mean for the next generation of AI scientists if genuine, novel research gets lost in a sea of AI-generated studies with made-up sources?

Even an experienced hand like Professor Farid admits it's now almost impossible to keep up with what's happening in AI.

"You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature," he told The Guardian. "Your signal-to-noise ratio is basically one. I can barely go to these conferences and figure out what the hell is going on."

This situation raises important questions about the integrity of academic publishing and the future of AI development. We're already talking about the ethical challenges of AI, like those explored in The Dark Side of 'Learning' via AI? and the need for careful governance, as discussed in ASEAN: Regional AI Governance Overview. If the very foundations of AI research are crumbling under the weight of AI-generated content, it's a problem we need to address urgently. It's a far cry from the helpful applications we see in things like Google unveils new AI features for Android or creating 10 AI Prompts to Create Eye-Catching YouTube Thumbnails.

We need to ensure quality doesn't get lost in the pursuit of quantity.




Latest Comments (5)

Budi Santoso@budi_s
2 January 2026

all this noise about 100 papers a year. for us, even just finding good, reliable local datasets to train models for our specific market needs is a struggle. the quality of a paper is less important than if it actually solves a problem for the unbanked, which these academic games rarely do.

N.@anon_reader
26 December 2025

This parallels similar observations in other intelligence-gathering domains. Not limited to academia.

Charlotte Davies@charlotted
24 December 2025

Professor Farid's concerns about the sheer volume of papers are valid, but the focus on Kevin Zhu as an individual seems a bit misplaced. This isn't about one person's output; it's about systemic pressures in academic publishing and the incentives pushing for quantity over quality. We should be looking at how regulatory frameworks, perhaps similar to what the UK AI Safety Institute considers for model evaluations, could be applied to research dissemination to ensure rigour.

Somchai Wongsa@somchaiw
18 December 2025

The situation with Kevin Zhu's Algoverse program and its output of co-authored papers raises concerns regarding academic integrity. This rapid generation of papers, even by students, could indeed complicate efforts to establish clear standards for AI R&D within ASEAN digital frameworks. We need to ensure quality benchmarks.

Ahmad Razak@ahmadrazak
14 December 2025

This "AI slop" issue, particularly the Kevin Zhu case, makes one consider the implications for national AI strategies. How do we ensure quality control and prevent dilution for initiatives like Malaysia's AI roadmap, for example?
