Doesn't it feel like AI is eating itself sometimes? We're seeing a really bizarre situation where the very field of artificial intelligence research is being swamped and, frankly, undermined by a flood of academic papers – many of them apparently churned out with the help of large language models. It's making it incredibly tough for genuinely good, groundbreaking work to get noticed.
The AI Paper Avalanche
Picture this: AI has become massively popular, attracting loads of researchers and, inevitably, some opportunists. These folks are seemingly trying to fast-track their academic careers by pumping out dozens, sometimes even hundreds, of papers a year. It's giving the whole academic pursuit a bit of a bad name, and trust me, it's not helping anyone trying to do proper research.
Professor Hany Farid, a computer science expert at UC Berkeley, told The Guardian it's an absolute "frenzy." He's now advising his students not to go into AI research because it's just such a mess. "You can't keep up, you can't publish, you can't do good work, you can't be thoughtful," he lamented. That's a pretty stark warning, isn't it?
The Case of Kevin Zhu
Farid really stirred the pot when he highlighted the output of a researcher called Kevin Zhu. Zhu apparently claims to have contributed to 113 AI papers in a single year. Farid, quite understandably, questioned this on LinkedIn, pointing out, "I can't carefully read 100 technical papers a year, so imagine my surprise when I learned about one author who claims to have participated in the research and writing of over 100 technical papers in a year."
Now, Zhu, who's a recent computer science graduate from UC Berkeley (where Farid teaches, as it happens), runs a programme called Algoverse. It's aimed at high school and university students, who pay a decent chunk of change, £3,325, for a 12-week online course. The kicker? Many of these students end up as co-authors on Zhu's papers, and they're expected to submit their work to big AI conferences.
Conferences Under Siege
Take NeurIPS, for instance. It's one of the top-tier conferences in AI, a field that's gone from relative obscurity to high-profile, heavily funded prominence. In 2020, the conference received fewer than 10,000 papers. This year? Over 21,500. That's a huge jump, and it's happening across other major AI conferences too. Organisers are so overwhelmed that PhD students are being roped in to help review the sheer volume of submissions.
And who's contributing to this deluge? People like Zhu. Apparently, 89 of his papers are being presented at NeurIPS this week. Farid didn't mince words, calling Zhu's papers a "disaster" and suggesting he "could not have possibly meaningfully contributed" to them. He even used the term "vibe coding" to describe the attitude, recent slang for getting AI tools to churn out code quickly with little human scrutiny of the result. It really highlights the haphazard approach some are taking.
The AI's Role in Academia
When asked if AI was used in his papers, Zhu didn't confirm or deny it directly. He simply said his teams used "standard productivity tools such as reference managers, spellcheck, and sometimes language models for copy-editing or improving clarity." That's a rather diplomatic answer for something that's causing such a stir.
The role of AI in academic research has been a hot topic since tools like ChatGPT first appeared. We've heard stories of AI hallucinating citations and inventing sources, which can sadly slip through the peer-review process, even in respected journals. Remember the peer-reviewed paper that featured an AI-generated diagram of a rat with ridiculously oversized genitalia? It makes you wonder about the quality control, doesn't it? Some enterprising authors are even embedding hidden text in their papers to trick AI-powered reviewers into giving positive assessments. It's a bit like a digital arms race.
What Does This Mean for the Future?
What's really worrying is how AI research itself is being damaged by the very technology it's studying. How long can the field sustain this? And what does it mean for the next generation of AI scientists if genuine, novel research gets lost in a sea of AI-generated studies with made-up sources?
Even an experienced hand like Professor Farid admits it's now almost impossible to keep up with what's happening in AI.
"You have no chance, no chance as an average reader to try to understand what is going on in the scientific literature," he told The Guardian^. "Your signal-to-noise ratio is basically one. I can barely go to these conferences and figure out what the hell is going on."
This situation raises important questions about the integrity of academic publishing and the future of AI development. We're already talking about the ethical challenges of AI, like those explored in "The Dark Side of 'Learning' via AI?", and the need for careful governance, as discussed in "ASEAN: Regional AI Governance Overview". If the very foundations of AI research are crumbling under the weight of AI-generated content, it's a problem we need to address urgently. It's a far cry from the helpful applications we see in pieces like "Google unveils new AI features for Android" or "10 AI Prompts to Create Eye-Catching YouTube Thumbnails".
We need to ensure quality doesn't get lost in the pursuit of quantity.