

The Rise of AI-Assisted Peer Reviews in Asia's AI and AGI Research

Asia's AI and AGI research community is seeing a rise in AI-assisted peer reviews, raising concerns about transparency and the diversity of feedback.

Intelligence Desk · 3 min read

AI Snapshot

The TL;DR: what matters, fast.

AI-assisted peer reviews are increasingly used by AI and AGI researchers in Asia, raising questions about their implications.

A study analyzing peer reviews in top AI conferences found 6.5% to 16.9% were substantially modified by large language models, with increased AI usage observed closer to submission deadlines.

The researchers advocate for transparency regarding AI assistance and express concern that AI feedback might homogenize reviews, potentially overlooking diverse expert insights.

Who should pay attention: AI researchers | Peer review committees | Academic publishers

What changes next: Debate is likely to intensify regarding the ethical implications of AI in academic review.

Key takeaways: AI researchers in Asia are increasingly using AI-assisted peer reviews; adjective frequency analysis can detect AI-authored content with some reliability; and transparency and diversity of feedback are crucial for the future of AI research.

Introduction

In recent years, artificial intelligence (AI) and artificial general intelligence (AGI) researchers in Asia have begun utilising AI assistance to evaluate the work of their peers. This shift towards AI-assisted peer reviews has sparked discussions about the implications and potential consequences for the research community. For a broader view on the region, you can explore APAC AI in 2026: 4 Trends You Need To Know.

AI Authorship in Peer Reviews

A group of researchers from Stanford University, NEC Labs America, and UC Santa Barbara analysed peer reviews of papers submitted to leading AI conferences, such as ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023. Their findings, published in a paper titled "Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews," reveal an increasing trend of AI involvement in the review process (Liang et al., 2023). This trend highlights a growing reliance on AI in various sectors, similar to how AI & Call Centres: Is The End Nigh? discusses its impact on customer service.

Detecting AI-Authored Content

The difficulty of distinguishing between human- and machine-written text has led to a need for reliable methods to evaluate real-world datasets containing AI-authored content. The researchers found that focusing on adjective usage in a text provides more reliable results than assessing entire documents or sentences.

AI-generated reviews tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. This statistical difference in word usage allowed the researchers to identify reviews where AI assistance was likely used.
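The idea behind this detection signal can be sketched with a naive frequency comparison. This is an illustrative simplification only, not the corpus-level maximum-likelihood method the study actually uses, and the word list below is an assumption for demonstration rather than the researchers' full vocabulary:

```python
import re

# Adjectives the study reports as over-represented in LLM-generated
# reviews; this short list is illustrative, not the study's full set.
FLAGGED = {"commendable", "innovative", "comprehensive", "meticulous", "notable"}

def flagged_rate(text: str) -> float:
    """Occurrences of flagged adjectives per 1,000 words of text."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FLAGGED)
    return 1000.0 * hits / len(words)

review_a = ("The paper presents a commendable and innovative approach "
            "with comprehensive experiments.")
review_b = ("The method is interesting, but the evaluation section "
            "needs more baselines.")

print(flagged_rate(review_a))  # noticeably higher rate
print(flagged_rate(review_b))
```

A single review is far too short for such a rate to be reliable on its own; the study's approach estimates the fraction of AI-modified text across an entire corpus of reviews, where these small per-word statistical differences accumulate.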

Prevalence and Factors Influencing AI Usage

The study concluded that between 6.5% and 16.9% of peer reviews submitted to these conferences could have been substantially modified by large language models (LLMs). The researchers also found a correlation between approaching deadlines and increased AI usage, with a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline. This mirrors broader discussions on executives treading carefully on generative AI adoption.

Transparency and Diversity Concerns

The researchers emphasised that their goal was not to judge the use of AI writing assistance but to encourage transparency within the scientific community. They argued that relying on AI-generated reviews risks depriving researchers of diverse expert feedback and may lead to a homogenisation effect that favours AI model biases over meaningful insights. This concern about ethical implications is also relevant to ongoing discussions in regions like Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means.

Conclusion

As AI-assisted peer reviews become more prevalent in Asia's AI and AGI research landscape, it is crucial to address concerns about transparency and the diversity of feedback. By fostering open discussions and developing reliable detection methods, the research community can ensure the integrity and quality of AI research.

Comment and Share:

How do you think the AI research community should address the increasing use of AI-assisted peer reviews while maintaining transparency and ensuring diverse feedback? Share your thoughts in the comments below!




