
    The Rise of AI-Assisted Peer Reviews in Asia's AI and AGI Research

    Asia's AI and AGI research community is seeing a rise in AI-assisted peer reviews, raising concerns about transparency and the diversity of feedback.

    Anonymous
    3 min read · 23 March 2024

    AI Snapshot

    The TL;DR: what matters, fast.

    AI-assisted peer reviews are increasingly used by AI and AGI researchers in Asia, raising questions about their implications.

    A study analysing peer reviews at top AI conferences found that 6.5% to 16.9% had been substantially modified by large language models, with AI usage increasing closer to submission deadlines.

    The researchers advocate for transparency regarding AI assistance and express concern that AI feedback might homogenize reviews, potentially overlooking diverse expert insights.

    Who should pay attention: AI researchers | Peer review committees | Academic publishers

    What changes next: Debate is likely to intensify regarding the ethical implications of AI in academic review.

    AI researchers in Asia are increasingly using AI-assisted peer reviews.

    Adjective frequency analysis can detect AI-authored content with some reliability.

    Transparency and diversity of feedback are crucial for the future of AI research.

    Introduction

    In recent years, artificial intelligence (AI) and artificial general intelligence (AGI) researchers in Asia have begun utilising AI assistance to evaluate the work of their peers. This shift towards AI-assisted peer reviews has sparked discussions about the implications and potential consequences for the research community. For a broader view on the region, you can explore APAC AI in 2026: 4 Trends You Need To Know.

    AI Authorship in Peer Reviews

    A group of researchers from Stanford University, NEC Labs America, and UC Santa Barbara analysed peer reviews of papers submitted to leading AI conferences, such as ICLR 2024, NeurIPS 2023, CoRL 2023, and EMNLP 2023. Their findings, published in a paper titled "Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews," reveal an increasing trend of AI involvement in the review process (Liang et al., 2024). This trend highlights a growing reliance on AI in various sectors, similar to how AI & Call Centres: Is The End Nigh? discusses its impact on customer service.

    Detecting AI-Authored Content

    The difficulty of distinguishing between human- and machine-written text has led to a need for reliable methods to evaluate real-world datasets containing AI-authored content. The researchers found that focusing on adjective usage in a text provides more reliable results than assessing entire documents or sentences.


    AI-generated reviews tend to employ adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. This statistical difference in word usage allowed the researchers to identify reviews where AI assistance was likely used.
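
    To make the idea concrete, here is a minimal sketch that compares the rate of such marker adjectives in a review against a human-written baseline. It is an illustration only, assuming a hand-picked adjective list, made-up sample texts, and a simple rate comparison; it is not the study's actual detection pipeline.

    ```python
    # Minimal sketch of adjective-frequency screening. The marker adjective
    # list and sample texts are illustrative assumptions, not study data.
    import re
    from collections import Counter

    MARKER_ADJECTIVES = {"commendable", "innovative", "comprehensive",
                         "meticulous", "notable"}  # illustrative list only

    def marker_rate(text: str) -> float:
        """Return the fraction of tokens that are marker adjectives."""
        tokens = re.findall(r"[a-z']+", text.lower())
        if not tokens:
            return 0.0
        counts = Counter(tokens)
        return sum(counts[w] for w in MARKER_ADJECTIVES) / len(tokens)

    human_reference = "The experiments are solid and the ablations are thorough."
    suspect_review = ("This commendable and innovative paper offers a "
                      "comprehensive and commendable evaluation.")

    # A review whose marker rate far exceeds the human baseline is flagged
    # for closer inspection, not declared AI-written outright.
    print(f"baseline rate: {marker_rate(human_reference):.3f}")
    print(f"suspect rate:  {marker_rate(suspect_review):.3f}")
    ```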

    Prevalence and Factors Influencing AI Usage

    The study concluded that between 6.5% and 16.9% of peer reviews submitted to these conferences could have been substantially modified by large language models (LLMs). The researchers also found a correlation between approaching deadlines and increased AI usage, with a small but consistent increase in apparent LLM usage for reviews submitted three days or less before the deadline. This mirrors broader discussions on executives treading carefully on generative AI adoption.
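
    As a rough illustration of how a corpus-level estimate of this kind can be produced, the toy sketch below models the observed word distribution as a mixture of a human distribution and an LLM distribution, then recovers the mixing fraction by maximum likelihood. The vocabulary, probabilities, and counts are all invented for demonstration; they are not the study's data or its exact method.

    ```python
    # Toy sketch: estimate the fraction alpha of LLM-modified text by modelling
    # the observed corpus as (1 - alpha) * human + alpha * llm word distributions.
    # All numbers below are invented for illustration.
    import numpy as np

    # The probability and count arrays follow the order of `vocab`.
    vocab = ["commendable", "innovative", "comprehensive", "solid", "unclear"]
    p_human = np.array([0.01, 0.03, 0.04, 0.50, 0.42])  # assumed human usage
    p_llm = np.array([0.20, 0.25, 0.25, 0.20, 0.10])    # assumed LLM usage
    observed = np.array([30, 45, 48, 420, 357])          # assumed mixed-corpus counts

    # Grid search over alpha, keeping the value with the highest log-likelihood.
    alphas = np.linspace(0.0, 1.0, 1001)
    log_lik = [np.sum(observed * np.log((1 - a) * p_human + a * p_llm + 1e-12))
               for a in alphas]
    alpha_hat = alphas[int(np.argmax(log_lik))]
    print(f"estimated fraction of LLM-modified reviews: {alpha_hat:.1%}")
    ```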

    Transparency and Diversity Concerns

    The researchers emphasised that their goal was not to judge the use of AI writing assistance but to encourage transparency within the scientific community. They argued that relying on AI-generated feedback risks depriving researchers of diverse input from human experts and may lead to a homogenisation effect that favours AI model biases over meaningful insights. This concern about ethical implications echoes ongoing regional discussions, such as those covered in Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means.

    Conclusion

    As AI-assisted peer reviews become more prevalent in Asia's AI and AGI research landscape, it is crucial to address concerns about transparency and the diversity of feedback. By fostering open discussions and developing reliable detection methods, the research community can ensure the integrity and quality of AI research.

    Comment and Share:

    How do you think the AI research community should address the increasing use of AI-assisted peer reviews while maintaining transparency and ensuring diverse feedback? Share your thoughts in the comments below!



    Latest Comments (5)

    Miguel Santos (@ph_dev_migs)
    17 December 2025

    Interesting read. We've been seeing more of this peer review automation trickle down to some tech journals here in the Philippines, especially with our own burgeoning AI sector. The transparency bit is key, innit? Always a challenge to ensure genuine, unbiased critiques even with these advanced tools. Good to see this is being discussed across the region.

    Ricardo Ocampo (@ricardo_o)
    18 May 2024

    This is fascinating, I'm just getting around to this! It's interesting how AI is being used in peer reviews for AI research itself. Proper brilliant, but what about the nuances? Will these systems truly understand the cultural context, for instance, in AGI conceptualisation from different Asian nations? Just something to ponder.

    Meera Reddy (@meera_r_ai)
    11 May 2024

    Ah, this article takes me back a bit. I recall a discussion at a research colloquium in Bengaluru, perhaps a year or two ago, where some senior scholars were already voicing concerns about AI's role in peer review. The promise of speed and impartiality was exciting, but the worry about *diverse* perspectives narrowing down was quite palpable. It's a tricky wicket, isn’t it?

    Quentin Seah (@qseah_tech)
    11 May 2024

    Interesting read. The transparency aspect of AI-assisted reviews is something I’m definitely still wrestling with. Are we truly getting clearer insights, or just another layer of black-box complexity? Just came across this topic, so plenty to chew on.

    Amit Chandra (@amit_c_tech)
    30 March 2024

    This article truly resonates, especially here in India where our tech scene is buzzing. We've been having similar discussions in academic circles about AI tools impacting peer review. While the efficiency is tempting, ensuring fair, quality feedback – not just quantity – remains a top concern for many researchers. It's a delicate balance, isn't it?
