AI in ASIA

AI Outsmarts Students: How Universities are Adapting

ChatGPT achieves higher grades than human students in university exams, with 94% of AI submissions going completely undetected by academic markers.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

ChatGPT achieved higher grades than human students across multiple psychology modules at the University of Reading

94% of AI-generated exam submissions went completely undetected by academic markers

92% of university students now use AI tools for studies, with 88% using them during assessments

Universities Race to Keep Pace as AI Students Score Higher Than Humans

A groundbreaking study from the University of Reading has revealed that ChatGPT can not only pass university exams but consistently outperform human students. The AI language model achieved higher grades across multiple psychology modules, with 94% of its submissions going completely undetected by academic markers.

This dramatic shift has sparked urgent conversations across campuses worldwide. With 92% of university students now using AI tools for their studies and 88% deploying them during actual assessments, the traditional boundaries between human and artificial intelligence in education are dissolving rapidly.

The Numbers Paint a Startling Picture

The University of Reading study created 33 fake student profiles and submitted ChatGPT-generated answers to psychology exam questions. The results were striking: AI submissions consistently earned higher marks than human work, with human students holding the advantage only in third-year modules that required abstract reasoning.


This performance gap reflects broader trends emerging across higher education. Universities that once viewed AI as a distant concern now find themselves scrambling to adapt assessment methods and maintain academic integrity standards.

By The Numbers

  • 94% of AI-generated submissions went undetected by university markers
  • 92% of university students now use AI tools for studies, up from 66% in 2024
  • 88% of students use AI during assessments, representing a 66% increase from 2024
  • 90% of higher education professionals personally use AI tools
  • 66% of institutions have formally adopted AI, up from 49% the previous year

Asian Universities Chart New Assessment Territory

Universities across Asia are responding with varied approaches. Some institutions, following the University of Glasgow's lead, have reverted to traditional in-person, invigilated examinations for senior students. However, many educators view this moment as an opportunity for fundamental change rather than retreat.

"Assessment needs to be reframed as collecting evidence of student learning, a continuous process measured against qualitative criteria," says Dr Jennifer Chang Wathall, part-time instructor at the University of Hong Kong.

This perspective aligns with broader conversations about how AI is transforming classroom experiences across Asia. Rather than viewing AI as a threat, progressive institutions are reimagining what authentic learning looks like in an AI-augmented world.

"Authentic assessment should prioritise practicing relevant skills, creativity, and critical thinking," emphasises Jason Gulya, professor at Berkeley College in New York.

New Assessment Models Emerge from Necessity

Forward-thinking educators are developing assessment methods that are less susceptible to AI manipulation whilst better reflecting real-world skills. These innovations include:

  • Project-based assessments requiring iterative development and documentation
  • Collaborative group projects emphasising interpersonal skills
  • Oral presentations and viva examinations
  • Problem-solving scenarios requiring real-time adaptation
  • Portfolio-based evaluation tracking learning progression
  • Peer review and reflection components

The shift towards practical skills mirrors trends across industries, where AI is redefining the meaning of work itself. Universities recognising this connection are positioning their graduates for success in an AI-integrated economy.

Assessment Type         AI Susceptibility   Skills Measured                     Implementation Complexity
Traditional Essays      High                Research, Writing                   Low
Oral Presentations      Low                 Communication, Critical Thinking    Medium
Group Projects          Medium              Collaboration, Problem-Solving      High
Portfolio Development   Low                 Reflection, Growth Documentation    High

The Skills Gap Widens Between Adoption and Preparation

Despite widespread AI adoption, preparation lags significantly behind usage. Whilst 83% of teachers actively use generative AI tools, 71% lack formal training. This gap creates challenges for both educators and students navigating AI integration responsibly.

The disparity is particularly pronounced in Asia, where major initiatives like OpenAI's partnership with Indian universities are attempting to bridge the skills gap through structured programmes. However, the pace of technological advancement continues to outstrip institutional adaptation.

Some educators worry about students becoming overly dependent on AI assistance. Research suggests that maintaining cognitive independence remains crucial even as AI tools become more sophisticated and accessible.

How are universities detecting AI-generated work?

Most institutions rely on detection software and manual review, though 94% of AI submissions in recent studies went undetected. Universities are shifting focus from detection to prevention through assessment redesign.

What subjects are most vulnerable to AI assistance?

Text-heavy subjects like literature, history, and social sciences show highest vulnerability. STEM fields requiring calculations and technical problem-solving offer more resistance to AI shortcuts.

Are students cheating when they use AI tools?

This depends entirely on institutional policies and assignment guidelines. Many universities are developing clear frameworks distinguishing between permitted AI assistance and academic misconduct.

How can educators adapt their teaching methods?

Successful adaptations include emphasising process over product, incorporating real-time problem-solving, focusing on metacognitive skills, and designing assessments requiring human creativity and critical thinking.

Will AI replace traditional university education?

Rather than replacement, AI is catalysing evolution towards more personalised, skills-focused education. Universities providing human mentorship, peer interaction, and credentialed learning remain highly relevant.

The AIinASIA View: This isn't a crisis of academic integrity but an opportunity for educational evolution. Universities clinging to outdated assessment methods will find themselves irrelevant as AI capabilities expand. The institutions thriving in 2026 and beyond will be those that embrace AI as a collaborative tool whilst doubling down on uniquely human skills: creativity, emotional intelligence, ethical reasoning, and complex problem-solving. We believe the future belongs to universities that teach students to work with AI, not against it, preparing graduates for a world where human-AI collaboration is the norm, not the exception.

The transformation of higher education through AI presents both challenges and unprecedented opportunities. Universities that embrace this change thoughtfully, focusing on human skills that complement rather than compete with AI, will shape the next generation of leaders and innovators.

As AI continues reshaping Asian universities, the question isn't whether institutions should adapt, but how quickly they can evolve their practices whilst maintaining educational quality. The students and educators navigating this transition today are pioneering the future of human learning.

How do you see AI changing your own educational or professional development? Are we witnessing the end of traditional assessment or its necessary evolution? Drop your take in the comments below.





