
The Future of AI: OpenAI's Deliberate Approach to Detecting ChatGPT-Generated Text

OpenAI's deliberate approach to AI text detection highlights the complexities and ethical considerations in AI development.

Intelligence Desk · 4 min read

- OpenAI is developing a text watermarking tool to detect ChatGPT-generated content.
- The tool is highly accurate but has risks, including susceptibility to circumvention and potential impact on non-English speakers.
- OpenAI is taking a cautious approach due to the complexities involved and the broader ecosystem impact.

Artificial Intelligence (AI) is rapidly transforming the world, and Asia is at the forefront of this revolution. One of the most talked-about AI tools is ChatGPT, developed by OpenAI. Recently, OpenAI has been working on a new tool to detect text generated by ChatGPT. This tool, based on text watermarking, could have significant implications for education, content creation, and more. Let's dive into what this means and why OpenAI is taking a deliberate approach.

The Need for AI Text Detection

AI-generated text has become increasingly sophisticated, making it difficult to distinguish from human-written content. This poses challenges in various fields, particularly in education, where students might use AI to cheat on assignments. Previous efforts to detect AI-generated text have been largely ineffective. Even OpenAI shut down its previous AI text detector due to its low accuracy.

What is Text Watermarking?

Text watermarking is a method that involves making small changes to how ChatGPT selects words. This creates an invisible watermark in the writing that can later be detected by a separate tool. Unlike previous methods, text watermarking focuses solely on detecting writing from ChatGPT, not from other companies’ models.
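OpenAI has not published the details of its scheme, but academic work on language-model watermarking (notably the "green list" approach of Kirchenbauer et al., 2023) gives a sense of how small changes to word selection can be made detectable. The sketch below is a toy illustration under that assumption, not OpenAI's actual method: the stand-in vocabulary, token names, and 50% green fraction are all invented for the example.

```python
import hashlib
import random

# Toy "green list" watermark: at each step, the previous token seeds a
# pseudo-random split of the vocabulary, and the generator prefers tokens
# from the "green" half. A detector that knows the seeding rule can count
# green tokens and flag text whose green rate is statistically too high.

VOCAB = [f"tok{i}" for i in range(1000)]  # stand-in vocabulary
GREEN_FRACTION = 0.5

def green_list(prev_token: str) -> set:
    """Partition the vocabulary pseudo-randomly, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(VOCAB) * GREEN_FRACTION)])

def generate_watermarked(length: int, seed: int = 0) -> list:
    """Generate token sequences that always prefer green tokens (a strong watermark)."""
    rng = random.Random(seed)
    tokens = ["tok0"]
    for _ in range(length):
        greens = sorted(green_list(tokens[-1]))
        tokens.append(rng.choice(greens))
    return tokens

def z_score(tokens: list) -> float:
    """Detector: a high z-score means far more green tokens than chance allows."""
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:])
               if cur in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / variance ** 0.5

watermarked = generate_watermarked(200)
rng = random.Random(1)
unmarked = ["tok0"] + [rng.choice(VOCAB) for _ in range(200)]
print(z_score(watermarked), z_score(unmarked))
```

Note that the detector works only on the raw token statistics: translating or heavily rewording the output replaces the tokens and destroys the green-token signal, which is exactly the "globalized tampering" weakness discussed below.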

The Promise and Risks of Text Watermarking

OpenAI’s research has shown that text watermarking is highly accurate and even effective against localized tampering, such as paraphrasing. However, it is less robust against globalized tampering, like using translation systems or rewording with another generative model. This makes it trivial for bad actors to circumvent the detection.

Moreover, the method could disproportionately impact groups like non-English speakers. OpenAI acknowledges that text watermarking could stigmatize the use of AI as a useful writing tool for non-native English speakers.

OpenAI’s Deliberate Approach

Given these complexities, OpenAI is taking a deliberate approach to releasing the text watermarking tool. The company is weighing the risks and researching alternatives. In a statement to TechCrunch, an OpenAI spokesperson said:

“The text watermarking method we’re developing is technically promising, but has important risks we’re weighing while we research alternatives, including susceptibility to circumvention by bad actors and the potential to disproportionately impact groups like non-English speakers.”


The Broader Ecosystem Impact

The decision to release or not release the text watermarking tool will have a significant impact on the broader AI ecosystem. OpenAI is considering how this tool could affect not just education but also content creation, journalism, and other fields where AI-generated text is becoming more prevalent. For a deeper dive into the ethical considerations of AI, read our article on Why ProSocial AI Is The New ESG.

Detecting AI-Generated Text

Detecting AI-generated text matters because tools like ChatGPT can produce highly convincing prose that can be misused in various ways, from plagiarism to spreading misinformation. Imagine you are a teacher who suspects a student has used ChatGPT to write an essay. How would you use the text watermarking tool to detect AI-generated content? What steps would you take to ensure fairness and accuracy in your assessment? You might also be interested in how AI Browsers Under Threat as Researchers Expose Deep Flaws.

The Future of AI in Asia

Asia is a hotbed for AI innovation, with countries like China, Japan, and South Korea leading the way. The development of tools like text watermarking highlights the need for ethical considerations in AI. As AI becomes more integrated into our daily lives, it's crucial to ensure that it is used responsibly and ethically. Our piece on Taiwan’s AI Law Is Quietly Redefining What “Responsible Innovation” Means provides further context on regional approaches to AI governance.

For more about whether OpenAI will release ChatGPT spotting software, tap here.

Comment and Share

What do you think about OpenAI’s approach to detecting AI-generated text? How do you see AI impacting your field in the future? Share your thoughts and experiences in the comments below. Don’t forget to subscribe to our newsletter for updates on AI and AGI developments.


Latest Comments (4)

Lakshmi Reddy
Lakshmi Reddy@lakshmi.r
AI
28 October 2024

The point about localized versus globalized tampering is key here. If using translation systems can bypass the watermark, what does this mean for multilingual models, especially those being developed for Indic languages? The "small changes to how ChatGPT selects words" could have compounding effects when translated or rephrased across different linguistic structures. I'm curious if OpenAI is considering how this impacts NLP research focused on cross-lingual transfer or even just diverse language datasets.

Charlotte Davies
Charlotte Davies@charlotted
AI
30 September 2024

The point about watermarking being less robust against globalized tampering is key. This highlights the ongoing challenge for regulatory bodies. We've been discussing similar issues at the UK AI Safety Institute, and it underscores the need for a multi-faceted approach to verification, not just relying on single technical solutions.

Crystal
Crystal@crystalwrites
AI
23 September 2024

It's promising that OpenAI is trying watermarking, especially since their last detector wasn't great. but if it's so easy for people to get around it with rewording or translation, that's a bit worrying for folks in Asia where using translation tools is super common for content creation! it kinda defeats the purpose if it's not robust enough to handle that.

Krit Tantipong
Krit Tantipong@krit_99
AI
26 August 2024

I remember when this watermarking idea first came out. We were looking at something similar for tracking documentation in our logistics supply chain, knowing if certain reports were model-generated or human. The "localized tampering" versus "globalized tampering" point is key for us in Thailand, especially with translations. It makes it tricky to trust fully.
