
AI in ASIA
News

The Future of AI: A Landmark Treaty Signed by US, Britain, and EU

World's first legally binding AI treaty signed by US, UK, and EU establishes seven core principles for protecting human rights and democracy.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

First legally binding international AI treaty signed by the US, UK, EU, and seven other parties on 5 September 2024

Framework establishes seven core principles for AI governance covering human rights and democratic processes

Critics raise concerns about enforceability due to broad language and national security exemptions

Historic AI Treaty Sets Global Precedent for Human Rights Protection

The world's first legally binding international AI treaty has officially entered the global stage. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, signed on 5 September 2024, represents a watershed moment in AI governance.

The United States, United Kingdom, European Union, and seven other nations have committed to this groundbreaking framework. Unlike voluntary guidelines or industry standards, this treaty carries legal weight and establishes enforceable obligations for signatory states.

Seven Pillars of AI Accountability

The AI Convention builds upon seven core principles that governments must integrate into their national AI policies. These principles span human dignity, transparency, accountability, equality, privacy protection, reliability, and safe innovation.


Countries retain flexibility in implementation, allowing them to craft domestic legislation that reflects the treaty's requirements whilst addressing local contexts. This approach recognises the diverse regulatory landscapes across different jurisdictions.

The framework specifically targets AI systems that could impact human rights, democratic processes, or rule of law. It covers both public sector deployments and private sector applications that fall within these critical areas.

By The Numbers

  • First binding global AI treaty signed by 10+ countries on 5 September 2024
  • 57 countries participated in the negotiation process led by Council of Europe
  • Seven core principles established for AI governance and human rights protection
  • Treaty applies to both public and private sector AI systems affecting human rights
  • Additional countries may join the framework after the initial signing, with no cap on membership

Implementation Challenges and Criticism

Legal experts have raised concerns about the treaty's practical enforceability. The broad language and built-in exemptions may limit its effectiveness in real-world scenarios.

"The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability," said Francesca Fanucci, Legal Expert, European Center for Not-for-Profit Law.

National security exemptions present a particular challenge. The treaty allows countries to exclude AI systems used for defence or security purposes, potentially creating significant loopholes.

Critics also point to disparities between oversight of public versus private sector AI applications. The framework places stronger scrutiny requirements on government use whilst providing more lenient treatment for commercial deployments.

Global Context and Regional Responses

The treaty emerges alongside diverse regulatory approaches worldwide. While Europe advances binding frameworks, Asia-Pacific nations are developing distinct governance models that blend innovation promotion with risk management.

"This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," said Shabana Mahmood, Justice Secretary, United Kingdom.

The timing coincides with rapid AI advancement across sectors. From autonomous vehicles to predictive healthcare, AI systems increasingly influence critical decisions affecting millions of people daily.

This regulatory momentum reflects growing awareness that AI governance requires structured approaches rather than purely market-driven development. Countries recognise the need for proactive frameworks before AI capabilities outpace oversight mechanisms.

Governance Approach          | Key Features                        | Timeline
Council of Europe Convention | Binding treaty, human rights focus  | Signed September 2024
EU AI Act                    | Risk-based regulation, market focus | Effective August 2024
US Executive Orders          | Federal agency coordination         | October 2023
UK AI Safety Summit          | International cooperation framework | November 2023

Industry Impact and Future Developments

Technology companies operating across multiple jurisdictions face increasingly complex compliance requirements. The treaty adds another layer to an already intricate regulatory landscape that includes the EU AI Act, national legislation, and sector-specific rules.

Organizations must now consider human rights implications alongside technical performance and commercial viability. This shift requires new assessment frameworks and governance structures within companies developing AI systems.

The treaty's success will largely depend on implementation consistency across signatory nations. Divergent interpretations could undermine its effectiveness and create regulatory arbitrage opportunities.

Key implementation areas include:

  • Risk assessment methodologies for AI systems affecting human rights
  • Transparency requirements for algorithmic decision-making processes
  • Appeals mechanisms for individuals affected by AI system decisions
  • Cross-border cooperation frameworks for investigation and enforcement
  • Regular review and updating processes to address technological evolution

Looking ahead, the treaty may influence how digital agents transform work environments and shape Asia's AI development trajectory. As AI capabilities expand, governance frameworks must evolve to address new challenges whilst preserving innovation incentives.

What makes this AI treaty different from existing regulations?

Unlike regional laws such as the EU AI Act, this treaty creates binding international obligations focused specifically on human rights protection. It establishes common principles whilst allowing national implementation flexibility, creating a global baseline for AI governance.

Which countries can join the AI Convention?

Any country can potentially join the Convention, not just Council of Europe members. The initial signatories include the US, UK, EU nations, and others, but the framework is designed to accommodate global participation and expansion.

How will the treaty be enforced across different countries?

Enforcement occurs through national legislation that incorporates the treaty's principles. Countries must establish domestic mechanisms for compliance monitoring, investigation, and remediation. International cooperation frameworks facilitate cross-border coordination and information sharing.

Does the treaty cover private companies or just government AI use?

The treaty applies to both public and private sector AI systems that could impact human rights, democracy, or rule of law. However, critics note that oversight requirements may be less stringent for private companies compared to government applications.

What happens to countries that don't comply with the treaty?

The treaty relies on diplomatic pressure, international cooperation mechanisms, and potential reputational costs rather than direct sanctions. Compliance monitoring occurs through regular reporting requirements and peer review processes among signatory nations.

The AIinASIA View: This treaty represents genuine progress in global AI governance, despite legitimate concerns about enforcement mechanisms. The framework establishes crucial precedents for human rights protection whilst maintaining innovation space. However, success depends entirely on implementation quality and consistency across diverse legal systems. We anticipate this will catalyse similar regional frameworks, particularly in Asia-Pacific where AI development requires balanced oversight approaches. The real test comes in translating principles into effective national legislation that protects citizens without stifling technological advancement.

The AI Convention marks a pivotal moment in technology governance, establishing the foundation for human rights protection in an AI-driven world. As countries begin implementation, the global community will learn whether international cooperation can effectively govern transformative technologies whilst preserving democratic values and individual freedoms.

What aspects of this historic AI treaty concern or encourage you most? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.



Latest Comments (3)

Soo-yeon Park (@sooyeon) · 22 November 2024

It's good to see this treaty happening, even if it's not perfect. For K-content AI localization, where we're training on huge datasets of Korean dramas and music, protecting human rights in the AI development process is key. We need clear guidelines so our AI models, which are learning cultural nuances, don't accidentally reproduce biases or infringe on creator rights, even if this treaty is still a bit vague on enforceability.

Tran Linh (@tranl) · 20 September 2024

this is exciting to see, especially how it talks about human rights with AI. but for us building AI in Vietnamese, an international treaty feels a bit distant when we're still figuring out basic datasets and things. will these conventions ever address the unique challenges of non-english NLP development?

Lisa Park (@lisapark) · 13 September 2024

hey, just catching up on this… I'm curious how much user research went into shaping this "AI Convention." It talks about protecting human rights, which is great, but then Francesca Fanucci points out how broad it is. From a UX perspective, broad usually means vague in application. How will they translate these principles into practical guidelines for developers and designers? Especially across so many different countries and cultures: what does "human rights" even mean in an AI context for someone in, say, Singapore versus Sweden? Wondering if there's a follow-up on how they plan to operationalize this for actual users.
