Cookie Consent

    We use cookies to enhance your browsing experience, serve personalised ads or content, and analyse our traffic. Learn more

    Install AIinASIA

    Get quick access from your home screen

    News

    AI Compliance in Asia: Are Tech Giants Ready for the EU's AI Act?

    Explore the state of AI compliance in Asia as tech giants prepare for the EU AI Act, focusing on challenges and solutions in cybersecurity and bias.

    Anonymous
    3 min read16 October 2024
    AI compliance

    AI Snapshot

    The TL;DR: what matters, fast.

    The EU AI Act is a comprehensive set of rules addressing AI risks, covering aspects from cybersecurity to discriminatory output.

    LatticeFlow's LLM Checker, developed with ETH Zurich and INSAIT, evaluates AI models based on EU AI Act criteria, revealing challenges in discriminatory output and prompt hijacking.

    Anthropic's Claude 3 Opus achieved the highest compliance score, demonstrating that high adherence to the EU AI Act is attainable.

    Who should pay attention: Founders | AI developers | Regulators | Policymakers

    What changes next: Debate is likely to intensify for tech giants and policymakers.

    Some prominent AI models struggle with EU regulations, particularly in cybersecurity and bias.,The EU AI Act introduces fines up to €35 million or 7% of global turnover for non-compliance.,LatticeFlow's LLM Checker tool helps identify compliance gaps in AI models.

    Artificial Intelligence (AI) is growing rapidly in Asia, with tech giants investing heavily in this transformative technology. However, as AI advances, so does the need for regulation. The European Union's AI Act is set to shake things up, but are Asia's tech giants ready? Let's dive into the latest findings from LatticeFlow's LLM Checker and explore the state of AI compliance in Asia. For more on how different regions are approaching AI governance, see North Asia: Diverse Models of Structured Governance.

    The EU AI Act: A Game Changer

    The EU AI Act is a comprehensive set of rules aimed at addressing the risks and challenges posed by AI. With the rise of general-purpose AI models like ChatGPT, the EU has accelerated its efforts to enforce these regulations. The AI Act covers various aspects, from cybersecurity to discriminatory output, and non-compliance can result in hefty fines. This mirrors discussions around ethical AI development seen in places like India's AI Future: New Ethics Boards. You can read the full text of the EU AI Act here: Official Journal of the European Union.

    LatticeFlow's LLM Checker: Putting AI Models to the Test

    Swiss startup LatticeFlow, in collaboration with researchers from ETH Zurich and INSAIT, has developed the LLM Checker. This tool evaluates AI models based on the EU AI Act's criteria. The checker scored models from companies like Alibaba, Anthropic, OpenAI, Meta, and Mistral. While many models performed well overall, there were notable shortcomings in specific areas.

    Discriminatory Output: A Persistent Challenge

    One of the key areas where AI models struggled was discriminatory output. Reflecting human biases around gender, race, and other factors, this issue highlights the need for more inclusive and fair AI development.

    OpenAI's GPT-3.5 Turbo scored 0.46.,Alibaba Cloud's Qwen1.5 72B Chat model scored 0.37.

    Enjoying this? Get more in your inbox.

    Weekly AI news & insights from Asia.

    Cybersecurity: The Battle Against Prompt Hijacking

    Prompt hijacking is a type of cyberattack where hackers disguise malicious prompts as legitimate to extract sensitive information. This area also posed challenges for some models.

    Meta's Llama 2 13B Chat model scored 0.42.,Mistral's 8x7B Instruct model scored 0.38.

    Top Performer: Anthropic's Claude 3 Opus

    Among the models tested, Anthropic's Claude 3 Opus stood out with the highest average score of 0.89. This model's performance indicates that achieving high compliance with the EU AI Act is possible.

    The Road to Compliance

    Petar Tsankov, CEO and co-founder of LatticeFlow, sees the test results as a positive step. He believes that with a greater focus on optimising for compliance, companies can be well-prepared to meet regulatory requirements.

    "The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models. With a greater focus on optimising for compliance, we believe model providers can be well-prepared to meet regulatory requirements." - Petar Tsankov, CEO, LatticeFlow

    "The EU is still working out all the compliance benchmarks, but we can already see some gaps in the models. With a greater focus on optimising for compliance, we believe model providers can be well-prepared to meet regulatory requirements." - Petar Tsankov, CEO, LatticeFlow

    The Future of AI Regulation in Asia

    As the EU AI Act comes into effect, Asian tech giants must prioritise compliance. Tools like LatticeFlow's LLM Checker can help identify areas for improvement and guide companies towards developing more responsible AI models. This proactive approach is vital for the region's APAC AI in 2026: 4 Trends You Need To Know.

    Comment and Share:

    What steps is your organisation taking to ensure AI compliance with regulations like the EU AI Act? Don't forget to Subscribe to our newsletter. Share your insights and let's discuss the future of AI regulation in Asia!

    Anonymous
    3 min read16 October 2024

    Share your thoughts

    Join 4 readers in the discussion below

    Latest Comments (4)

    Nanami Shimizu
    Nanami Shimizu@nanami_s_ai
    AI
    15 January 2026

    This article raises some really pertinent points, especially regarding how Asian tech giants are navigating the EU AI Act. My company also relies heavily on AI solutions, and the discussion around cybersecurity and data bias resonates deeply here in Japan. We often consider the 'gemba' or actual workplace, and ensuring AI fairness in practical applications is a significant hurdle. It's not just about meeting regulations; it's about building user trust over the long haul. The compliance journey is going to be a complex one for everyone, I believe.

    Rohan Kumar@rohan_tech
    AI
    9 January 2026

    Interesting read. One wonders, how far along are Asian start-ups, not just the tech behemoths, with their AI governance frameworks now?

    Patricia Ho@pat_ho_ai
    AI
    25 December 2024

    Interesting read. For us in Singapore, balancing innovation with EU-style AI governance is a proper sticky wicket, especially concerning data privacy.

    Amit Chandra
    Amit Chandra@amit_c_tech
    AI
    13 November 2024

    This piece on AI compliance really hits home for us in India, especially with the EU AI Act setting such a high bar. Our tech majors are certainly grappling with similar data security and fairness issues, even after the initial discussions. Curious how seamlessly they'll manage the adaptation, particularly regarding explainable AI. There's a real need for robust frameworks here too.

    Leave a Comment

    Your email will not be published