News
Meta’s Llama 3 AI Model: A Giant Leap in Multilingual and Mathematical Capabilities
Meta’s Llama 3 AI Model, with its multilingual and mathematical capabilities, is challenging the status quo in the AI landscape.
Published
10 months ago
By
AIinAsia
TL;DR:
- Meta’s new Llama 3 AI model boasts 405 billion parameters, enhancing multilingual skills and mathematical prowess.
- Llama 3 outperforms previous versions and rivals in coding, maths, and multilingual conversation.
- Zuckerberg expects future Llama models to surpass proprietary competitors by next year.
The Arrival of Llama 3: Meta’s Largest AI Model Yet
On Tuesday, Meta Platforms unveiled its largest Llama 3 artificial intelligence model to date, showcasing impressive multilingual abilities and overall performance that challenges paid models from competitors such as OpenAI. According to Meta’s blog posts and research paper, the new Llama 3 model can converse in eight languages, write higher-quality computer code, and solve more complex maths problems than its predecessors.
A Giant Among AI Models
With 405 billion parameters, Llama 3 dwarfs the predecessor Meta released last year. Although it is still smaller than the leading models offered by competitors, Meta’s CEO, Mark Zuckerberg, is confident that future Llama models will surpass proprietary rivals by next year. The Meta AI chatbot powered by these models already has hundreds of millions of users, and Meta projects it will become the most popular AI assistant by the end of 2024.
The Race for AI Supremacy
As tech companies compete to demonstrate the capabilities of their large language models, Meta’s Llama 3 aims to deliver significant gains in areas like advanced reasoning. Despite concerns about the limits of such models, Meta continues to innovate and invest in AI technology.
Multilingual and Multitalented
In addition to the flagship 405 billion parameter model, Meta is also releasing updated versions of its lighter-weight 8 billion and 70 billion parameter Llama 3 models. All three models are multilingual and can handle larger user requests via an expanded “context window.” This improvement, according to Meta’s head of generative AI, Ahmad Al-Dahle, will enhance the experience of generating computer code.
AI-Generated Data for Improved Performance
Al-Dahle also revealed that his team improved the Llama 3 model’s performance on tasks such as solving maths problems by using AI to generate some of the data on which they were trained. This innovative approach could pave the way for future advancements in AI technology.
Meta’s Strategic Move
Meta releases its Llama models largely free of charge for use by developers. This strategy, Zuckerberg believes, will lead to innovative products, less dependence on competitors, and increased engagement on Meta’s core social networks. Despite some investors’ concerns about the costs, Meta’s commitment to AI development remains steadfast.
Llama 3 vs. The Competition
Although measuring progress in AI development is challenging, test results provided by Meta suggest that its largest Llama 3 model is nearly matching and, in some cases, outperforming Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4o. This competitive edge could make Meta’s free models more appealing to developers.
The Future of Llama 3: Multimodal Capabilities
In their paper, Meta researchers hinted at upcoming “multimodal” versions of the models due later this year. These versions will incorporate image, video, and speech capabilities, potentially rivalling other multimodal models such as Google’s Gemini 1.5 and Anthropic’s Claude 3.5 Sonnet.
To learn more about Meta’s Llama 3, tap here.
Comment and Share
What do you think about Meta’s Llama 3 AI model and its potential impact on the AI landscape? Share your thoughts in the comments below and don’t forget to subscribe for updates on AI and AGI developments.
You may also like:
- Meta Expands AI Chatbot to India and Africa
- Gemini Rising: Google’s Advance AI Game Changer
Business
Apple’s China AI pivot puts Washington on edge
Apple’s partnership with Alibaba to deliver AI services in China has sparked concern among U.S. lawmakers and security experts, highlighting growing tensions in global technology markets.
Published
8 hours ago on May 21, 2025
By
AIinAsia
As Apple courts Alibaba for its iPhone AI partnership in China, U.S. lawmakers see more than just a tech deal taking shape.
TL;DR — What You Need To Know
- Apple has reportedly selected Alibaba’s Qwen AI model to power its iPhone features in China
- U.S. lawmakers and security officials are alarmed over data access and strategic implications
- The deal has not been officially confirmed by Apple, but Alibaba’s chairman has acknowledged it
- China remains a critical market for Apple amid declining iPhone sales
- The partnership highlights the growing difficulty of operating across rival tech spheres
Apple Intelligence meets the Great Firewall
Apple’s strategic pivot to partner with Chinese tech giant Alibaba for delivering AI services in China has triggered intense scrutiny in Washington. The collaboration, necessitated by China’s blocking of OpenAI services, raises profound questions about data security, technological sovereignty, and the intensifying tech rivalry between the United States and China. As Apple navigates declining iPhone sales in the crucial Chinese market, this partnership underscores the increasing difficulty for multinational tech companies to operate seamlessly across divergent technological and regulatory environments.
Apple Intelligence Meets Chinese Regulations
When Apple unveiled its ambitious “Apple Intelligence” system in June 2024, it marked the company’s most significant push into AI-enhanced services. For Western markets, Apple seamlessly integrated OpenAI’s ChatGPT as a cornerstone partner for English-language capabilities. However, this implementation strategy hit an immediate roadblock in China, where OpenAI’s services remain effectively banned under the country’s stringent digital regulations.
Faced with this market-specific challenge, Apple initiated discussions with several Chinese AI leaders to identify a compliant local partner capable of delivering comparable functionality to Chinese consumers. The shortlist reportedly included major players in China’s burgeoning AI sector:
- Baidu, known for its Ernie Bot AI system
- DeepSeek, an emerging player in foundation models
- Tencent, the social media and gaming powerhouse
- Alibaba, whose open-source Qwen model has gained significant attention
While Apple has maintained its characteristic silence regarding partnership details, recent developments strongly suggest that Alibaba’s Qwen model has emerged as the chosen solution. The arrangement was seemingly confirmed when Alibaba’s chairman made an unplanned reference to the collaboration during a public appearance.
“Apple’s decision to implement a separate AI system for the Chinese market reflects the growing reality of technological bifurcation between East and West. What we’re witnessing is the practical manifestation of competing digital sovereignty models.”
Washington’s Mounting Concerns
The revelation of Apple’s China-specific AI strategy has elicited swift and pronounced reactions from U.S. policymakers. Members of the House Select Committee on China have raised alarms about the potential implications, with some reports indicating that White House officials have directly engaged with Apple executives on the matter.
Representative Raja Krishnamoorthi of the House Intelligence Committee didn’t mince words, describing the development as “extremely disturbing.” His reaction encapsulates broader concerns about American technological advantages potentially benefiting Chinese competitors through such partnerships.
Greg Allen, Director of the Wadhwani A.I. Centre at CSIS, framed the situation in competitive terms:
“The United States is in an AI race with China, and we just don’t want American companies helping Chinese companies run faster.”
The concerns expressed by Washington officials and security experts include:
- Data Sovereignty Issues: Questions about where and how user data from AI interactions would be stored, processed, and potentially accessed
- Model Training Advantages: Concerns that the vast user interactions from Apple devices could help improve Alibaba’s foundational AI models
- National Security Implications: Worries about whether sensitive information could inadvertently flow through Chinese servers
- Regulatory Compliance: Questions about how Apple will navigate China’s content restrictions and censorship requirements
In response to these growing concerns, U.S. agencies are reportedly discussing whether to place Alibaba and other Chinese AI companies on a restricted entity list. Such a designation would formally limit collaboration between American and Chinese AI firms, potentially derailing arrangements like Apple’s reported partnership.
Commercial Necessities vs. Strategic Considerations
Apple’s motivation for pursuing a China-specific AI solution is straightforward from a business perspective. China remains one of the company’s largest and most important markets, despite recent challenges. Earlier this spring, iPhone sales in China declined by 24% year over year, highlighting the company’s vulnerability in this critical market.
Without a viable AI strategy for Chinese users, Apple risks further erosion of its market position at precisely the moment when AI features are becoming central to consumer technology choices. Chinese competitors like Huawei have already launched their own AI-enhanced smartphones, increasing pressure on Apple to respond.
“Apple faces an almost impossible balancing act. They can’t afford to offer Chinese consumers a second-class experience by omitting AI features, but implementing them through a Chinese partner creates significant political exposure in the U.S.”
The situation is further complicated by China’s own regulatory environment, which requires foreign technology companies to comply with data localisation rules and content restrictions. These requirements effectively necessitate some form of local partnership for AI services.
A Blueprint for the Decoupled Future?
Whether Apple’s partnership with Alibaba proceeds as reported or undergoes modifications in response to political pressure, the episode provides a revealing glimpse into the fragmenting global technology landscape.
As digital ecosystems increasingly align with geopolitical boundaries, multinational technology firms face increasingly complex strategic decisions:
- Regionalised Technology Stacks: Companies may need to develop and maintain separate technological implementations for different markets
- Partnership Dilemmas: Collaborations beneficial in one market may create political liabilities in others
- Regulatory Navigation: Operating across divergent regulatory environments requires sophisticated compliance strategies
- Resource Allocation: Developing market-specific solutions increases costs and complexity
What we’re seeing with Apple and Alibaba may become the norm rather than the exception. The era of frictionless global technology markets is giving way to one where regional boundaries increasingly define technological ecosystems.
Looking Forward
For now, Apple Intelligence has no confirmed launch date for the Chinese market. However, with new iPhone models traditionally released in autumn, Apple faces mounting time pressure to finalise its AI strategy.
The company’s eventual approach could signal broader trends in how global technology firms navigate an increasingly bifurcated digital landscape. Will companies maintain unified global platforms with minimal adaptations, or will we see the emergence of fundamentally different technological experiences across major markets?
As this situation evolves, it highlights a critical reality for the technology sector: in an era of intensifying great power competition, even seemingly routine business decisions can quickly acquire strategic significance.
You May Also Like:
- Alibaba’s AI Ambitions: Fueling Cloud Growth and Expanding in Asia
- Apple Unleashes AI Revolution with Apple Intelligence: A Game Changer in Asia’s Tech Landscape
- Apple and Meta Explore AI Partnership
News
AI still can’t tell the time, and it’s a bigger problem than it sounds
This article explores new findings from ICLR 2025 revealing the limitations of leading AI models in basic timekeeping tasks. Despite excelling at language and pattern recognition, AIs falter when asked to interpret analogue clocks or calendar dates, raising crucial questions for real-world deployment in Asia.
Published
2 days ago on May 19, 2025
By
AIinAsia
Despite its growing prowess in language, images and coding, AI is still stumped by the humble clock and calendar.
TL;DR — What You Need To Know
- AI models struggle with tasks most humans master as children: reading analogue clocks and determining calendar dates.
- New research tested leading AI models and found they failed over 60% of the time.
- The findings raise questions about AI’s readiness for real-world, time-sensitive applications.
AI can pass law exams but flubs a clock face
It’s hard not to marvel at the sophistication of large language models. They write passable essays, chat fluently in multiple languages, and generate everything from legal advice to song lyrics. But put one in front of a basic clock or ask it what day a date falls on, and it might as well be guessing.
At the recent International Conference on Learning Representations (ICLR), researchers unveiled a startling finding: even top-tier AI models such as GPT-4o, Claude-3.5 Sonnet, Gemini 2.0 and LLaMA 3.2 Vision struggle mightily with time-related tasks. In a study led by Rohit Saxena from the University of Edinburgh, these systems were tested on their ability to interpret images of clocks and respond to calendar queries. They failed more than half the time.
“Most people can tell the time and use calendars from an early age,” Saxena explained. “Our findings highlight a significant gap in the ability of AI to carry out what are quite basic skills for people.”
Reading the time: a surprisingly complex puzzle
To a human, clock reading feels instinctive. To a machine, it’s a visual nightmare. Consider the elements involved:
- Overlapping hands that require angle estimation
- Diverse designs using Roman numerals or decorative dials
- Variability in colour, style, and size
While older AI systems relied on labelled datasets, clock reading demands spatial reasoning. As Saxena noted:
“AI recognising that ‘this is a clock’ is easier than actually reading it.”
In testing, even the most advanced models correctly read the time from a clock image just 38.7% of the time, a failure rate of more than 60%.
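The final step of the task is simple arithmetic: once the angles of the two hands are known, converting them into a time takes a couple of lines of code. As a rough illustration (our own sketch, not code from the study), with angles measured clockwise from the 12:

```python
def time_from_angles(hour_angle: float, minute_angle: float) -> str:
    """Convert clock-hand angles (degrees, clockwise from 12) into a time."""
    minutes = round(minute_angle / 6) % 60    # the minute hand moves 6 degrees per minute
    hour = int(hour_angle // 30) % 12 or 12   # the hour hand moves 30 degrees per hour
    return f"{hour}:{minutes:02d}"

print(time_from_angles(95.0, 60.0))  # "3:10" -- at 3:10 the hour hand sits at 95 degrees
```

Everything before that step (finding the dial, separating overlapping hands, estimating each angle from pixels) is the visual work the models get wrong.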
Calendar chaos: dates and days don’t add up
When asked, “What day is the 153rd day of the year?”, humans reach for logic or a calendar. AI, by contrast, attempts to spot a pattern. This doesn’t always go well.
The study showed that calendar queries stumped the models even more than clocks, with just 26.3% accuracy. And it’s not just a lack of memory — it’s a fundamentally different approach. LLMs don’t execute algorithms like traditional computers; they predict outputs based on training patterns.
So while an AI might ace the question “Is 2028 a leap year?”, it could completely fail at mapping that fact onto a real-world date. Training data often omits edge cases like leap years or obscure date calculations.
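For comparison, the calculation itself is deterministic and tiny. A minimal Python sketch (ours, not the study’s test harness) shows how a conventional program answers the 153rd-day question, leap years included:

```python
from datetime import date, timedelta

def day_of_year_to_date(year: int, n: int) -> date:
    """Return the calendar date of the n-th day of the given year."""
    return date(year, 1, 1) + timedelta(days=n - 1)

print(day_of_year_to_date(2025, 153))  # 2025-06-02 (a common year)
print(day_of_year_to_date(2028, 153))  # 2028-06-01 (leap year, so the date shifts by a day)
```

An LLM, by contrast, has no such procedure to call; it can only predict what the answer to a similar-looking question tended to be in its training data.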
What it means for Asia’s AI future
From India’s booming tech sector to Japan’s robotics leaders, AI applications are proliferating across Asia. Scheduling tools, autonomous systems, and assistive tech rely on accurate timekeeping — a weakness this research throws into sharp relief.
For companies deploying AI into customer service, logistics, or smart city infrastructure, such flaws aren’t trivial. If an AI can’t reliably say what time it is, it’s hardly ready to manage hospital shift schedules or transport timetables.
These findings argue for hybrid models and tighter oversight. AI isn’t useless here — but it may need more handholding than previously thought.
When logic and vision collide
This study underscores a deeper truth: AI isn’t just a faster brain. It’s something else entirely. What humans do intuitively often mixes perception with logic. AI, however, processes one layer at a time.
Tasks like reading clocks or calculating dates demand a blend of visual interpretation, spatial understanding, and logical sequence — all areas where LLMs still struggle when combined.
“AI is powerful, but when tasks mix perception with precise reasoning, we still need rigorous testing, fallback logic, and in many cases, a human in the loop,” Saxena concluded.
Business
Anthropic’s CEO Just Said the Quiet Part Out Loud — We Don’t Understand How AI Works
Anthropic’s CEO admits we don’t fully understand how AI works — and he wants to build an “MRI for AI” to change that. Here’s what it means for the future of artificial intelligence.
Published
2 weeks ago on May 7, 2025
By
AIinAsia
TL;DR — What You Need to Know
- Anthropic CEO Dario Amodei says AI’s decision-making is still largely a mystery — even to the people building it.
- His new goal? Create an “MRI for AI” to decode what’s going on inside these models.
- The admission marks a rare moment of transparency from a major AI lab about the risks of unchecked progress.
Does Anyone Really Know How AI Works?
It’s not often that the head of one of the most important AI companies on the planet openly admits… they don’t know how their technology works. But that’s exactly what Dario Amodei — CEO of Anthropic and former VP of research at OpenAI — just did in a candid and quietly explosive essay.
In it, Amodei lays out the truth: when an AI model makes decisions — say, summarising a financial report or answering a question — we genuinely don’t know why it picks one word over another, or how it decides which facts to include. It’s not that no one’s asking. It’s that no one has cracked it yet.
“This lack of understanding”, he writes, “is essentially unprecedented in the history of technology.”
Unprecedented and kind of terrifying.
To address it, Amodei has a plan: build a metaphorical “MRI machine” for AI — a way to see what’s happening inside the model as it makes decisions and, ideally, stop anything dangerous before it spirals out of control. Think of it as an AI brain scanner, minus the wires and with a lot more maths.
Anthropic’s interest in this isn’t new. The company was born in rebellion — founded in 2021 after Amodei and his sister Daniela left OpenAI over concerns that safety was taking a backseat to profit. Since then, they’ve been championing a more responsible path forward, one that includes not just steering the development of AI but decoding its mysterious inner workings.
In fact, Anthropic recently ran an internal “red team” challenge — planting a fault in a model and asking others to uncover it. Some teams succeeded, and crucially, some did so using early interpretability tools. That might sound dry, but it’s the AI equivalent of a spy thriller: sabotage, detection, and decoding a black box.
Amodei is clearly betting that the race to smarter AI needs to be matched with a race to understand it — before it gets too far ahead of us. And with artificial general intelligence (AGI) looming on the horizon, this isn’t just a research challenge. It’s a moral one.
Because if powerful AI is going to help shape society, steer economies, and redefine the workplace, shouldn’t we at least understand the thing before we let it drive?
What happens when we unleash tools we barely understand into a world that’s not ready for them?
You may also like:
- Anthropic Unveils Claude 3.5 Sonnet
- Unveiling the Secret Behind Claude 3’s Human-Like Personality: A New Era of AI Chatbots in Asia
- Shadow AI at Work: A Wake-Up Call for Business Leaders
- Or try the free version of Anthropic’s Claude by tapping here.