AI in ASIA

News

Gemini Screen Automation Arrives With Strict Usage Caps

Google's new Android AI takes actions inside apps for you. The catch? Free users get just 5 requests a day.

Intelligence Desk · 9 min read

Gemini screen automation lets Galaxy S26 users book rides and order food via voice commands.

AI Snapshot

The TL;DR: what matters, fast.

Free Gemini users get 5 screen automation requests/day; Ultra users at $249.99/month get 120

Feature launches on Samsung Galaxy S26 in US and South Korea; Pixel 10 support pending

App catalogue is US-only at launch, leaving most Asia-Pacific users with limited practical utility

Who should pay attention: Android power users and Galaxy S26 owners | Google Gemini subscribers evaluating tier value | Asia-Pacific app developers and platform operators

What changes next: As Google expands the supported app catalogue and Pixel 10 support goes live, the key test will be whether Asian super-apps like Grab and Gojek gain integration, which would determine whether screen automation has genuine mass-market relevance in Asia-Pacific.

Google brings AI-driven app control to Android, but usage caps tell the real story

Google is rolling out one of its most practically useful AI features to date: Gemini screen automation, which lets users control Android apps through natural language commands. The feature is currently live on the Samsung Galaxy S26 series in the United States and South Korea, with support for Pixel 10 devices coming soon. It marks a meaningful step forward in agentic AI on mobile, but the tiered usage limits reveal just how resource-intensive cloud-powered AI actions really are.

By The Numbers

  • 5 requests per day for free Gemini account holders using screen automation
  • 12 requests/day for Google AI Plus subscribers at $7.99/month
  • 20 requests/day for Google AI Pro subscribers at $19.99/month
  • 120 requests/day for Google AI Ultra subscribers at $249.99/month
  • 200 requests/day for the separate Gemini Agent capability, exclusive to AI Ultra

The feature works by running a supported app inside a virtual window on the device, while cloud infrastructure handles the intelligence layer, telling the phone precisely where to scroll, tap, and type. This architecture means real compute costs sit behind every single command, which explains why Google has introduced firm daily caps across all subscription tiers.
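As a rough illustration of the loop described above — purely an assumption, since Google has not published this interface — the architecture amounts to a device-side executor driven by a cloud-side planner. Every name here (CloudPlanner, Device, Action) is hypothetical:

```python
# Hypothetical sketch of the screen-automation loop: the device renders
# the app in a virtual window, the cloud layer decides each tap/scroll/
# type, and the device executes it. All names are illustrative
# assumptions, not Google's actual API.
from dataclasses import dataclass, field

@dataclass
class Action:
    kind: str                  # "tap", "scroll", "type", or "done"
    x: int = 0
    y: int = 0
    text: str = ""

@dataclass
class Device:
    """Stand-in for the handset running the app in a virtual window."""
    executed: list = field(default_factory=list)

    def screenshot(self) -> bytes:
        return b"<pixels>"     # placeholder frame sent to the cloud

    def execute(self, action: Action) -> None:
        self.executed.append(action)

class CloudPlanner:
    """Stand-in for the cloud intelligence layer: given the latest
    screenshot and the user's request, return the next action."""
    def __init__(self, scripted_actions):
        self._script = iter(scripted_actions)

    def next_action(self, screenshot: bytes, request: str) -> Action:
        return next(self._script, Action(kind="done"))

def run_automation(request: str, device: Device, planner: CloudPlanner,
                   max_steps: int = 20) -> bool:
    """Loop until the planner signals completion. Each iteration is one
    cloud round trip, which is why every command carries real compute
    cost and why daily caps exist."""
    for _ in range(max_steps):
        action = planner.next_action(device.screenshot(), request)
        if action.kind == "done":
            return True
        device.execute(action)
    return False
```

The key point the sketch makes concrete: one user command fans out into multiple screenshot-upload/action-download round trips, so the marginal cost of a "single" request is several cloud inferences.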

What Gemini Screen Automation Actually Does

Screen automation is not to be confused with the broader Gemini Agent capability, which operates through a live browser instance in the cloud and remains exclusive to AI Ultra subscribers. Screen automation is narrower in scope but arguably more immediately useful for everyday tasks. It is designed to execute structured actions inside specific apps without the user having to navigate menus themselves.

At launch, Gemini screen automation supports six apps: Lyft, Uber, GrubHub, DoorDash, Uber Eats, and Starbucks. The choice of apps is telling. These are high-frequency, transactional platforms where users repeat the same sequences constantly: booking a ride, reordering a coffee, scheduling a grocery delivery. Automating these flows has genuine daily utility rather than serving as a novelty demonstration.

Supported commands include:

  • "Book a ride to the airport"
  • "Schedule a ride for tomorrow"
  • "Reorder my last coffee"
  • "Order pizza for delivery"
  • "Add milk and eggs to my grocery cart"
  • "Order groceries to my mum's house"

These are not complex multi-step reasoning tasks. They are routine, high-repetition actions where removing friction has real value. That said, as agentic AI capabilities mature, the expectation is that the supported app catalogue will expand significantly beyond this initial set.
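The commands above all reduce to a small set of structured, high-repetition intents routed to one of the six launch apps. As a minimal sketch — the patterns, app assignments, and routing logic are illustrative assumptions, not Google's actual implementation:

```python
# Illustrative sketch: mapping natural-language commands onto
# (app, intent) pairs. The routing table is a hypothetical example.
import re

# (regex pattern, target app, intent)
ROUTES = [
    (r"\bride\b.*\bairport\b", "Uber", "book_ride"),
    (r"\bschedule a ride\b", "Lyft", "schedule_ride"),
    (r"\breorder\b.*\bcoffee\b", "Starbucks", "reorder_last"),
    (r"\border pizza\b", "DoorDash", "order_delivery"),
    (r"\badd\b.*\bcart\b", "Uber Eats", "add_to_cart"),
]

def route(command: str):
    """Return (app, intent) for a recognised command, else None."""
    c = command.lower()
    for pattern, app, intent in ROUTES:
        if re.search(pattern, c):
            return app, intent
    return None
```

In practice the model does this mapping with far more flexibility than regexes allow, but the narrowness of the sketch mirrors the narrowness of the launch feature: a closed catalogue of apps, each with a short list of supported actions.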

"120 requests per day for AI Ultra subscribers at $249.99/month, compared to just 5 per day on the free tier" - Google Gemini subscription tier documentation

The Subscription Tier Problem

The usage caps expose a structural tension in how Google is monetising Gemini features. A free user who automates five tasks in a single morning is locked out for the remainder of the day. For a feature positioned as reducing friction in daily life, that constraint is significant. At AI Plus ($7.99/month), twelve requests per day is still modest for anyone relying on automation for both ride-booking and food ordering in the same day.

The jump from AI Pro at $19.99 to AI Ultra at $249.99 is particularly steep, and Ultra's 120-request daily allowance is the only point at which screen automation becomes usable as a genuine daily driver. For most consumers, that price point is prohibitive. The more realistic use case for AI Pro subscribers is occasional, task-specific automation rather than habitual use. As noted in our analysis of how AI is intensifying rather than reducing workloads, the friction of managing AI tool limits is becoming its own category of cognitive overhead.

"Subscribing to Google AI Ultra ($249.99) bumps the limit to 120 requests per day" - Google, Gemini feature documentation

The cost architecture also signals where Google sees the primary market. At $249.99 per month, AI Ultra is positioned for power users and early adopters, not mainstream consumers. Screen automation at that tier starts to look compelling. At lower tiers, it remains a preview of what the product could eventually become.

Subscription Tier   Monthly Cost   Screen Automation Requests/Day
Free                $0             5
Google AI Plus      $7.99          12
Google AI Pro       $19.99         20
Google AI Ultra     $249.99        120
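The tier caps above amount to a simple per-day counter keyed to subscription level. A minimal sketch, assuming a straightforward reset-at-midnight policy (the enforcement mechanics are an assumption; only the tier limits come from Google's published numbers):

```python
# Minimal sketch of per-tier daily request caps. Tier limits match
# Google's published numbers; the reset/enforcement logic is assumed.
from datetime import date

DAILY_CAPS = {"free": 5, "ai_plus": 12, "ai_pro": 20, "ai_ultra": 120}

class UsageMeter:
    def __init__(self, tier: str, today=date.today):
        self.cap = DAILY_CAPS[tier]
        self._today = today        # injectable clock, for testing
        self._day = today()
        self._used = 0

    def try_request(self) -> bool:
        """Count one automation request; refuse once today's cap is hit."""
        if self._today() != self._day:     # new day: reset the counter
            self._day, self._used = self._today(), 0
        if self._used >= self.cap:
            return False
        self._used += 1
        return True
```

Even at the Ultra cap, the arithmetic is stark: 120 requests across a $249.99 month works out to roughly $0.07 of subscription cost per request, a hint at what each cloud-driven action costs Google to serve.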

Gemini screen automation running on a Samsung Galaxy S26 in a real-world ride-booking scenario.

Device Availability and the Samsung Partnership

The initial rollout is limited to the Samsung Galaxy S26 series, available in two markets: the United States and South Korea. Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL support has been confirmed but is not yet live, with a US-only launch planned. The Samsung-first approach reflects Google's ongoing strategic partnership with the Korean electronics giant, which has seen Gemini deeply integrated into Samsung's One UI experience ahead of broader Android availability.

The Korea inclusion is noteworthy. It positions South Korea as a launch market alongside the US, rather than an afterthought in a phased Asia rollout. For Samsung's home market, where Galaxy devices command strong consumer loyalty, it provides a meaningful on-device differentiator. That said, the limited app catalogue at launch is heavily US-centric. Lyft, GrubHub, and DoorDash have little or no presence in South Korea, which means Korean Galaxy S26 users have far fewer practical automation options at this stage.

This kind of Android AI innovation is closely watched by device makers across Asia. Samsung's integration with Gemini is a direct strategic play, and desktop AI tools like Claude are pushing similar boundaries. The broader pattern is clear: AI assistants are moving from answering questions to taking actions across devices and platforms.

What This Means for Asia-Pacific

South Korea's inclusion as a day-one market for Gemini screen automation is a clear signal that Google views Asia-Pacific as central to its agentic AI ambitions, not a secondary market for delayed feature rollouts. Samsung's dominance in the Korean and broader Asian Android market means that any feature shipping on Galaxy S26 hardware reaches a substantial installed base quickly.

However, the current app catalogue represents a significant gap for most of Asia-Pacific. Grab, Gojek, foodpanda, and Lazada, the dominant super-app and delivery platforms across Southeast Asia, are absent from the initial supported list. Until Google expands screen automation to include regionally relevant apps, the feature's practical value in markets like Indonesia, Thailand, Malaysia, Singapore, and the Philippines remains limited. As reporting on Southeast Asia's AI development challenges has consistently highlighted, technology features built around Western app ecosystems frequently miss the mark in ASEAN markets.

Japan and China present separate dynamics. In Japan, Google services are present but consumer AI adoption in agentic formats is still developing. In China, where Google has no operational presence, the feature is irrelevant to the consumer market. The real battleground for agentic mobile AI in Asia is South Korea, Taiwan, Singapore, and Australia, where both the device ecosystem and app infrastructure align more closely with what Google has built. The data infrastructure challenges facing Southeast Asia's AI sector also apply here: without localised training data and regional app integrations, these tools will consistently underperform for Asian users compared to their US counterparts.

India is a wildcard. Android penetration is enormous, and the country has a thriving food delivery and ride-hailing ecosystem through apps like Swiggy, Zomato, and Ola. If Google adds these platforms to the screen automation catalogue, India becomes one of the highest-impact markets globally for this feature.

The Bigger Picture for Agentic AI on Mobile

Gemini screen automation, even in its constrained early form, represents a meaningful step in the transition from AI as a conversational tool to AI as an active participant in daily digital life. The question is not whether this capability will expand, but how quickly Google can scale the app integrations, manage the cloud infrastructure costs, and bring the pricing model to a point where mainstream users find it genuinely worthwhile.

The daily request caps are honest about the current cost reality. Cloud-side processing to interpret screens, determine actions, and execute them at low latency is expensive. Until on-device models become powerful enough to handle this locally, usage limits will remain a feature of every tier. For users curious about where this technology sits in the broader AI agent landscape, our explainer on what agentic AI actually means in practice provides useful grounding. The rapid growth of AI relative to traditional search also suggests that consumers are increasingly comfortable with AI taking actions on their behalf, which bodes well for adoption once the price and access model matures.

Frequently Asked Questions

What apps does Gemini screen automation currently support?

At launch, Gemini screen automation supports Lyft, Uber, GrubHub, DoorDash, Uber Eats, and Starbucks. Support is limited to these six apps, with expansion expected over time. The current app catalogue is heavily US-focused, limiting practical utility for users in Asia-Pacific markets.

Which Android devices support Gemini screen automation?

The Samsung Galaxy S26 series is the first device family to receive screen automation, available in the US and South Korea. Support for the Pixel 10, Pixel 10 Pro, and Pixel 10 Pro XL has been confirmed for the US but is not yet live at the time of writing.

How is Gemini screen automation different from Gemini Agent?

Screen automation runs a supported app in a virtual window on the device and uses cloud processing to execute in-app actions. Gemini Agent, by contrast, operates through a live browser instance entirely in the cloud and is currently exclusive to AI Ultra subscribers ($249.99/month), with a higher daily request allowance of 200 per day.

The AIinASIA View: Gemini screen automation is a genuinely useful step forward in mobile AI, but Google has structured the pricing so that it only becomes practical at a tier most consumers will not pay. The South Korea launch is encouraging, but until Grab, Gojek, and Zomato are on the supported list, this feature is largely irrelevant to the majority of Asia-Pacific users.

If Google added your most-used local app to the Gemini screen automation catalogue tomorrow, would that actually change how you use your phone? Drop your take in the comments below.


This is a developing story

We're tracking this across Asia-Pacific and may update with new developments, follow-ups and regional context.
