Google Gemini's inline image editing could reshape how mobile users create and refine AI visuals
Google is quietly streamlining one of the more frustrating aspects of its Gemini AI assistant: editing images you've just generated. A new inline editing interface, spotted in a beta version of the Google app, could soon let Android users modify AI-generated images without the clunky download-and-reattach workflow that currently exists. It's a small change on paper, but for the growing number of people using Gemini image editing on mobile, it matters considerably.
By The Numbers
- The feature was spotted in Google app version 17.8.59 (beta), requiring manual enablement to access.
- Gemini now runs on Imagen 3.1 Flash Image (referred to colloquially as Nano Banana 2), bringing pro-tier capabilities to a broader user base.
- An early web-based annotation version of this tool was discovered by leaker TestingCatalog in November 2024, suggesting several months of internal development.
- Google Gemini is available across more than 230 countries and territories, making mobile-first improvements especially significant in high-Android-penetration markets.
What the New Gemini Image Editing Feature Actually Does
Right now, editing a Gemini-generated image requires users to download it, return to the chat, and manually reattach the file before accessing markup tools. It's the kind of friction that quietly erodes user experience, particularly on mobile where file management is already cumbersome.
The new approach is considerably more elegant. A pencil icon now appears directly on the generated image, in the top-right corner. Tapping it opens the familiar markup interface inline, without leaving the conversation thread. From there, users can circle a specific portion of the image and type a prompt describing what they want changed. A text annotation tool is also available for more granular, multi-area edits.
Notably, this feature has not yet appeared in the web version of Gemini, which is unusual: Google typically rolls out new capabilities to the web before bringing them to Android. That inversion suggests the company may be deliberately prioritising mobile-first rollouts for image tools, where the bulk of casual creative usage happens. Even so, with the feature still gated behind a beta flag, a general release is likely weeks or even months away.
Imagen 3.1 and the Nano Banana Backstory
To understand why this update matters, it helps to know which model Gemini uses to generate images. The platform now runs on Imagen 3.1 Flash Image, colloquially called Nano Banana 2, an upgrade that brings some of the capabilities previously reserved for the more powerful Nano Banana Pro model to everyday users. The pitch is straightforward: more capable image generation at faster speeds, accessible without a premium subscription tier.
The editorial shorthand "Nano Banana" refers to Google's internal model naming, surfaced through leaks and beta testing. Whether the branding sticks publicly is another matter, but the underlying capability is real: faster generation, improved quality, and now potentially smoother editing.

The inline editing improvement is part of a broader pattern at Google: iterative, mobile-centric upgrades that chip away at workflow friction. For designers, marketers, and content creators who use AI image generation as part of their daily toolkit, these marginal improvements compound quickly.
How the Editing Flow Compares: Before and After
| Stage | Current Workflow | New Inline Editing |
|---|---|---|
| Access editing tools | Download image, reattach to chat | Tap pencil icon directly on image |
| Select area to edit | Manual selection after reattaching | Circle or annotate inline |
| Submit edit prompt | Type in chat after reattaching | Type in prompt box within markup interface |
| Multi-area edits | Requires repeated downloads | Text annotation tool handles multiple areas |
| Availability | Live now | Beta only; manual enablement required |
The Asia-Pacific Picture
Mobile-first AI tools are not a niche consideration in Asia-Pacific. They are the primary access point. Android commands dominant market share across Southeast Asia, South Asia, and much of East Asia, making any meaningful upgrade to the Android Gemini experience directly relevant to hundreds of millions of potential users.
In markets like India, where Android penetration is above 95% and Google is investing heavily in Gemini localisation, streamlined image editing tools could meaningfully accelerate adoption. Indian creators, small business owners, and digital marketers represent one of the fastest-growing demographics for generative AI tools, particularly on mobile. The business case for accessible AI image tools is especially strong in emerging markets where professional design software remains cost-prohibitive.
> Android holds over 72% of global smartphone market share, with penetration exceeding 90% in key Asia-Pacific markets including India, Indonesia, and Vietnam. (StatCounter, 2024)
In China, Gemini is not available, but the competitive implications are real. Domestic rivals including Baidu's Ernie Bot, ByteDance's tools, and a range of image generation platforms are all racing to reduce editing friction. Google's improvements raise the bar that Chinese developers will need to match for export-market competitiveness. For a broader view of how China is investing in AI infrastructure, see our coverage of China's AI five-year technology push.
Japan and South Korea, both strong Android markets with sophisticated creative and media industries, stand to benefit directly. Korean webtoon artists and Japanese illustrators already experiment extensively with AI-assisted image generation. Faster, more intuitive editing directly within Gemini could reduce dependency on third-party tools like Adobe Firefly or Canva's AI features.
What Developers and Power Users Should Watch
The feature is currently locked behind a manual enablement flag in the Google app beta. That means even users running version 17.8.59 cannot access it without deliberately activating it. This is standard practice for Google's staged rollouts, but it signals the company is still calibrating the experience before a wider release.
- Beta testers can explore the feature now via the Google app beta programme, though manual activation is required.
- Web users will likely wait longer, despite the web version typically being first to receive new Gemini features.
- Developers building on Gemini's API should note that inline editing signals a broader push toward conversational image refinement pipelines.
- Enterprise and creative users should monitor whether the markup tools eventually support layer-based or structured editing beyond basic prompt-driven changes.
For those tracking the broader competitive landscape, it's worth noting that rival model Claude has been gaining ground with users who value coherent, multi-step task handling. Our piece on why users are switching to Claude offers useful context on what Gemini is up against. Meanwhile, Google's own model rankings for specific use cases are increasingly scrutinised. Check out Google's own rankings of best AI models for Android development to understand how the company positions its tools internally.
The energy and infrastructure demands of scaling these AI image tools are also worth acknowledging. Faster, more iterative image generation at scale demands significant compute. Innovative approaches to that challenge, including floating data centres as a response to the AI energy crisis, are increasingly part of the conversation around sustainable AI deployment.
Frequently Asked Questions
How do I edit an image in Google Gemini right now?
Currently, editing a Gemini-generated image requires downloading it and then reattaching it to the chat to access markup tools. The new inline editing feature, which adds a pencil icon directly on generated images, is only available in the Google app beta (version 17.8.59) and must be manually enabled. It is not yet available to general users.
What is Nano Banana in Google Gemini?
"Nano Banana" is an informal name for Google's Imagen image generation model within Gemini. The current version, Nano Banana 2 (also referred to as Imagen 3.1 Flash Image), brings capabilities previously limited to higher-tier models to a broader user base, with faster generation speeds and improved image quality.
When will Gemini's inline image editing be available on Android?
There is no confirmed public release date. The feature has been spotted in beta and requires manual activation. Given that the web version of Gemini has not yet received it, a general Android rollout could still be weeks or months away.
The AIinASIA View: Reducing friction in AI image editing sounds like a minor UX fix, but in high-Android markets across Asia-Pacific it's a genuine capability unlock for millions of mobile-first creators. Google is quietly building the most accessible AI image workflow on any platform, and the implications for Asia's creative and small business economy are bigger than most Western coverage acknowledges.
If you use Gemini for image generation regularly, how much time are you currently losing to the download-and-reattach workaround, and would inline editing actually change how you use the tool day to day? Drop your take in the comments below.






