News
Adobe Jumps into AI Video: Exploring Firefly’s New Video Generator
Explore Adobe Firefly Video Generator for safe, AI-driven video creation from text or images, plus easy integration and flexible subscription plans
Published 3 months ago by AIinAsia
TL;DR – What You Need to Know in 30 Seconds
- Adobe Has Launched a New AI Video Generator: Firefly Video (beta) is now live for anyone who’s signed up for early access, promising safe and licensed content.
- Commercially Safe Creations: The video model is trained only on licensed and public domain content, reducing the headache of potential copyright issues.
- Flexible Usage: You can create 5-second, 1080p clips from text prompts or reference images, add extra effects, and blend seamlessly with Adobe’s other tools.
- Subscription Plans: Ranging from 10 USD to 30 USD per month, you’ll get a certain number of monthly generative credits to play with, along with free cloud storage.
So, What is the Adobe Firefly Video Generator?
If you’ve been keeping an eye on the AI scene, you’ll know it’s bursting with new tools left, right, and centre. But guess who has finally decided to join the party, fashionably late but oh-so-fancy? That’s right — Adobe! The creative software giant has just unveiled its generative AI video tool, Firefly Video Generator. Today, we’re taking a closer look at what it does, why it matters, and whether it’s worth your time.
If you’ve heard whispers about Adobe’s foray into AI, it’s all about Firefly — their suite of AI-driven creative tools. Adobe has now extended Firefly to video, letting you turn text or images into short video clips. At the moment, each clip is around five seconds long in 1080p resolution and spits out an MP4 file.
The great news is that Generate Video (beta) is now available. Powered by the Adobe Firefly Video Model, it lets you generate new, commercially safe video clips with the ease of creative AI.
The unique selling point is that Firefly’s videos are trained on licensed and public domain materials, so you can rest easy about copyright concerns. Whether you’re a content creator, a social media guru, or just love dabbling in AI, this tool might be your new favourite playground.
Getting Started: Text-to-Video in a Flash
Interested? Here’s the easiest way in:
- Sign In: Head over to firefly.adobe.com and log in or sign up for an Adobe account.
- Select “Text to Video”: Once logged in, you’ll see a selection of AI tools under the Featured tab. Pick “Text to Video,” and you’re in!
- Craft a Prompt: Type out a description of what you want to see. For best results, Adobe recommends specifying the shot type, character, action, location, and aesthetic — the more detail, the better, up to 175 words (there’s a small prompt-structuring sketch at the end of this section). For example:
Prompt: A futuristic cityscape at sunset with neon lights reflecting off wet pavement. The camera pans over a sleek, silver skyscraper, then zooms in on a group of drones flying in formation, their lights pulsating in sync with the city’s rhythm. The scene transitions to a close-up of a holographic advertisement displaying vibrant, swirling patterns. The video ends with a wide shot of the city, capturing the dynamic interplay of light and technology.
- Generate: Hit that generate button, and watch Firefly do its magic. Stick around on the tab while it’s generating, or else your progress disappears (a bit of a quirk if you ask me).
The end result is a 5-second video clip in MP4 format, complete with 1920 × 1080 resolution. You can’t exactly produce a Hollywood blockbuster here, but for quick, creative clips, it’s pretty handy.
Here’s another one:
A cheerful, pastel-colored cartoon rabbit wearing a pair of oversized sunglasses and a Hawaiian shirt. The rabbit is standing on a sunny beach, surrounded by palm trees and colorful beach balls. As it dances to upbeat music, it starts to juggle three beach balls while spinning around. The camera zooms out to show the rabbit’s shadow growing larger, transforming into a giant beach ball that bounces across the sand. The video ends with the rabbit laughing and winking at the camera.
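If you find yourself writing a lot of these prompts, a small helper script can keep each one aligned with Adobe’s guidance of naming the shot type, character, action, location, and aesthetic, and staying under the 175-word cap. The sketch below is purely our own illustration rather than an Adobe tool, and the field names are just a convenient way to organise a prompt:

```python
# Minimal prompt-builder sketch (our own illustration, not an Adobe tool).
# It assembles a text-to-video prompt from the elements Adobe recommends
# and flags anything over Firefly's 175-word guidance.

def build_prompt(shot_type: str, character: str, action: str,
                 location: str, aesthetic: str, max_words: int = 175) -> str:
    prompt = (
        f"{shot_type} of {character} {action} in {location}. "
        f"Aesthetic: {aesthetic}."
    )
    word_count = len(prompt.split())
    if word_count > max_words:
        raise ValueError(
            f"Prompt is {word_count} words; Firefly's guidance caps it at {max_words}."
        )
    return prompt

if __name__ == "__main__":
    print(build_prompt(
        shot_type="Wide tracking shot",
        character="a pastel-coloured cartoon rabbit in oversized sunglasses",
        action="juggling beach balls while dancing",
        location="a sunny beach lined with palm trees",
        aesthetic="bright, playful, saturated colours",
    ))
```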
Image-to-Video: Turn That Pic into Motion
If you prefer a visual reference to a text prompt, Firefly also has your back. You can upload an image — presumably one you own the rights to — and let the AI interpret that into video form. As Adobe warns:
“To use this feature, you must have the rights to any third-party images you upload. All images uploaded or content generated must meet our User Guidelines. Access will be revoked for any violation.”
Once uploaded, you can tweak the ratio, camera angle, motion, and more to shape your final clip. This is a brilliant feature if you’re working on something that requires a specific style or visual element and you’d like to keep that vibe across different shots.
A Dash of Sparkle: Adding Effects
A neat trick up Adobe’s sleeve is the ability to layer special effects like fire, smoke, dust particles, or water over your footage. The model can generate these elements against a black or green screen, so you can easily apply them as overlays in Premiere Pro or After Effects.
In practical terms, you could generate smoky overlays to give your scene a dramatic flair or sprinkle dust particles for a cinematic vibe. Adobe claims these overlays blend nicely with real-world footage, so that’s a plus for those who want to incorporate subtle special effects into their videos without shelling out for expensive stock footage.
How Much Does Adobe Firefly Cost?
There are two main plans if you decide to adopt Firefly into your daily workflow:
- Adobe Firefly Standard (10 USD/month)
  - You get 2,000 monthly generative credits for video and audio, which means you can generate up to 20 five-second videos and translate up to 6 minutes of audio and video.
  - Useful for quick clip creation, background experimentation, and playing with different styles in features like Text to Image and Generative Fill.
- Adobe Firefly Pro (30 USD/month)
  - This plan offers 7,000 monthly generative credits for video and audio, allowing you to generate up to 70 five-second videos and translate up to 23 minutes of audio and video.
  - Great for those looking to storyboard entire projects, produce b-roll, and match audio cues for more complex productions.
Both plans also include 100 GB of cloud storage, so you don’t have to worry too much about hoarding space on your own system. They come in monthly or annual prepaid options, and you can cancel anytime without fees — quite flexible, which is nice.
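Before moving on, it’s worth doing the arithmetic on those figures (our own back-of-the-envelope sums, assuming every credit is spent on video): both plans work out to roughly 100 credits per five-second clip, or about 0.50 USD per clip on Standard and 0.43 USD on Pro.

```python
# Back-of-the-envelope credit maths using the plan figures quoted above,
# assuming every credit goes on video generation rather than audio translation.
plans = {
    "Firefly Standard": {"price_usd": 10, "credits": 2_000, "videos": 20},
    "Firefly Pro":      {"price_usd": 30, "credits": 7_000, "videos": 70},
}

for name, p in plans.items():
    credits_per_video = p["credits"] / p["videos"]   # ~100 for both plans
    cost_per_video = p["price_usd"] / p["videos"]    # ~0.50 USD and ~0.43 USD
    print(f"{name}: ~{credits_per_video:.0f} credits, ~{cost_per_video:.2f} USD per 5-second clip")
```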
First Impressions: Late to the Party?
Overall, Firefly’s biggest plus is its library of training data. Because it only uses Adobe-licensed or public domain content, creators can produce videos without fear of accidental infringement. This is a big deal, considering how many generative AI tools out there scrape the web, causing all sorts of copyright drama.
Adobe’s integration with its existing ecosystem is another big draw. If you’re already knee-deep in Premiere Pro and After Effects, having a built-in system for AI-generated overlays, quick b-roll clips, and atmospheric effects might streamline your workflow.
But let’s be honest: the AI video space is already pretty jam-packed. Competitors like Runway, Kling, and Sora from OpenAI have been around for a while, offering equally interesting features. So the question is, does Firefly do anything better or more reliably than the rest? You’ll have to try it out for yourself (and please let us know your thoughts in the comments below).
That ‘late to the party’ sentiment might ring true until Adobe packs in some advanced features or speeds up its render times. However, you can’t knock it until you’ve tried it. Adobe does offer free video generation credits, so have a go. Generate your own videos, add flaming overlays, and see if the results vibe with your style.
Will Adobe’s trusted brand name and integrated workflow features push Firefly Video Generator to the top of the AI video world? Or is this too little, too late?
Ultimately, you’re the judge. The AI video revolution is in full swing, and each platform has its own perks and quirks.
Wrapping Up & Parting Thoughts
Adobe’s Firefly Video Generator is an exciting new player that’s sure to turn heads. If you’re already an Adobe devotee, it makes sense to give it a whirl and see how seamlessly it slides into your existing workflow. You’ll enjoy its straightforward interface, the security of licensed content, and some neat editing options.
But with so many alternatives on the market, is Firefly truly innovative, or just the next step in AI’s unstoppable march through our creative spaces?
Could Adobe’s pedigree and safe licensing edge truly redefine AI video for commercial use, or is the industry already oversaturated with better and bolder solutions?
You may also like:
- Revolutionising the Creative Scene: Adobe’s AI Video Tools Challenge Tech Giants
- Adobe’s GenAI is Revolutionising Music Creation
- Try out the Adobe Firefly Video Generator for free at firefly.adobe.com
News
If AI Kills the Open Web, What’s Next?
Exploring how AI is transforming the open web, the rise of agentic AI, and emerging monetisation models like microtransactions and stablecoins.
Published May 28, 2025, by AIinAsia
The web is shifting from human-readable pages to machine-mediated experiences as AI reshapes the future of the open web. What comes next may be less open, but potentially more useful.
TL;DR — What You Need To Know
- AI is reshaping web navigation: Google’s AI Overviews and similar tools provide direct answers, reducing the need to visit individual websites.
- Agentic AI is on the rise: Autonomous AI agents are beginning to perform tasks like browsing, shopping, and content creation on behalf of users.
- Monetisation models are evolving: Traditional ad-based revenue is declining, with microtransactions and stablecoins emerging as alternative monetisation methods.
- The open web faces challenges: The shift towards AI-driven interactions threatens the traditional open web model, raising concerns about content diversity and accessibility.
The Rise of Agentic AI
The traditional web, characterised by human users navigating through hyperlinks and search results, is undergoing a transformation. AI-driven tools like Google’s AI Overviews now provide synthesised answers directly on the search page, reducing the need for users to click through to individual websites.
This shift is further amplified by the emergence of agentic AI—autonomous agents capable of performing tasks such as browsing, shopping, and content creation without direct human intervention. For instance, Opera’s new AI browser, Opera Neon, can automate internet tasks using contextual awareness and AI agents.
These developments suggest a future where AI agents act as intermediaries between users and the web, fundamentally altering how information is accessed and consumed.
Monetisation in the AI Era
The traditional ad-based revenue model that supported much of the open web is under threat. As AI tools provide direct answers, traffic to individual websites declines, impacting advertising revenues.
In response, new monetisation strategies are emerging. Microtransactions facilitated by stablecoins offer a way for users to pay small amounts for content or services, enabling creators to earn revenue directly from consumers. Platforms like AiTube are integrating blockchain-based payments, allowing creators to receive earnings through stablecoins across multiple protocols.
This model not only provides a potential revenue stream for content creators but also aligns with the agentic web’s emphasis on seamless, automated interactions.
The Future of the Open Web
The open web, once a bastion of free and diverse information, is facing significant challenges. The rise of AI-driven tools and platforms threatens to centralise information access, potentially reducing the diversity of content and perspectives available to users.
However, efforts are underway to preserve the open web’s principles. Initiatives like Microsoft’s NLWeb aim to create open standards that allow AI agents to access and interact with web content in a way that maintains openness and interoperability.
The future of the web may depend on balancing the efficiency and convenience of AI-driven tools with the need to maintain a diverse and accessible information ecosystem.
What Do YOU Think?
As AI impacts the future of the open web, we must consider how to preserve the values of openness, diversity, and accessibility. How can we ensure that the web remains a space for all voices, even as AI agents become the primary means of navigation and interaction?
You may also like:
- Top 10 AI Trends Transforming Asia by 2025
- Build Your Own Agentic AI — No Coding Required
- Is AI Really Paying Off? CFOs Say ‘Not Yet’
- Or tap here to explore the free version of Claude AI.
News
GPT-5 Is Less About Revolution, More About Refinement
This article explores OpenAI’s development of GPT-5, focusing on improving user experience by unifying AI tools and reducing the need for manual model switching. It includes insights from VP of Research Jerry Tworek on token growth, benchmarks, and the evolving role of humans in the AI era.
Published May 22, 2025, by AIinAsia
OpenAI’s next model isn’t chasing headlines; the launch of GPT-5 is about unifying tools into a smoother, smarter user experience with fewer interruptions.
TL;DR — What You Need To Know
- GPT-5 aims to unify OpenAI’s tools, reducing the need for switching between models
- The Operator screen agent is due for an upgrade, with a push towards becoming a desktop-level assistant
- Token usage continues to rise, suggesting growing AI utility and infrastructure demand
- Benchmarks are losing their relevance, with real-world use cases taking centre stage
- OpenAI believes AI won’t replace humans but may reshape human labour roles
A more cohesive AI experience, not a leap forward
While GPT-4 dazzled with its capabilities, GPT-5 appears to be a quieter force, according to OpenAI’s VP of Research, Jerry Tworek. Speaking during a recent Reddit Q&A with the Codex team, Tworek described the new model as a unifier—not a disruptor.
“We just want to make everything our models can currently do better and with less model switching,” Tworek said. That means streamlining the experience so users aren’t constantly toggling between tools like Codex, Operator, Deep Research and memory functions.
For OpenAI, the future lies in integration over invention. Instead of introducing radically new features, GPT-5 focuses on making the existing stack work together more fluidly. This approach marks a clear departure from the hype-heavy rollouts often associated with new model versions.
Operator: from browser control to desktop companion
One of the most interesting pieces in this puzzle is Operator, OpenAI’s still-experimental screen agent. Currently capable of basic browser navigation, it’s more novelty than necessity. But that may soon change.
An update to Operator is expected “soon,” with Tworek hinting it could evolve into a “very useful tool.” The goal? A kind of AI assistant that handles your screen like a power user, automating online tasks without constantly needing user prompts.
The update is part of a broader push to make AI tools feel like one system, rather than a toolkit you have to learn to assemble. That shift could make screen agents like Operator truly indispensable—especially in Asia, where mobile-first behaviour and app fragmentation often define the user journey.
Integration efforts hit reality checks
Originally, OpenAI promised that GPT-5 would merge the GPT and “o” model series into a single omnipotent system. But as with many grand plans in AI, the reality was less elegant.
In April, CEO Sam Altman admitted the challenge: full integration proved more complex than expected. Instead, the company released o3 and o4-mini as standalone models, tailored for reasoning.
Tworek confirmed that the vision of reduced model switching is still alive—but not at the cost of model performance. Users will still see multiple models under the hood; they just might not have to choose between them manually.
Tokens and the long road ahead
If you think the token boom is a temporary blip, think again. Tworek addressed a user scenario where AI assistants might one day process 100 tokens per second continuously, reading sensors, analysing messages, and more.
That, he says, is entirely plausible. “Even if models stopped improving,” Tworek noted, “they could still deliver a lot of value just by scaling up.”
This perspective reflects a strategic bet on infrastructure. OpenAI isn’t just building smarter models; it’s betting on broader usage. Token usage becomes a proxy for economic value—and infrastructure expansion the necessary backbone.
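To put that 100-tokens-per-second scenario in perspective, here’s a quick back-of-the-envelope calculation of our own (not an OpenAI figure): an assistant streaming continuously at that rate would get through roughly 8.6 million tokens a day, or over three billion a year.

```python
# Rough scale of the "100 tokens per second, continuously" scenario.
tokens_per_second = 100
seconds_per_day = 24 * 60 * 60                       # 86,400

tokens_per_day = tokens_per_second * seconds_per_day
tokens_per_year = tokens_per_day * 365

print(f"{tokens_per_day:,} tokens per day")                    # 8,640,000
print(f"{tokens_per_year / 1e9:.1f} billion tokens per year")  # ~3.2
```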
Goodbye benchmarks, hello real work
When asked to compare GPT with rivals like Claude or Gemini, Tworek took a deliberately contrarian stance. Benchmarks, he suggested, are increasingly irrelevant.
“They don’t reflect how people actually use these systems,” he explained, noting that many scores are skewed by targeted fine-tuning.
Instead, OpenAI is doubling down on real-world tasks as the truest test of model performance. The company’s ambition? To eliminate model choice altogether. “Our goal is to resolve this decision paralysis by making the best one.”
The human at the helm
Despite AI’s growing power, Tworek offered a thoughtful reminder: some jobs will always need humans. While roles will evolve, the need for oversight won’t go away.
“In my view, there will always be work only for humans to do,” he said. The “last job,” he suggested, might be supervising the machines themselves—a vision less dystopian, more quietly optimistic.
For Asia’s fast-modernising economies, that might be a signal to double down on education, critical thinking, and human-centred design. The jobs of tomorrow may be less about doing, and more about directing.
You May Also Like:
- ChatGPT-5 Is Coming in 2024: Sam Altman
- Revolutionise Your Designs with Canva’s AI-Powered Magic Tools
- Revolutionising Critical Infrastructure: How AI is Becoming More Reliable and Transparent
- Or tap here to try the free version of ChatGPT.
Business
Apple’s China AI pivot puts Washington on edge
Apple’s partnership with Alibaba to deliver AI services in China has sparked concern among U.S. lawmakers and security experts, highlighting growing tensions in global technology markets.
Published May 21, 2025, by AIinAsia
As Apple courts Alibaba for its iPhone AI partnership in China, U.S. lawmakers see more than just a tech deal taking shape.
TL;DR — What You Need To Know
- Apple has reportedly selected Alibaba’s Qwen AI model to power its iPhone features in China
- U.S. lawmakers and security officials are alarmed over data access and strategic implications
- The deal has not been officially confirmed by Apple, but Alibaba’s chairman has acknowledged it
- China remains a critical market for Apple amid declining iPhone sales
- The partnership highlights the growing difficulty of operating across rival tech spheres
Apple Intelligence meets the Great Firewall
Apple’s strategic pivot to partner with Chinese tech giant Alibaba for delivering AI services in China has triggered intense scrutiny in Washington. The collaboration, necessitated by China’s blocking of OpenAI services, raises profound questions about data security, technological sovereignty, and the intensifying tech rivalry between the United States and China. As Apple navigates declining iPhone sales in the crucial Chinese market, this partnership underscores the increasing difficulty for multinational tech companies to operate seamlessly across divergent technological and regulatory environments.
Apple Intelligence Meets Chinese Regulations
When Apple unveiled its ambitious “Apple Intelligence” system in June, it marked the company’s most significant push into AI-enhanced services. For Western markets, Apple seamlessly integrated OpenAI’s ChatGPT as a cornerstone partner for English-language capabilities. However, this implementation strategy hit an immediate roadblock in China, where OpenAI’s services remain effectively banned under the country’s stringent digital regulations.
Faced with this market-specific challenge, Apple initiated discussions with several Chinese AI leaders to identify a compliant local partner capable of delivering comparable functionality to Chinese consumers. The shortlist reportedly included major players in China’s burgeoning AI sector:
- Baidu, known for its Ernie Bot AI system
- DeepSeek, an emerging player in foundation models
- Tencent, the social media and gaming powerhouse
- Alibaba, whose open-source Qwen model has gained significant attention
While Apple has maintained its characteristic silence regarding partnership details, recent developments strongly suggest that Alibaba’s Qwen model has emerged as the chosen solution. The arrangement was seemingly confirmed when Alibaba’s chairman made an unplanned reference to the collaboration during a public appearance.
“Apple’s decision to implement a separate AI system for the Chinese market reflects the growing reality of technological bifurcation between East and West. What we’re witnessing is the practical manifestation of competing digital sovereignty models.”
Washington’s Mounting Concerns
The revelation of Apple’s China-specific AI strategy has elicited swift and pronounced reactions from U.S. policymakers. Members of the House Select Committee on China have raised alarms about the potential implications, with some reports indicating that White House officials have directly engaged with Apple executives on the matter.
Representative Raja Krishnamoorthi of the House Intelligence Committee didn’t mince words, describing the development as “extremely disturbing.” His reaction encapsulates broader concerns about American technological advantages potentially benefiting Chinese competitors through such partnerships.
Greg Allen, Director of the Wadhwani A.I. Centre at CSIS, framed the situation in competitive terms:
“The United States is in an AI race with China, and we just don’t want American companies helping Chinese companies run faster.”
The concerns expressed by Washington officials and security experts include:
- Data Sovereignty Issues: Questions about where and how user data from AI interactions would be stored, processed, and potentially accessed
- Model Training Advantages: Concerns that the vast user interactions from Apple devices could help improve Alibaba’s foundational AI models
- National Security Implications: Worries about whether sensitive information could inadvertently flow through Chinese servers
- Regulatory Compliance: Questions about how Apple will navigate China’s content restrictions and censorship requirements
In response to these growing concerns, U.S. agencies are reportedly discussing whether to place Alibaba and other Chinese AI companies on a restricted entity list. Such a designation would formally limit collaboration between American and Chinese AI firms, potentially derailing arrangements like Apple’s reported partnership.
Commercial Necessities vs. Strategic Considerations
Apple’s motivation for pursuing a China-specific AI solution is straightforward from a business perspective. China remains one of the company’s largest and most important markets, despite recent challenges. Earlier this spring, iPhone sales in China declined by 24% year over year, highlighting the company’s vulnerability in this critical market.
Without a viable AI strategy for Chinese users, Apple risks further erosion of its market position at precisely the moment when AI features are becoming central to consumer technology choices. Chinese competitors like Huawei have already launched their own AI-enhanced smartphones, increasing pressure on Apple to respond.
“Apple faces an almost impossible balancing act. They can’t afford to offer Chinese consumers a second-class experience by omitting AI features, but implementing them through a Chinese partner creates significant political exposure in the U.S.”
The situation is further complicated by China’s own regulatory environment, which requires foreign technology companies to comply with data localisation rules and content restrictions. These requirements effectively necessitate some form of local partnership for AI services.
A Blueprint for the Decoupled Future?
Whether Apple’s partnership with Alibaba proceeds as reported or undergoes modifications in response to political pressure, the episode provides a revealing glimpse into the fragmenting global technology landscape.
As digital ecosystems increasingly align with geopolitical boundaries, multinational technology firms face increasingly complex strategic decisions:
- Regionalised Technology Stacks: Companies may need to develop and maintain separate technological implementations for different markets
- Partnership Dilemmas: Collaborations beneficial in one market may create political liabilities in others
- Regulatory Navigation: Operating across divergent regulatory environments requires sophisticated compliance strategies
- Resource Allocation: Developing market-specific solutions increases costs and complexity
What we’re seeing with Apple and Alibaba may become the norm rather than the exception. The era of frictionless global technology markets is giving way to one where regional boundaries increasingly define technological ecosystems.
Looking Forward
For now, Apple Intelligence has no confirmed launch date for the Chinese market. However, with new iPhone models traditionally released in autumn, Apple faces mounting time pressure to finalise its AI strategy.
The company’s eventual approach could signal broader trends in how global technology firms navigate an increasingly bifurcated digital landscape. Will companies maintain unified global platforms with minimal adaptations, or will we see the emergence of fundamentally different technological experiences across major markets?
As this situation evolves, it highlights a critical reality for the technology sector: in an era of intensifying great power competition, even seemingly routine business decisions can quickly acquire strategic significance.
You May Also Like:
- Alibaba’s AI Ambitions: Fueling Cloud Growth and Expanding in Asia
- Apple Unleashes AI Revolution with Apple Intelligence: A Game Changer in Asia’s Tech Landscape
- Apple and Meta Explore AI Partnership