AI in ASIA

Revolutionising Video Creation: Adobe's Upcoming Generative AI Tools

Adobe launches text-to-video and image-to-video AI tools, offering commercially safe generative capabilities with adjustable camera controls for creators.

Intelligence Desk • 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Adobe launches text-to-video and image-to-video AI tools through standalone Firefly application

Tools generate 5-second clips with adjustable camera controls for professional video creation

Firefly uses commercially safe training data from licensed content and Adobe Stock imagery

86% of creators actively use generative AI with 76% reporting business growth benefits

Adobe Firefly ARR exceeded $250 million with video actions growing 8x year-over-year


Adobe Unveils Next-Generation Video AI Tools

Adobe is set to transform video creation with its latest generative AI capabilities, introducing text-to-video and image-to-video features that promise to reshape how creators approach visual content. These tools, launching initially through the standalone Firefly application, represent a significant leap forward in accessible video production technology.

The new features allow users to generate short video clips simply by typing descriptions or using reference images. What sets Adobe's approach apart is the inclusion of adjustable camera controls, enabling creators to simulate different angles, motion effects, and shooting distances with unprecedented precision.

Breaking Down the Feature Set

Adobe's text-to-video functionality transforms written descriptions into dynamic visual content, whilst the image-to-video feature generates clips using specific reference images. Both tools produce videos with a maximum length of five seconds, positioning them as ideal solutions for social media content, B-roll footage, and presentation enhancements.

The company is also introducing "Generative Extend" for Premiere Pro, which lengthens existing video footage much as Photoshop's Generative Expand tool extends images. This feature addresses a common post-production challenge by filling gaps or extending scenes seamlessly.

"Creators today aren't passively using creative generative AI, they're intentionally curating the tools they trust... 76 per cent of creators say creative generative AI is positively shaping the creator economy." , Mike Polner, Vice President & Head of Product Marketing for Creators at Adobe

By The Numbers

  • Adobe's Firefly ARR exceeded $250 million in Q1 2026, with video generative actions growing more than 8x year-over-year
  • 86% of global creators actively use creative generative AI, with 76% reporting business or follower growth
  • 22% of video ad creative was built or enhanced using generative AI in 2024, projected to reach 39% by 2026
  • Adobe Firefly generated 24 billion assets by May 2025, up from one billion in June 2023
  • 52% of creators identify video generation as their top use case for generative AI
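The growth figures above can be sanity-checked with some quick back-of-the-envelope arithmetic. The sketch below derives the implied growth rates from the statistics as reported; the intermediate values (23-month window, compound monthly rate) are our own illustrative calculations, not Adobe's.

```python
# Back-of-the-envelope checks on the reported Firefly growth figures.

# Assets generated: 1 billion (June 2023) -> 24 billion (May 2025),
# roughly 23 months apart.
start_assets = 1e9
end_assets = 24e9
months = 23

# Overall growth multiple and the compound monthly rate it implies.
growth_multiple = end_assets / start_assets            # 24x overall
monthly_rate = growth_multiple ** (1 / months) - 1     # average monthly growth

# Share of video ad creative built or enhanced with generative AI:
# 22% in 2024, projected to reach 39% by 2026.
share_2024, share_2026 = 0.22, 0.39
added_share = share_2026 - share_2024                  # gain in share

print(f"Asset growth multiple: {growth_multiple:.0f}x")
print(f"Implied average monthly growth: {monthly_rate:.1%}")
print(f"Projected gain in AI-assisted ad share: {added_share:.0%}")
```

Even spread over nearly two years, a 24x increase works out to roughly 15 per cent compound growth every month, which puts the headline numbers in perspective.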

Commercial Safety and Legal Compliance

One of Adobe's key differentiators lies in Firefly's training methodology. The model uses openly licensed content, public domain materials, and Adobe Stock imagery, addressing copyright concerns that plague many AI-generated content tools. This "commercially safe" approach is particularly crucial for professional creators who require legal certainty in their work.

The integration strategy mirrors broader trends in generative AI adoption across Asia, where organisations prioritise robust, legally sound solutions. Adobe plans to eventually integrate these video tools into Creative Cloud, Experience Cloud, and Adobe Express, making them accessible across its entire ecosystem.

"Generative AI experimentation pays off: 76% of organisations report improvements driven by generative AI in volume and speed of content ideation and production." , Adobe 2026 AI and Digital Trends Report

Market Competition and Positioning

Adobe's entry intensifies competition in the AI video generation space, where rivals like Meta's Movie Gen and Chinese platforms such as Kling are reshaping Asian filmmaking. The five-second video limit positions Adobe's tools more as enhancement features rather than comprehensive video creation solutions.

Feature              Adobe Firefly            OpenAI Sora    Meta Movie Gen
Video Length         5 seconds                60 seconds     16 seconds
Commercial Safety    Yes (licensed training)  Limited        Limited
Camera Controls      Yes                      Basic          Advanced
Integration          Creative Cloud planned   Standalone     Facebook/Instagram

The applications extend beyond individual creators to enterprise use cases:

  • Marketing teams can rapidly prototype video concepts without extensive production resources
  • Social media managers can generate consistent visual content at scale
  • Educational institutions can create engaging instructional materials with minimal budget
  • Small businesses can produce professional-looking promotional content without video production expertise
  • Content creators can fill production gaps and extend existing footage for comprehensive storytelling

Implementation Timeline and Access

The rollout begins with beta access through the standalone Firefly application, allowing Adobe to gather user feedback and refine capabilities before broader integration. This staged approach reflects lessons learned from other AI video tools in the market, where rushed launches often result in user experience issues.

Creative Cloud integration represents the ultimate goal, positioning these tools alongside Adobe's established video editing workflow. This could significantly change how businesses adopt generative AI for content creation across industries.

How long are the generated videos?

Adobe's current video generation tools produce clips with a maximum length of five seconds. Whilst this may seem limiting, it's designed for specific use cases like social media content, B-roll footage, and quick visual enhancements rather than full-length video production.

What makes Adobe's approach "commercially safe"?

Adobe trains its Firefly model exclusively on openly licensed content, public domain materials, and Adobe Stock imagery. This approach reduces copyright infringement risks compared to models trained on broader internet content, making it more suitable for commercial use.

When will these tools be available in Creative Cloud?

Adobe plans to integrate the video generation features into Creative Cloud, Experience Cloud, and Adobe Express following the initial beta release through the standalone Firefly application. No specific timeline has been announced for full integration.

How do the camera controls work in generated videos?

Users can adjust camera angles, motion effects, and shooting distances within the generated content. This provides creative control over the final output, allowing creators to match their specific vision and production needs.
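Adobe has not published a public API for these controls, so purely as a hypothetical illustration of the kind of parameters such a feature might expose, a request could look something like the sketch below. Every name here (CameraControls, VideoRequest, the field names) is invented for illustration and is not Adobe's actual interface; only the five-second cap comes from the article.

```python
from dataclasses import dataclass

# Hypothetical parameter set -- illustrative only, not Adobe's actual API.
@dataclass
class CameraControls:
    angle: str = "eye-level"       # e.g. "low", "high", "overhead"
    motion: str = "static"         # e.g. "pan", "zoom-in", "dolly"
    shot_distance: str = "medium"  # e.g. "close-up", "wide"

@dataclass
class VideoRequest:
    prompt: str
    duration_seconds: int = 5      # current Firefly maximum per the article
    camera: CameraControls = None

    def __post_init__(self):
        if self.camera is None:
            self.camera = CameraControls()
        if self.duration_seconds > 5:
            raise ValueError("clips are capped at five seconds")

# A creator describing a shot plus the camera treatment they want.
request = VideoRequest(
    prompt="A drone shot over a neon-lit street market at night",
    camera=CameraControls(angle="high", motion="pan", shot_distance="wide"),
)
print(request.camera.motion)  # pan
```

The point of the sketch is the separation of concerns: the prompt describes the subject, while a distinct set of camera parameters controls angle, motion, and distance independently of the description.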

What are the main use cases for five-second video clips?

Short-form content works well for social media posts, product demonstrations, presentation enhancements, B-roll footage insertion, and filling gaps in existing video projects. The brevity encourages focused, impactful visual communication.

The AIinASIA View: Adobe's measured approach to video AI reflects strategic wisdom in a rapidly evolving market. By prioritising commercial safety and Creative Cloud integration over maximum video length, the company positions itself for sustainable growth rather than flashy headlines. The five-second limit isn't a limitation but a feature, encouraging creators to think in precise, impactful moments. We expect this approach will resonate particularly well with professional creators who value legal certainty over experimental capabilities. Adobe's success will ultimately depend on execution quality and seamless workflow integration rather than raw technical specifications.

Adobe's entry into AI video generation marks another significant milestone in creative technology evolution. The combination of accessible text-to-video capabilities, robust legal framework, and eventual Creative Cloud integration creates a compelling proposition for creators ranging from individual influencers to enterprise marketing teams.

The five-second constraint may initially disappoint some users expecting longer-form content generation, but this limitation could prove strategic. It forces creators to distill ideas into precise, impactful moments whilst ensuring manageable computational requirements and faster processing times.

As the video AI landscape continues evolving, Adobe's focus on commercial safety and workflow integration may prove more valuable than raw technical capabilities. What aspects of Adobe's video AI strategy do you think will matter most for your creative workflow? Drop your take in the comments below.


YOUR TAKE

We cover the story. You tell us what it means on the ground.




Latest Comments (5)

Rachel Foo (@rachelf) · 12 February 2026

the "maximum length of five seconds" for these generated video clips... I can already hear my compliance dept. sighing. trying to explain how a 5-sec AI video is auditable for brand guidelines is gonna be another fun meeting. at least it's not a full-length movie, I guess.

Mike Chen (@mikechen) · 9 January 2026

hey this is a good overview. i've been following adobe's AI stuff for a bit. curious how they'll handle the cost model for these short 5-second video generations in Firefly.

Lakshmi Reddy (@lakshmi.r) · 23 December 2024

Adobe's approach to adjustable camera controls for text-to-video generation is actually quite smart. In NLP, we're finding that granular control over output parameters, even for short-form content, significantly boosts user adoption and creative utility, especially when dealing with diverse linguistic or cultural nuances.

Wei Ming Tan (@weiming) · 2 December 2024

the five-second limit for generated video clips makes total sense for an initial beta. when we're rolling out new features in our systems, especially anything AI-driven, we always start with tight constraints. helps manage compute resources, sure, but more importantly, it lets us get early user feedback on core functionality without overwhelming the models or our infrastructure. you learn a lot from how people use those short bursts of content in real workflows before scaling up.

Nguyen Minh (@nguyenm) · 21 October 2024

Five seconds maximum for the video length is a bit short. For our projects at FPT, even short explainer clips usually need at least 10-15 seconds to convey anything useful. Will be interesting to see if they extend this.
