
Revolutionising Video Creation: Adobe's Upcoming Generative AI Tools

Adobe's upcoming generative AI video tools promise to revolutionise video creation and editing with innovative features like text-to-video and image-to-video.

Intelligence Desk · 4 min read

AI Snapshot

The TL;DR: what matters, fast.

Adobe is launching generative AI video tools including text-to-video and image-to-video features.

These tools generate video clips up to five seconds long and offer adjustable camera controls for customisation.

Adobe’s Firefly model is commercially safe due to its training data and will integrate into Creative Cloud and other Adobe applications.

Who should pay attention: Video creators | Generative AI developers | Digital artists | Creative industry

What changes next: Adobe will integrate these tools into Creative Cloud following a beta release.


In the rapidly evolving world of artificial intelligence, Adobe is set to make waves with its upcoming generative AI video tools. These innovative features promise to transform the way we create and edit videos, making the process more accessible and efficient than ever before. Let's dive into the exciting developments Adobe has in store for us.

Text-to-Video: A New Era of Video Creation

Adobe's new text-to-video feature is a game-changer. With this tool, users can generate video clips simply by typing a description. Imagine the possibilities: from creating quick clips for social media to enhancing presentations with dynamic visuals, the applications are endless.

The generated videos are not just static; they come with adjustable camera controls. Users can simulate different camera angles, motion, and shooting distances, adding a layer of customisation that brings the videos to life. This level of control is a significant step forward in AI-generated content, offering creators the flexibility to match their vision precisely.
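To make the camera controls concrete, here is a minimal sketch of what a text-to-video request might look like. Adobe has not published this API: the function name, endpoint fields, and parameter values (`angle`, `motion`, `shot_distance`) are hypothetical illustrations of the controls described above, not Adobe's actual interface.

```python
# Hypothetical sketch only: field names and values are assumptions,
# not Adobe's published Firefly API.

def build_text_to_video_request(prompt, angle="eye-level",
                                motion="pan-left", shot="wide",
                                seconds=5):
    """Assemble an illustrative request payload for a text-to-video call.

    Clamps the requested duration to the five-second cap the beta imposes.
    """
    seconds = min(seconds, 5)  # beta limit on generated clip length
    return {
        "prompt": prompt,
        "camera": {
            "angle": angle,           # simulated camera angle
            "motion": motion,         # simulated camera movement
            "shot_distance": shot,    # simulated shooting distance
        },
        "duration_seconds": seconds,
    }

payload = build_text_to_video_request(
    "a red lantern drifting over a night market", seconds=8)
print(payload["duration_seconds"])  # clamped to 5
```

The point of the sketch is the shape of the controls: camera angle, motion, and shot distance are ordinary parameters a creator can vary, rather than properties baked into the prompt text.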

Image-to-Video: Bridging the Gap

In addition to text-to-video, Adobe is introducing an image-to-video feature. This tool can generate video clips using specific reference images, making it ideal for creating additional B-roll footage or patching gaps in production timelines. For content creators, this means less time spent on reshoots and more time dedicated to creative work.
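An image-to-video request would presumably pair a text prompt with an encoded reference image. Again, this is a hedged sketch under assumed field names, not Adobe's documented interface; only the five-second cap comes from the article.

```python
# Hypothetical sketch: payload shape for an image-to-video request.
# Field names are illustrative assumptions, not Adobe's published API.
import base64

def build_image_to_video_request(image_bytes, prompt, seconds=5):
    """Bundle a reference image (base64-encoded) with a prompt,
    e.g. to generate matching B-roll from an existing still."""
    return {
        "prompt": prompt,
        "reference_image": base64.b64encode(image_bytes).decode("ascii"),
        "duration_seconds": min(seconds, 5),  # five-second beta cap
    }

req = build_image_to_video_request(
    b"<raw image bytes>", "slow dolly toward the storefront")
```

Encoding the still as part of the request is what lets the generated clip stay visually consistent with footage already in the timeline.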

The quality of the generated videos is impressive, rivalling what we've seen from OpenAI's Sora model. However, there is a catch: the videos produced by these features have a maximum length of five seconds. While this might seem limiting, these tools are still in their early stages, and improvements are likely on the horizon. For a deeper dive into the technical aspects of video generation, you might find this research paper on Generative Adversarial Networks for video generation insightful.

Commercially Safe and Integrated

One of the standout advantages of Adobe's Firefly model is its commitment to being "commercially safe." Trained on openly licensed, public domain, and Adobe Stock content, Firefly aims to reduce concerns about copyright infringement. This is a crucial consideration for professionals who need to ensure their work is legally sound.

The text-to-video and image-to-video features will initially be available in beta as a standalone Firefly application. Adobe then plans to integrate these tools into its Creative Cloud, Experience Cloud, and Adobe Express applications, making them accessible to a wide range of users. This staged rollout mirrors the broader trend of executives treading carefully on generative AI adoption, favouring solutions that are robust and legally sound.

Generative Extend: Stretching the Limits

Adobe is also introducing the "Generative Extend" feature for Premiere Pro. This tool can extend the length of existing video footage, similar to Photoshop's Generative Expand tool for image backgrounds. For video editors, this means more flexibility in post-production, allowing them to fill in gaps or extend scenes seamlessly. This kind of innovation highlights how AI's quiet revolutions are changing everything.
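The arithmetic behind extending footage is simple, and a small sketch makes the editor's trade-off visible: the longer the extension and the higher the frame rate, the more frames a tool like Generative Extend must synthesise. This is illustrative reasoning only, not Adobe's implementation.

```python
# Illustrative arithmetic: how many extra frames a generative-extend
# tool would need to synthesise to stretch a clip to a target length.

def frames_to_generate(current_seconds, target_seconds, fps=24):
    """Extra frames needed to reach target_seconds; 0 if already long enough."""
    extra = max(0.0, target_seconds - current_seconds)
    return round(extra * fps)

print(frames_to_generate(4.0, 6.5))  # 60 extra frames at 24 fps
```

Filling a 2.5-second gap at 24 fps means inventing 60 plausible frames, which is why even "small" extensions are a non-trivial generative task.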

The Future of Video Editing

These upcoming tools from Adobe represent a significant leap forward in video creation and editing. They promise to streamline workflows, enhance creativity, and open up new possibilities for content creators. Whether you're a professional video editor or a casual creator, these tools are set to make your life easier and your work more impactful. By typing a vivid description, you can bring a scene to life in a way that was previously only possible with extensive filming and editing.

Comment and Share:

What are you most excited about with Adobe's new generative AI video tools? How do you think they will change the way you create and edit videos? Share your thoughts in the comments below, and don't forget to Subscribe to our newsletter for updates on AI and AGI developments.
