Adobe is moving deeper into agentic AI with a new Firefly assistant designed to do more than generate images or text on command.
The company said its new Firefly AI Assistant, powered by Adobe’s creative agent, will let users describe what they want in plain language while the system orchestrates and executes complex, multi-step workflows across Creative Cloud apps including Firefly, Photoshop, Premiere, Lightroom, Express, and Illustrator.
The assistant is set to enter public beta in the coming weeks, though Adobe has not yet said whether it will carry separate pricing from Firefly’s existing credit-based plans.
Adobe is pitching one conversational layer across its creative apps
The most important shift here is that this is not just another chatbot inside a design tool.
Adobe said the assistant brings the power of Adobe’s creative tools into a single conversational interface.
TechCrunch described it as a system that can work across apps to complete tasks for users.
Adobe framed the change as a fundamental shift in how creative work is done, arguing that creators will provide the vision, judgment, and creative direction, while the assistant handles orchestration and execution.
That language matters because Adobe is clearly trying to move beyond isolated AI features and toward a workflow layer that can act across multiple products.
David Wadhwani, president of Adobe’s Creativity & Productivity Business, said the company is leading the shift into a new era of agentic creativity, describing Firefly as a “category of one” that combines top models, powerful tools, and a fundamentally new way of creating.
The assistant is built to suggest, execute, and let users step in
Users will be able to control the outputs not only with text prompts but also with buttons and sliders.
The assistant can suggest actions, orchestrate between actions and apps, and execute workflows, while still leaving room for users to interrupt or adjust the process at any time.
One example TechCrunch gave was a product photo set in a forest, where the assistant might surface a simple slider to increase or reduce trees and foliage. Adobe also said the assistant will learn more about a creator’s preferences over time and make suggestions accordingly.
Adobe is also adding skills, or grouped actions that bundle multiple steps into one workflow. The outlet said one example, a social media assets skill, can help adapt images to different platforms by cropping or expanding visuals, optimizing file sizes, and storing the outputs.
That suggests Adobe is aiming this not only at designers experimenting with AI, but also at marketing and content teams trying to move faster across channels.
Firefly itself is getting broader video and image upgrades
The assistant rollout is tied to a larger Firefly expansion. Adobe said Firefly is adding studio-quality sound, advanced color controls, Adobe Stock integration, and more precision image-editing tools.
Firefly’s AI video editor is gaining options to reduce noise in speech, adjust reverb and music, and apply color changes. Adobe also said Firefly now includes more than 30 AI models, with Kling 3.0 and Kling 3.0 Omni joining the lineup.
Adobe is trying to stay ahead of a growing agent race
Competitors such as Canva and Figma are also working on agentic workflows, but Adobe is betting its advantage lies in unifying its already popular creative tools.
Alexandru Costin, Adobe’s vice president of AI and innovation for its creativity and productivity business, said the opportunity is to remove some of the friction in learning Adobe’s large catalog of tools and put that value at customers’ fingertips.
The broader message from Adobe is clear: AI inside creative software is no longer just about generating an image or cleaning up a clip.
It is becoming a layer that can move between apps, combine steps, and carry out more of the work itself.
Whether creators embrace that model may depend on how much control they feel they still have once the assistant starts doing the driving.