Adobe has rolled out upgrades across its Firefly generative AI suite, bolstering its capabilities in image, video, and vector generation, and broadening access through a redesigned web platform and an upcoming mobile app for both iOS and Android.
To start, Adobe has launched its latest text-to-image model, Firefly Image Model 4. According to Adobe, the new model offers improved control, faster performance, and greater image realism, along with the ability to generate images at resolutions up to 2K. A more powerful version, Image Model 4 Ultra, has also been introduced, capable of rendering complex scenes with more detailed elements and finer structures. Alexandru Costin, Adobe’s VP of Generative AI, noted that the company scaled up compute resources significantly during training to enhance the model’s ability to produce more nuanced outputs.
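Adobe also offers Firefly image generation programmatically through its Firefly Services APIs. As a rough illustration of what a 2K text-to-image request might look like from code, here is a minimal Python sketch; the endpoint URL, header and payload field names, credentials, and response shape are assumptions for illustration, not documented values, so the real contract should be taken from Adobe’s developer documentation.

```python
import os
import requests

# Hypothetical endpoint and field names for illustration only; consult the
# official Firefly Services documentation for the actual contract.
FIREFLY_IMAGE_URL = "https://firefly-api.adobe.io/v3/images/generate"

def generate_image(prompt: str, width: int = 2048, height: int = 2048) -> bytes:
    """Request a single image at up to 2K resolution from a text prompt."""
    response = requests.post(
        FIREFLY_IMAGE_URL,
        headers={
            # Credentials are assumed to come from an Adobe developer project.
            "x-api-key": os.environ["ADOBE_CLIENT_ID"],
            "Authorization": f"Bearer {os.environ['ADOBE_ACCESS_TOKEN']}",
        },
        json={
            "prompt": prompt,
            "size": {"width": width, "height": height},
            "numVariations": 1,
        },
        timeout=120,
    )
    response.raise_for_status()
    # Assumes the service responds with a URL to the rendered image.
    image_url = response.json()["outputs"][0]["image"]["url"]
    return requests.get(image_url, timeout=120).content

if __name__ == "__main__":
    png = generate_image("a lighthouse on a rocky coast at dusk, volumetric fog")
    with open("firefly_output.png", "wb") as f:
        f.write(png)
```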
“In just under two years, Adobe Firefly has revolutionized the creative industry and generated more than 22 billion assets worldwide. Today at Adobe MAX London, we’re unveiling the latest release of Firefly, which unifies AI-powered tools for image, video, audio, and vector generation into a single, cohesive platform and introduces many new capabilities,” the company announced in an official statement. “The new Firefly features enhanced models, improved ideation capabilities, expanded creative options, and unprecedented control. This update builds on earlier momentum when we introduced the Firefly web app and expanded into video and audio with Generate Video, Translate Video, and Translate Audio features.”
In addition to the image models, Adobe has made its Firefly Video Model publicly available. Initially launched in a limited beta last year, the tool allows users to generate short video clips from text prompts or static images. The model can simulate camera movements, define specific start and end frames, and render dynamic visual elements at up to 1080p resolution. Complementing this is the new Firefly Vector Model, which is designed for producing editable vector-based artwork. Users can create, iterate, and customize assets like logos, icons, and patterns, bringing new AI capabilities to Adobe Illustrator and other design workflows.
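For a sense of how the video model’s controls described above (text or image prompts, start and end frames, camera movement, 1080p output) might map onto a programmatic request, the sketch below submits a clip job and polls for the result. The endpoint, payload fields, and asynchronous job flow are hypothetical, meant only to illustrate the shape of such an integration rather than Adobe’s actual API.

```python
import os
import time
import requests

# Hypothetical endpoint and payload shape; the real video generation contract
# may differ in paths, field names, and job semantics.
VIDEO_SUBMIT_URL = "https://firefly-api.adobe.io/v3/videos/generate"
HEADERS = {
    "x-api-key": os.environ["ADOBE_CLIENT_ID"],
    "Authorization": f"Bearer {os.environ['ADOBE_ACCESS_TOKEN']}",
}

def generate_clip(prompt: str,
                  first_frame_url: str | None = None,
                  last_frame_url: str | None = None,
                  camera_motion: str = "slow dolly in") -> str:
    """Submit a 1080p clip job and poll until a download URL is available."""
    payload = {
        "prompt": prompt,
        "output": {"width": 1920, "height": 1080},
        "cameraMotion": camera_motion,  # e.g. pan, tilt, zoom
    }
    if first_frame_url:
        payload["keyframes"] = [{"position": "start", "source": {"url": first_frame_url}}]
    if last_frame_url:
        payload.setdefault("keyframes", []).append(
            {"position": "end", "source": {"url": last_frame_url}})

    job = requests.post(VIDEO_SUBMIT_URL, headers=HEADERS, json=payload, timeout=60)
    job.raise_for_status()
    status_url = job.json()["statusUrl"]  # assumed async-job pattern

    while True:
        status = requests.get(status_url, headers=HEADERS, timeout=60).json()
        if status["state"] == "succeeded":
            return status["result"]["video"]["url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "video generation failed"))
        time.sleep(5)
```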
On top of this, the Firefly web app has undergone a major overhaul and now serves as a centralized platform for all of Adobe’s AI models. It also integrates select third-party models, including OpenAI’s GPT image generation, Google’s Imagen 3 and Veo 2, and Black Forest Labs’ Flux image model. This cross-model access lets users toggle between different engines for experimentation, while clearly labeling Adobe’s own models as “commercially safe.” Further integrations are planned, with future support promised for Luma, Pika, Runway, Ideogram, and others.
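Conceptually, toggling between engines means choosing a model per request while keeping the rest of the workflow unchanged, with Adobe’s own models flagged as safe for commercial use. The sketch below shows one way client code could structure that switch; the engine identifiers, the commercial-safety flag, and the stub generators are all illustrative placeholders, not anything Adobe ships.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Engine:
    """Illustrative descriptor for a selectable generation engine."""
    model_id: str
    commercially_safe: bool  # Adobe's own models are the ones flagged safe here
    generate: Callable[[str], bytes]

# Placeholder generators; each would wrap the respective provider's API.
def _firefly(prompt: str) -> bytes:
    return b"<firefly image bytes>"

def _gpt_image(prompt: str) -> bytes:
    return b"<gpt image bytes>"

def _imagen3(prompt: str) -> bytes:
    return b"<imagen 3 image bytes>"

ENGINES = {
    "firefly-image-4": Engine("firefly-image-4", True, _firefly),
    "gpt-image": Engine("gpt-image", False, _gpt_image),
    "imagen-3": Engine("imagen-3", False, _imagen3),
}

def generate(prompt: str, engine_name: str, require_commercial_safety: bool = False) -> bytes:
    """Run the same prompt through whichever engine the user has toggled to."""
    engine = ENGINES[engine_name]
    if require_commercial_safety and not engine.commercially_safe:
        raise ValueError(f"{engine_name} is not flagged as commercially safe")
    return engine.generate(prompt)
```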
This means Adobe users can tap into both video and vector generation, supporting broader use cases from motion design to branding and packaging. Additionally, the integration of third-party models from OpenAI, Google, and Black Forest Labs into the Firefly web app gives users more tools for experimentation and greater flexibility, all within a single platform.
Adobe also allows content created with third-party models to be imported seamlessly into its other applications, such as Photoshop, under the same generative-credit billing system. In addition, Adobe has introduced Firefly Boards, a collaborative canvas that lets users moodboard, remix, and comment on generated or imported content; it is currently in public beta in the Firefly web app.