Adobe continues adding new features to Firefly, its AI image generator, which now supports prompts in over 100 languages.
The Adobe Firefly beta launched on March 21, 2023, with text-to-image and text effects capabilities. Both features drew attention, the first for its training data and the second for its novelty. Adobe didn’t stop there, expanding both its plans for Firefly and its current capabilities.
Adobe Firefly features
Firefly stands out in a sea of AI image generators because Adobe trained its model on licensed images from Adobe Stock. Eventually, you’ll be able to use Firefly for commercial work without fear of copyright infringement claims.
Adobe’s text-to-image tool is approachable, providing user interface controls for aspect ratio, content type, style, tone, lighting, and composition. While Midjourney requires you to learn parameter codes and Stable Diffusion users must type out specific styling instructions, Firefly lets you cycle through options with a few clicks.
The text effects feature lets you pick fonts and type words to use as a template for Firefly image generation. It’s a bit like ControlNet but with a targeted approach to make fanciful or dramatic display text effortless.
Generative recolor works on vector art, simplifying palette choices by letting Adobe’s AI create four color variations based on your prompt. You can save the result as a Scalable Vector Graphics (SVG) file for use in Illustrator or other vector art apps.
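Because SVG stores colors as plain attributes, recoloring vector art amounts to swapping fill values. Here's a minimal Python sketch of that idea; the sample SVG, palette mapping, and `recolor` helper are hypothetical illustrations, not Adobe's actual pipeline, which isn't public:

```python
import xml.etree.ElementTree as ET

# Toy vector art: two shapes with explicit fill colors.
SVG = """<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <rect x="0" y="0" width="50" height="50" fill="#ff0000"/>
  <circle cx="75" cy="75" r="20" fill="#00ff00"/>
</svg>"""

def recolor(svg_text, palette):
    """Return a copy of the SVG with each fill color swapped per `palette` (old -> new)."""
    root = ET.fromstring(svg_text)
    for el in root.iter():
        fill = el.get("fill")
        if fill in palette:
            el.set("fill", palette[fill])
    return ET.tostring(root, encoding="unicode")

# One hypothetical "variation": map the original palette to a new one.
variation = recolor(SVG, {"#ff0000": "#1d3557", "#00ff00": "#e63946"})
```

A generative tool like Firefly's would choose the replacement palette from your prompt rather than a hand-written mapping, but the underlying edit to the SVG is the same kind of attribute swap.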
Generative fill is one of the most recent additions to Firefly. You can now erase portions of an image and prompt the AI to fill in the blanks. While this might be the least innovative feature, trailing behind Dall-E’s inpainting and Leonardo AI’s canvas feature, it’s an important addition to Firefly. Adobe provides a helpful UI for refining the selection.
100 prompt languages, 20 UI languages
While Adobe Firefly supports 100 languages in prompts, the user interface is currently available in only 20. That means you can write image-generation prompts in your preferred language, but to understand the labels on the controls, you must pick one of the 20 languages available for the UI.
Adobe said the Firefly user interface is now available in German, French, Japanese, Spanish, Portuguese, and more. This makes it a better and friendlier solution for Adobe’s global user base.
More to come
Adobe’s Firefly website lists 3D-to-image and Extend image as features yet to arrive. There could be many more coming from the leading company in tools for commercial art and professional design.
We recently discussed Adobe’s 3D-to-image research called Project Gingerbread. It sounds similar to Stable Diffusion’s Blender plugin that lets you create a scene in a 3D modeling app, then generate imagery that matches your text prompt.
Extend image sounds like a variation on Dall-E’s outpainting and Midjourney’s new zoom feature. Both of these features allow you to expand the edges of an image.
With Dall-E, outpainting increases the resolution of the image. Midjourney keeps roughly the same resolution as the original, seemingly scaling down the center to make room at the edges. It’s unknown which method Adobe will choose, but adding pixels is usually preferable.
Further down the list, Adobe is exploring Personalized results, Text to vector, Text to pattern, Text to brush, Sketch to image, and Text to template, so plenty of Firefly updates are in progress.