Adobe this week announced that its artificial intelligence (AI) art generation tools, collectively known as Firefly, are being added to Photoshop. Notable for Photoshop is Firefly's Generative Fill tool, which fills in or expands an image based on a text prompt you give it. The AI can also extend an image by guessing how it looks beyond its edges, add or remove objects, generate image-based letters, and recolor vector art.
Firefly for Photoshop is available now in beta when used as part of a Creative Cloud subscription, and there is a beta web app where you can demo some of the features yourself.
I got early hands-on access to Firefly and was impressed by how it can support artists and their work.
What Is Adobe Firefly and How Can I Get It?
Adobe Firefly is a still-growing family of AI-powered generative art tools. If the phrase "generative art" is new to you, see the section further down that unpacks it.
A standalone web-based Firefly Beta app is now available for testing features such as Text to Image, Text Effects, and Generative Fill. To get it, go to the Firefly page, log in with your Creative Cloud account, and sign up. Or you can download the beta Photoshop app to experiment with Generative Fill, though you need a Creative Cloud individual, team, or enterprise license for that.
Adobe's vision for Firefly is to help people expand their natural creativity and boost their ability to express ideas. As an embedded model inside Adobe apps, Firefly will offer generative AI tools made specifically for creative needs, use cases, and workflows.
How Is Adobe Leveraging Firefly in Photoshop?
Photoshop is the first Creative Cloud app to integrate Firefly natively. Note that there are two new releases: Photoshop (v.24.5) and Photoshop Beta (v.24.6). In this article, I focus primarily on the latter. Have a gander at these new features:
Generative Fill (for Photoshop Beta v.24.6 Only)
Generative Fill is the biggest announcement for Photoshop: thanks to the Firefly integration, you can add, extend, or remove content from any image, and the AI automatically matches the existing perspective, lighting, and style. Adobe says Generative Fill expands expressivity and productivity, letting creators use natural language and concepts to generate digital content in seconds.
A photo of a lake before the Generative Fill feature is used on it. The same lake photo after Adobe's Generative Fill feature has expanded it.

Just like regular Photoshop layers and adjustment layers, new generative layers protect your original image by allowing nondestructive edits. So now you can ideate and iterate as fast as you can type in your desires.
New Contextual Task Bar (for Both Versions of Photoshop)
The new Task Bar (reminiscent of the ones built into Adobe’s iPad apps, Illustrator, Photoshop, and Fresco) is not just contextual, but also predictive of your possible next step.
In addition to the Task Bar, other new features introduced to the non-beta version include the following: Enhanced Gradients that offer a live preview and can be edited nondestructively; a Remove Tool that removes an object and refills the background with the surrounding image; and Adjustment Presets that you can use to preview and apply preset filter adjustments.
Adobe's Content Authenticity Initiative
With the emergence of fake news, deepfakes, and copyright infringement, it’s important to note that alongside its AI feature advancements, Adobe is also setting a standard for responsibility with its Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity. The company says it is standing up for accountability, responsibility, and transparency in generative AI and working toward a universal “Do Not Train” Content Credentials tag that will remain associated with a piece of content wherever it’s used, published, or stored, letting creators keep their work out of the content used to train the AI. We say bravo!
Firefly AI Has a Way With Words
How to Improve Your Text-to-Image Results
Though Generative Fill and the Photoshop integration are new announcements, the Firefly suite of AI tools initially launched several weeks ago. Since then, creators have been able to use everyday language (i.e., text prompts) to generate new images and text effects.
As with other AI image generators such as Midjourney and Dall-E, the better defined and more carefully conceived your prompt, the more likely the AI is to produce a result that aligns with your expectations. Remember, it’s your creative vision that guides the AI.
Based on my experiences with Adobe Firefly and other AI generative art tools, such as Microsoft Designer, here are five considerations for brushing up your prompt writing:
1. Precision. The clearer and more specific your directive, the more accurately the AI can interpret your dreams and wishes. AI uses your prompt to traverse its vast expanse of "knowledge"—Firefly is trained on a dataset of Adobe Stock images, along with openly licensed work and public domain content where copyright has expired.
2. Contextual Richness. A prompt spiced up with context can deepen the AI’s understanding and enrich the result. For example, rather than typing “a pretty sunset,” you might try “a sunset that melts into the horizon, as stars begin to twinkle like fireflies.”
With every prompt, Firefly provides four variations, each with additional options, including Show Similar, Create a Generative Fill, Use as Reference Image, Download, and a heart icon. To refine the look, a panel with option categories lets you adjust several characteristics including Aspect Ratio; Content Type, such as Graphic, Photo, or Art; Techniques, such as Pencil Drawing, Stippling, Doodle, or Palette Knife; Effects, such as Bokeh, Isometric, Bioluminescent, or Fisheye; Materials, such as Layered Paper, Clay, Fabric, or Metal; and Themes, such as Graffiti, Low Poly, Cartoon, or Pixel Art.
3. Emotion. Breathe life into your prompt by invoking a specific mood of joy, sorrow, tranquility, or excitement. For example, consider these two prompts I wrote that are similar but generate totally different designs: “a garden party on a spring afternoon” (shown below at left) and "a rainy twilight garden reception lit only by a chromatic braided arc of fireflies” (below right).
AI-generated art using the prompts “a garden party on a spring afternoon” (left) and "a rainy twilight garden reception lit only by a chromatic braided arc of fireflies” (right).

4. Exploration and Experimentation. Great prompt writing is itself an art form. Innovation often bursts forth from the unexpected, so have fun dreaming up extraordinary prompts. An eccentric prompt like “a cacophonic symphony played by a swarm of extraterrestrial fireflies” could result in a masterpiece.
5. Text Effects. Text Effects let you create letters or words with styles, imagery, or textures based on your prompt. The image below shows an example of text that I made with a prompt.
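The five considerations above amount to layering a precise subject with context, mood, and style details. Purely as an illustration (Firefly is driven through its web UI and Photoshop, not through code), here is a hypothetical Python helper showing one way to keep those prompt parts organized before pasting the result into the prompt box:

```python
# Hypothetical helper: assembles a detailed text-to-image prompt from the
# considerations above (precision, contextual richness, emotion, style).
# This is not an Adobe API -- just a way to structure prompt-writing.

def build_prompt(subject, context="", mood="", style_notes=()):
    """Combine a precise subject with optional context, mood, and style notes."""
    parts = [subject]
    if context:
        parts.append(context)
    if mood:
        parts.append(f"evoking a mood of {mood}")
    parts.extend(style_notes)          # e.g., lighting, technique, material
    return ", ".join(parts)

prompt = build_prompt(
    subject="a sunset that melts into the horizon",
    context="as stars begin to twinkle like fireflies",
    mood="tranquility",
    style_notes=("soft bokeh", "warm palette"),
)
print(prompt)
```

Treating each consideration as a separate slot makes it easy to experiment: swap out the mood or a style note, regenerate, and compare the four variations Firefly returns.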
What Is AI for Generative Graphics?
You’re not alone if the swelling tsunami of AI-assisted apps and services has you treading water in a bewildering sea, so let’s quickly skim the surface.
AI, very simply put, is like having a computer or a robot that can "think" and "learn"—in quotes because it can't actually think or learn, but it seems to us humans as if it can. Instead of using a brain, AI uses algorithms, or a set of programmed rules that it follows to process information and make decisions.
Generative art is art created by AI that’s been trained on an extensive corpus of existing art. In practice, for a designer or artist, AI is like a second brain, an intern, or a copilot for bouncing around ideas or organizing information.
Humans are divided on AI and its associated ethical quandaries. Some fear it will take our jobs or take over humanity itself, while others say the only people who will take our jobs are workers who embrace AI. You decide.
Most popular apps and services (Netflix, Zoom, Slack, Amazon, Spotify) have been using AI under the hood for a while, but amid the surging chatter about OpenAI’s Dall-E and ChatGPT and Microsoft's Bing Image Creator, several graphics apps have announced new, prominently positioned AI-assisted capabilities. Canva, Microsoft Designer (powered by Dall-E), and Adobe Express are recent examples that pair design and layout with AI features.
Then there are emerging niche services that seem to do most of the legwork for non-professional designers who want more polished-looking output than they might ordinarily be able to pull off. These interesting new services include the presentation deck designer Decktopus, a bundle of graphics and writing tools in Simplified, and the upcoming free-form, no-code web design assistant Studio AI. However, none seems to glow as brightly for creative and design workflows as Firefly, which Adobe promises will bring more precision, power, speed, and ease directly into the workflows where content is created or modified.
What’s Next for Adobe and Photoshop?
What's coming next for Adobe and Photoshop? Plenty—like interactive positioning of 3D models to generate a photorealistic image; extending the edges or changing the aspect ratio of an image (with a single click); a text-to-brush feature that creates Photoshop and Fresco brushes from your prompts; a sketch-to-image module that will turn simple drawings into full-color images; natural-looking photo mixing; text-to-vector and text-to-pattern modules; and a text-to-template ability that generates editable templates from your detailed text description.
There will even be a Teach button so you can train Firefly on your personal styles or use your own photos as references. Buckle up.
Light My Fire
If a picture is worth a thousand words, Adobe Firefly might ignite a new era in the creative industry with its text-based AI modules. Now you can change the mood, atmosphere, or even the weather in still images or video. With its unprecedented ability to envision imagery and concepts, Firefly invites you to activate your creative superpowers and dream bigger. We look forward to testing it further when it's no longer in beta, so make sure to check back for our findings.