New Capabilities in Photoshop AI
Adobe has unveiled a range of new AI capabilities for its leading graphics editing software, Photoshop. These include:
- The ability, for the first time, to generate AI images from a blank canvas within Photoshop.
- The option to replace a photo's background with AI-generated content.
- The ability to supply reference images whose style the AI will mimic.
- An option to detect people in a photo (such as tourists around a landmark) and remove them automatically.
Adobe is also incorporating generative AI capabilities into other applications in its suite, including Lightroom and the publishing package InDesign.
Firefly Image 3: Adobe’s Upgraded AI Engine
These new features are accompanied by an upgrade to Adobe’s AI engine, now named Firefly Image 3. The company asserts that the new engine renders lines and structures more effectively and expands the variety of images the AI can produce.
Creating Images from Text Prompts
One of the most notable new AI features in Photoshop is the ability to generate images from scratch using text prompts. Previously, elements could only be added to existing images; now Adobe lets customers start with a blank canvas and simply type a text prompt describing the image they want the AI to create.
Automatic Background Removal and Replacement
Automatic background removal has been a feature of Photoshop for a while, but now customers can generate AI replacements. For instance, you might have a photo of a dog lying on the grass, then choose the option to remove the background and generate a beach surrounding.
Photoshop offers three alternative backgrounds, each adjusted to the lighting, size, and positioning of the subject. In the case of the dog, for example, the sand on the beach should form around the dog’s paws and body.
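At its core, swapping a background means keeping the subject's pixels and letting the generated backdrop show through everywhere else. The sketch below (not Adobe's code; images are simplified to nested lists of RGBA tuples, with alpha 0 marking removed-background pixels) illustrates that compositing step — the lighting and placement matching Adobe describes is far more sophisticated:

```python
# Minimal alpha-compositing sketch: overlay a cut-out subject on a new
# background. Opaque subject pixels win; transparent ones reveal the backdrop.
def composite(subject, background):
    """Merge two equally sized H x W grids of (r, g, b, a) tuples."""
    out = []
    for srow, brow in zip(subject, background):
        out.append([s if s[3] > 0 else b for s, b in zip(srow, brow)])
    return out

# Demo: a 2x2 "subject" with one opaque red pixel over a solid blue backdrop.
red, clear, blue = (255, 0, 0, 255), (0, 0, 0, 0), (0, 0, 255, 255)
subject = [[red, clear], [clear, clear]]
background = [[blue, blue], [blue, blue]]
result = composite(subject, background)
```

After compositing, `result` keeps the red subject pixel in the top-left corner while the other three positions show the new blue background.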
Improving Image Resolution
One aspect that hasn’t improved since the previous generation of Firefly AI is the resolution of the images created. Whether you’re creating images from scratch or generating new backgrounds, the created image will peak at around 1,500 x 1,500 pixels in size. This could make it challenging to use created images at full-page size in magazines, for example. If the image in which a background is being replaced is larger than 1,500 x 1,500, the created image will effectively be stretched.
Adobe’s CTO, Ely Greenfield, stated that the company is working to enhance resolution, but it becomes a matter of balancing cost and processing times.
He mentioned three ways the company could improve the resolution of created images. The first would be with “more horsepower—just throw more data at it, more compute at it.” However, Greenfield admitted that extra computing power “gets very expensive, very quickly” and increases the amount of time it takes to create images considerably.
Alternatively, the company could apply upscaling. “We can separate the task of generating [images] from the task of making it detailed,” he said.
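The separation Greenfield describes — generate small, then add detail — can be illustrated with the crudest possible second stage. This is a hedged sketch, not Adobe's implementation: real products use learned super-resolution models, and plain nearest-neighbour resampling stands in for that stage here.

```python
# Stage 1 would be generation at a capped resolution; stage 2 upscales it.
# Nearest-neighbour resampling: each source pixel becomes a factor x factor block.
def upscale(pixels, factor):
    """Scale an H x W grid of pixel values up by an integer factor."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(factor)]  # widen each row
        for _ in range(factor):                         # then repeat it
            out.append(list(wide))
    return out

small = [[1, 2], [3, 4]]   # stand-in for a low-resolution generated image
big = upscale(small, 2)    # 4x4 result
```

The appeal of this split is that the expensive generative model only ever runs at its native resolution, while the upscaler handles the pixel count.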
The third method would be to create images piece by piece instead of as one complete image. “We’re looking at all three of those [methods],” Greenfield claimed.
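The piece-by-piece approach amounts to covering a large canvas with overlapping tiles that a fixed-size generator can fill one at a time, then blending the seams. The sketch below (a hypothetical illustration, assuming the roughly 1,500-pixel cap mentioned above) computes such a tiling; the generation and blending steps are omitted:

```python
# Compute overlapping tile boxes covering a canvas larger than the
# generator's maximum output size. Overlap leaves room for seam blending.
def tile_coords(width, height, tile=1500, overlap=100):
    """Return (left, top, right, bottom) boxes covering the canvas."""
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    if xs[-1] + tile < width:          # make sure the right edge is reached
        xs.append(width - tile)
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if ys[-1] + tile < height:         # make sure the bottom edge is reached
        ys.append(height - tile)
    return [(x, y, min(x + tile, width), min(y + tile, height))
            for y in ys for x in xs]

boxes = tile_coords(3000, 3000)  # a 3000 x 3000 canvas needs several tiles
```

A canvas that fits within one tile yields a single box, so small images take the fast path unchanged.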
The Evolution of Firefly AI
Even though image resolution still needs improvement, the quality of the images created by Firefly AI has improved almost beyond recognition since it was first unveiled a little over a year ago.
When Firefly was first released, faces were often a disfigured mush, hands looked deformed and text was impossible to render. All three have been vastly improved, along with the overall quality of created images.
Greenfield explains how some improvements simply required better training data. Adobe insists that its AI has only been trained on “commercially safe” images, such as photos stored in its stock library. That can create problems when it comes to creating specific types of images.
For example, Adobe’s stock image library has relatively few images of crowds of people, because photographers are required to get model release forms signed by everyone that appears in such images. “When we do have crowds, it’s people facing away from the camera,” said Greenfield. “So with the first version of Firefly, if you tried to get a general image of crowds, you could get it, but they were always facing away from the camera.”
Greenfield says the company has also put a lot of focus on “prompt adherence”, ensuring that the AI delivers what people are asking for. “In the original days of Firefly, if you asked for a hippo riding a boat, you might get a hippo on a boat, or you might get a boat on a hippo.”
“Firefly has got much better at that, but it’s still an area we keep investing in. Does the model understand prepositions? Does it understand associations with color? How deeply can it understand the description of the text?”
The AI improvements to Photoshop and Adobe’s other Creative Cloud apps are being announced at Adobe Max, which starts in London today.