Analysis | Anyone can Photoshop now, thanks to the latest leap in AI

A new Generative Fill AI feature can create joyful Photoshop edits and scary deepfakes

The images above have been altered by artificial intelligence. The original image was a vertical portrait of technology columnist Geoffrey A. Fowler. Then Adobe Photoshop’s AI Generative Fill expanded the scene along the left and right sides, removed a person in the background, and applied different hairstyles. (Illustration by The Washington Post; Adobe)

Everyone has heard of Photoshopping, but few people have the skills to actually do it. With artificial intelligence, that’s no longer the case.

We recently turned a portrait of one of us into something unreal, and it took only a few seconds. Opening the image in a new AI-powered version of Adobe Photoshop, we selected the area around the head and typed “add a clown wig” into a box. A curly rainbow of photo-realistic strands sprouted up, blending perfectly with the surrounding trees.

AI is ushering Photoshop into a new era, where you can edit an image in sophisticated and sometimes scary ways without mastering complex software. It’s far from perfect: You can remove an ex from a photo in just two clicks, though the fickle AI could replace that person with another whose face seems to melt away.

But this leap forward in artificial intelligence means that anyone can now get at least a silly job done in Photoshop. The AI tool can both turn images into joyful entertainment and be used to manipulate or even exploit. And that adds new urgency to a question first raised by Photoshop more than 30 years ago: How much longer can we trust what we see?

The hot air balloon in this photo was replaced with a flying mushroom by selecting the balloon and using the prompt “turn it into a floating mushroom.” (Video: The Washington Post)

To take a look at what will happen to your photos, both the ones you see and the ones you take, we tested a beta version of Photoshop with its new AI feature called Generative Fill. It’s Photoshop maker Adobe’s response to a barrage of new AI imaging tools that threaten to make it redundant, including DALL-E 2, Midjourney, and Stable Diffusion.

While other AI services have generated buzz for inventing entirely new images, Generative Fill offers a remarkably intuitive way to edit parts of existing images. The technique is called inpainting: You select what you want to replace and type what should be there instead. It brings AI closer to real photos, and a $10 monthly subscription to Photoshop puts it in a lot more hands. It also goes way beyond fixing blemishes and perfecting vacation photos with existing AI tools from Google and others.

But is AI turning Photoshop into the ultimate deepfake machine? The same AI technology that helped us remove pigeons from snapshots also allowed us to generate a very convincing image of an entirely fictitious fire at the Pentagon. (When such an image recently went viral on Twitter, it briefly moved the stock market.) However, we found significant limitations, both inherent in the technology and intentionally built into Photoshop, that prevent some of the worst potential uses. At least for now.

What kind of Photoshopping can AI do and not do? Let us show you.

What Photoshop’s AI does well

You must be online when using Generative Fill. That’s because Photoshop sends three pieces of information to Adobe’s AI for processing: the text of the prompt you enter, the area you’ve selected to replace, and some of the rest of the image so the new content blends in.
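To make those mechanics concrete, here is a minimal sketch of how this kind of prompt-plus-selection inpainting works in general. It uses the open-source Stable Diffusion inpainting pipeline from the Hugging Face diffusers library, not Adobe’s own model or service, and the filenames and prompt are invented for illustration.

```python
# Hypothetical sketch of prompt-based inpainting with an open-source model
# (not Adobe's system): the three inputs are a text prompt, a mask marking
# the selected area, and the surrounding image to blend with.
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting"
)  # add .to("cuda") if a GPU is available

image = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
mask = Image.open("head_mask.png").convert("RGB").resize((512, 512))  # white = replace, black = keep

result = pipe(
    prompt="a curly rainbow clown wig",  # 1. the text of the prompt
    image=image,                         # 2. the rest of the photo, used for blending
    mask_image=mask,                     # 3. the selected area to be replaced
).images[0]
result.save("clown_wig.png")
```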

All of this information makes Generative Fill extremely effective at removing objects from backgrounds, such as the radio towers in this sunset scene:

The AI is capable of inventing clouds, trees or even entire cityscapes to blend into your original image. You can use Generative Fill to erase the memory of the crowds swarming a vacation destination or even the person you traveled with.

Removing people from images is a breeze even on complex backgrounds.

The same AI can also add objects that never existed into images, like the giant robot you see in our sunset photo below.

When it works well, this saves countless hours of effort: Photoshopping things into images before AI required you to be good at clipping objects, know how to use layers, find a source image to add, and adjust lighting and color.

Maybe you can’t imagine ever needing to add a giant robot to one of your photos. But artists, or even just your family’s designated photographer, might see this as a creative jumping-off point that lets them try out a wild idea. We also found the marvel of instant imagination particularly thrilling for children, three of whom gathered around our computer issuing commands to test.

Most impressively, the AI can make its additions blend in with the angle of the sun, shadows, or even a reflection in the water. It also gives you three variants to choose from, and you can keep changing the prompt and selection area until you find one you like.

Another fascinating use: having the AI expand the original frame or crop of a picture by inventing what the rest of the scene might look like. On social media, people have enjoyed applying this technique to artwork, album covers, and photos. Here’s how AI augmented Vincent van Gogh’s famous bedroom painting:
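That frame-expanding trick is often called outpainting. Below is a minimal, hypothetical sketch of the setup: the original photo is placed on a wider canvas, and a mask marks the empty borders for the AI to invent, which would then be handed to the same kind of inpainting call sketched earlier. The sizes and filenames are made up.

```python
# Hypothetical sketch of preparing an image for outpainting: a wider canvas
# plus a mask whose white regions mark the new borders to be generated.
from PIL import Image

original = Image.open("bedroom.jpg").convert("RGB")    # e.g. 512 x 512
w, h = original.size

canvas = Image.new("RGB", (w * 2, h), "white")         # a canvas twice as wide
canvas.paste(original, (w // 2, 0))                    # center the original photo

mask = Image.new("L", (w * 2, h), 255)                 # white = invent new content here
mask.paste(Image.new("L", (w, h), 0), (w // 2, 0))     # black = keep the original pixels

canvas.save("expanded_canvas.png")
mask.save("expand_mask.png")
```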

What Photoshop’s AI does wrong

For every “Whoa, that’s pretty good!” we got about five “Oh, that’s bad” responses from Photoshop’s AI. But with patience, we could usually produce what we were looking for eventually.

The main problem: Photoshop’s AI isn’t great at generating certain types of objects, which can look silly, half-drawn, or just plain fake.

Take, for example, an image of a cow in a field. When we asked the AI to add a cowboy, we got ... whatever this is:

This poor result is partly a function of the randomness of current generative AI technology, and partly a reflection of the fact that Adobe has focused more on training its systems to create natural-looking images.

“There are some objects where I think it works better than others, but when you start to get a little more surreal, a little crazy, it’s not quite there yet,” Adobe’s vice president of digital imaging, Maria Yap, told us.

Adobe also said its AI model isn’t designed to skew dark. In many of our tests, however, we noticed the AI tended toward the macabre. Here’s an example of an original photo taken from an airplane, where we asked the AI to add an alien spacecraft. Instead, we got a flying monster.

The area you select in the original image makes a big difference in the output. Adobe’s AI is designed to respond to the shape of your selection, and everything inside it gets completely replaced. This means that if you want to add a hat to someone, you should select only the area where the hat would go, ideally in the general shape of a hat.
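As a rough illustration of that selection-shape idea, here’s a tiny sketch that builds a hat-shaped mask above a subject’s head, the kind of selection that would then be filled by an inpainting call like the one sketched earlier. The coordinates and filenames are invented for the example.

```python
# Hypothetical sketch: a mask roughly shaped like the hat you want, sitting
# above the head, rather than a big box around the whole person.
# White pixels mark the area the AI will replace; black pixels are kept.
from PIL import Image, ImageDraw

mask = Image.new("L", (1024, 1024), 0)            # start fully black (keep everything)
draw = ImageDraw.Draw(mask)
draw.rectangle((430, 150, 590, 260), fill=255)    # the hat's crown
draw.ellipse((380, 230, 640, 290), fill=255)      # the hat's brim
mask.save("hat_mask.png")
```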

Here’s what happened when we asked the AI to give funny hats to The Post’s Help Desk team members by simply drawing a box around all five of our heads. Instead of hats, we got new ping-pong-ball heads.

And speaking of people, Photoshop’s AI is just terrible at generating them. Here’s what happened when we asked the AI to add another member to a different Help Desk team photo.

That poor lady. We were never able to get the AI to create a person with a normal-looking face.

Will it be used for evil?

Technology can bring out the best and the worst in people. And already, some are using more complicated AI imaging tools to create images that deceive and exploit: Political campaigns have run ads with AI-generated scenes that never happened. The FBI recently issued a warning about deepfake sextortion schemes.

Is Photoshop’s AI putting the tools of counterfeiting in far more hands? Yes and no.

As proof, we took a photo from a hike in a barren field and typed “add wildfire.” The AI turned it into a very believable picture of a completely fictional fire. (In fact, it was easier to get the AI to add a realistic-looking fire to the image than it was to add realistic wildflowers.)

We were even able to use the AI to remove the watermark we put on images to signal they had been altered by AI. That same capability could be used to strip watermarks from paid professional images, such as those sold by Getty Images or by photographers at a marathon finish line.

But in practical terms, there are also some limits on the potential misuse of AI Photoshop today. First, any area it fills is generated at a maximum sharpness of 1,024 by 1,024 pixels, although Photoshop will scale the result up to fill the hole. That means images will look noticeably blurry if you try to manipulate a large portion all at once.
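Here’s a back-of-the-envelope illustration of why that happens, a hedged sketch rather than anything Adobe actually runs: if the generator can return at most roughly 1,024 by 1,024 pixels and the selected hole is much larger, the fill has to be stretched to fit. The filenames and sizes are hypothetical.

```python
# Hypothetical illustration: stretching a ~1,024 x 1,024 generated fill to
# cover a much larger selection inevitably softens the detail.
from PIL import Image

generated_fill = Image.open("generated_fill.png")             # about 1,024 x 1,024 at best
hole_size = (3000, 2000)                                       # a much larger selected region

stretched = generated_fill.resize(hole_size, Image.LANCZOS)    # roughly 3x upscaling, visibly blurrier
stretched.save("fill_stretched.png")
```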

Adobe says it’s also building some limits into its AI products. First, once this version of Photoshop comes out of beta, it will automatically include so-called content credentials in the files it produces, marking whether an image has been edited with AI. That feature wasn’t available in the version we tested, however.

Adobe also says the images produced by its artificial intelligence are screened for content that violates its terms of service, such as pornography. When it detects a violation, it blocks the prompt from being used.

We found this to be hit-or-miss. Sometimes it was overly sensitive, such as stopping us when we asked to add a UFO to an image. Other times it didn’t seem sensitive enough, such as when we asked to add a baby’s face to an infamous photo of Kim Kardashian that took the internet by storm.

We’re glad Adobe says it’s taking the threat seriously through an industry effort called the Content Authenticity Initiative, but the jury is still out on whether that will be enough. One view of new AI technology is that, oh well, you get the bad with the good. We don’t have to accept that: As human beings, we can decide how to maximize the good and minimize the harm in the tools we create.

