By B.N. Frank
Artificial intelligence (AI) technology can be biased, hacked, and privacy-invasive (see 1, 2, 3, 4, 5, 6, 7, 8). It can replace human jobs, make errors (see 1, 2, 3, 4, 5, 6, 7), and make life more difficult for people forced to work with it rather than with other people. Warnings about AI-created “deepfake” images are not new; however, they just became much scarier.
From Ars Technica:
AI image generation tech can now create life-wrecking deepfakes with ease
AI tech makes it trivial to generate harmful fake photos from a few social media pictures.
If you’re one of the billions of people who have posted pictures of themselves on social media over the past decade, it may be time to rethink that behavior. New AI image-generation technology allows anyone to save a handful of photos (or video frames) of you, then train AI to create realistic fake photos that show you doing embarrassing or illegal things. Not everyone may be at risk, but everyone should know about it.
Photographs have always been subject to falsification: first in darkrooms with scissors and paste, then in Adobe Photoshop, pixel by pixel. But pulling off a convincing fake took a great deal of skill. Today, creating convincing photorealistic fakes has become almost trivial.
Once an AI model learns how to render someone, their image becomes a software plaything. The AI can create images of them in infinite quantities. And the AI model can be shared, allowing other people to create images of that person as well.
John: A social media case study
When we started writing this article, we asked a brave volunteer if we could use their social media images to attempt to train an AI model to create fakes. They agreed, but the results were too convincing, and the reputational risk proved too great. So instead, we used AI to create a set of seven simulated social media photos of a fictitious person we’ll call “John.” That way, we can safely show you the results. For now, let’s pretend John is a real guy. The outcome is exactly the same, as you’ll see below.
In our pretend scenario, “John” is an elementary school teacher. Like many of us, over the past 12 years, John has posted photos of himself on Facebook at his job, relaxing at home, or while going places.
Using nothing but those seven images, someone could train AI to generate images that make it seem like John has a secret life. For example, he might like to take nude selfies in his classroom. At night, John might go to bars dressed like a clown. On weekends, he could be part of an extremist paramilitary group. And maybe he served prison time for an illegal drug charge but has hidden that from his employer.
We used an AI image generator called Stable Diffusion (version 1.5) and a technique called Dreambooth to teach AI how to create images of John in any style. While our John is not real, someone could reproduce similar results with five or more images of any person. They could be pulled from a social media account or even taken as still frames from a video.
The training process—teaching the AI how to create images of John—took about an hour and was free thanks to a Google cloud computing service. Once training was complete, generating the images themselves took several hours—not because generating them is slow but because we needed to sort through many imperfect pictures (and use trial-and-error in prompting) to find the best ones. Still, it’s dramatically easier than attempting to create a realistic fake of “John” in Photoshop from scratch.
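The article doesn’t include the authors’ scripts, but the generation step can be approximated in a few lines of Python with the Hugging Face diffusers library. In this sketch, “./john_model” is a hypothetical path to the fine-tuned checkpoint produced by the Dreambooth step (explained further below), and “sks person” is an assumed placeholder token for the subject:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical path to a Dreambooth fine-tuned Stable Diffusion 1.5 checkpoint;
# "sks person" is the placeholder token the fine-tune is assumed to use.
pipe = StableDiffusionPipeline.from_pretrained(
    "./john_model", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of sks person as a medieval knight, photorealistic"

# Generate several candidates with different seeds; most runs produce flawed
# images, so sorting through many outputs (as the authors describe) is normal.
for seed in range(8):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"candidate_{seed}.png")
```

Looping over seeds like this is what “sorting through many imperfect pictures” looks like in practice: most outputs have visible artifacts, and only a few are convincing.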
Thanks to AI, we can make John appear to commit illegal or immoral acts, such as breaking into a house, using illegal drugs, or taking a nude shower with a student. With add-on AI models optimized for pornography, John can be a porn star, and that capability can even veer into CSAM territory.
We can also generate images of John doing seemingly innocuous things that might still be personally devastating to him, such as drinking at a bar after pledging sobriety or spending time somewhere he is not supposed to be.
He can also be put into fun and fantastic situations, like being a medieval knight or an astronaut. He can appear young or old, obese or skinny, with or without glasses, or wearing different outfits.
The synthesized images are not perfect. If you look carefully, a knowledgeable person can spot them as fakes. But the tech that creates these images has been progressing rapidly, and it may soon be completely impossible to tell the difference between a synthesized photo and a real one. Yet even with their deficiencies, any of these fake images could plant devastating doubts about John or potentially ruin his reputation.
You can see many examples of people using this same technique (with real people) to create whimsical, artistic profile photos of themselves. And commercial services and apps like Lensa have recently emerged that promise to handle the training for you. What they don’t show you are the potential negative effects of this technology when someone uses another person’s face without consent.
How does it work?
If you haven’t been paying attention to the rapid progress in AI image generators recently, seeing what we’ve pulled off above might be very alarming. Basically, computer scientists have figured out how to generate new photorealistic images of anything you can imagine by teaching AI using real photos, and the technology has accelerated rapidly over the past year.
The tech has been controversial because, aside from photos, it has also allowed people to generate new artwork that imitates existing artists’ work without their permission.
One of the most impactful AI image generators is called Stable Diffusion. It’s a deep-learning image synthesis model (a fancy term for AI software) that can generate completely new images from text descriptions. It can run locally on a Windows or Linux PC with a beefy GPU, on a Mac, or in the cloud on rented computer hardware.
With financial support from Stability AI, an academic organization called CompVis trained Stable Diffusion’s AI model using hundreds of millions of publicly accessible images downloaded from the Internet. Stability AI released Stable Diffusion as open source software on August 22, 2022, meaning anyone can use it for free, and it has become integrated into a growing number of commercial products.
Through intensive training, Stable Diffusion’s neural network has learned to associate words with the statistical relationships among pixel positions in images. As a result, you can give Stable Diffusion a text prompt, such as “Morgan Freeman in a classroom,” and you’ll get back a completely new image of Morgan Freeman in a classroom.
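As a concrete illustration (ours, not the article’s), here is a minimal text-to-image call against the publicly released Stable Diffusion v1.5 weights using the Hugging Face diffusers library:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the public Stable Diffusion v1.5 weights; a consumer GPU with
# roughly 8 GB of VRAM is typically enough at half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The model saw many photos of famous people during training,
# so no fine-tuning is needed for a prompt like this.
image = pipe("Morgan Freeman in a classroom").images[0]
image.save("freeman_classroom.png")
```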
Making images of Morgan Freeman is easy because there are probably hundreds of photos of him in the data set used to train Stable Diffusion. It already knows what Morgan Freeman looks like. But if you want it to make images of an average person like John, you need to give Stable Diffusion some extra help.
That’s where Dreambooth comes in. Announced on August 30, 2022, by Google researchers, Dreambooth uses a special technique to teach Stable Diffusion’s AI model new subjects through a process called “fine-tuning.”
“Today, along with my collaborators at @GoogleAI, we announce DreamBooth! It allows a user to generate a subject of choice (pet, object, etc.) in myriad contexts and with text-guided semantic variations! The options are endless. (Thread)
webpage: https://t.co/EDpIyalqiK”
— Nataniel Ruiz (@natanielruizg), August 26, 2022
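The full Dreambooth recipe adds a prior-preservation loss (and optionally fine-tunes the text encoder) so the model doesn’t forget what a generic “person” looks like. Stripped to its core, though, the fine-tune is an ordinary diffusion training loop over the handful of subject photos, bound to a rare placeholder token. The sketch below assumes the Hugging Face diffusers and transformers libraries, a hypothetical “john_photos” folder, and a GPU with enough memory for a full-precision U-Net update; the hyperparameters are illustrative:

```python
import torch
import torch.nn.functional as F
from pathlib import Path
from PIL import Image
from torchvision import transforms
from diffusers import (AutoencoderKL, DDPMScheduler,
                       StableDiffusionPipeline, UNet2DConditionModel)
from transformers import CLIPTextModel, CLIPTokenizer

base = "runwayml/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(base, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(base, subfolder="text_encoder").to("cuda")
vae = AutoencoderKL.from_pretrained(base, subfolder="vae").to("cuda")
unet = UNet2DConditionModel.from_pretrained(base, subfolder="unet").to("cuda")
scheduler = DDPMScheduler.from_pretrained(base, subfolder="scheduler")

# Only the denoising U-Net is fine-tuned here; VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)

# "john_photos/" is a hypothetical folder holding the subject photos.
preprocess = transforms.Compose([
    transforms.Resize(512), transforms.CenterCrop(512),
    transforms.ToTensor(), transforms.Normalize([0.5], [0.5]),
])
photos = [preprocess(Image.open(p).convert("RGB"))
          for p in sorted(Path("john_photos").glob("*.jpg"))]

# Bind the subject to a rare placeholder token ("sks"), per the Dreambooth paper.
ids = tokenizer("a photo of sks person", padding="max_length",
                max_length=tokenizer.model_max_length, truncation=True,
                return_tensors="pt").input_ids.to("cuda")
with torch.no_grad():
    text_embeddings = text_encoder(ids)[0]

unet.train()
for step in range(800):
    pixels = photos[step % len(photos)].unsqueeze(0).to("cuda")
    # Encode the photo into latent space, add noise at a random timestep,
    # and train the U-Net to predict that noise (the standard diffusion loss).
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps, (1,), device="cuda")
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=text_embeddings).sample
    loss = F.mse_loss(pred.float(), noise.float())
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Assemble and save a full pipeline so the fine-tuned model can be reloaded
# later with StableDiffusionPipeline.from_pretrained("./john_model").
pipe = StableDiffusionPipeline.from_pretrained(
    base, unet=unet, text_encoder=text_encoder, vae=vae, tokenizer=tokenizer
)
pipe.save_pretrained("./john_model")
```

In practice, most people run the maintained Dreambooth example script from the diffusers repository rather than a hand-rolled loop like this; it adds prior preservation, mixed precision, and the memory optimizations that make the free cloud-GPU run described above feasible.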
Activist Post reports regularly about AI and other controversial technologies. For more information, visit our archives.
Top image: Pixabay