Low-Tech Solutions for Spotting AI-Generated Media

Fake and manipulated images and videos can amplify our toxic us-vs.-them divides by making us feel threatened and angry. When that happens, we’re more likely to believe and share fake and distorted information.

As artificial intelligence programs get better at making images and videos that look real, this problem will get worse. We’ll see more people overreacting to images and videos purposely made to rile us up and trigger animosity toward other people and groups. If we care about reducing toxic polarization in America, we must get in the habit of trying to recognize fake media (and misleading information in general) and help others do the same.

The following piece is by Dan Evon, who writes for RumorGuard, a website by the News Literacy Project that helps people navigate digital news and information. 

In the late 1800s, some of the first photo manipulations were created by hand-drawing tornadoes onto glass and then photographing them against rural backgrounds. Moving into the 20th century, pranksters combined photographs to depict an invasion of monstrous grasshoppers. Roughly a century later, computer software provided digital tools for creating more sophisticated fabrications.

Our ability to create and disseminate fake photos has evolved yet again. Now, with AI image generators, anyone can create convincing digital manipulations with a few simple text prompts. A flood of AI misinformation has already begun, spreading false claims about supposed White House explosions and satanic stores, along with campaign lies about political rivals.

But there is some good news. We know these types of falsehoods are swirling, and we can guard ourselves against them. While the methods of manipulation have become more high-tech, there are low-tech solutions for sussing out what’s real and what’s not. Determining where an image originated, who is sharing it, and, just as importantly, who is not will help keep your human intelligence a step ahead of the artificial. These are all skills you need to be news-literate: to find credible information and separate fact from fiction.

Always consider the source
Simply learning to consider the source of information is one of the most powerful ways to protect yourself from misinformation that’s created or spread with the help of AI. 

All content comes from somewhere, and when we can trace an image back to its source, we often find context that can help put the image into perspective. Many AI images, for example, start on platforms that are dedicated to AI-generated art. These digital artworks aren’t always created to deceive people, but too often they get removed from that original context by bad actors and shared with false claims. The quickest way to find the original source of an image is usually to perform a reverse image search, which can be as simple as uploading a photo into a search engine.   

Considering context clues alongside a viral image is another way to gauge whether something is real or wholly made up. Who is sharing the image? Are they a news source? Are they the only source of the image? Just as it is important to determine who is sharing an image, it’s also important to consider why they might be sharing it. Conspiratorial, partisan, and engagement-bait accounts may have ulterior motives when sharing a piece of content and a greater incentive to spread shocking claims through AI images.

It’s just as important to think about which sources aren’t sharing a particular item. In May 2023, images supposedly showing explosions at the White House and the Pentagon went viral online. If such an event had truly taken place, you would expect dozens of videos and photos to emerge and standards-based news outlets to cover it. Neither happened. This lack of sources was a major red flag when the images first emerged, and soon, credible news outlets published fact-checks that pegged them as hoaxes. When a newsworthy item doesn’t appear in the news, that’s good reason to be skeptical.

Stay skeptical, not cynical
In the wake of AI-generated photos going viral, information experts advise us to take a closer look. With widely shared fakes like the images of former President Donald Trump hugging Dr. Anthony Fauci or of Pope Francis wearing a puffy coat, pay particular attention to the people’s fingers. AI image generators still haven’t mastered the art of crafting a perfect hand. Irregular teeth, unusually smooth surfaces, blurry patches, and surreal background patterns are a few other visual cues that an image may be fabricated.

But take note: Scanning images for clues isn’t foolproof. When you start looking too closely at images with a skeptical eye, there’s a risk you might start seeing things that aren’t there. 

Plus, it’s likely that AI-generated media will get substantially better and become even harder to detect with the naked eye. For these reasons, it’s important to go beyond just scanning images for clues.

A major challenge posed by AI images is their sheer quantity. Fake photos and videos were already commonplace on the internet before the use of AI exploded. But these digital fakes, at least the convincing ones, used to require time and skill to create, which limited how much of this kind of content circulated. AI image generators largely remove both barriers, creating the potential for an exponential proliferation of fake images.

So seeing is no longer believing. But it’s important not to give in to cynicism, because it’s still possible to determine what’s real and what’s not. It’s just become more important than ever to learn how. Fact-checking and news literacy skills will give you the confidence to know when your eyes are deceiving you. 

