As part of our new AI Literacy program, Net Literacy is working with other national safety programs to help begin a conversation about artificial intelligence. Here's the first of a two-part article written for FOSI, the Family Online Safety Institute. You can read it below:
BLOG | AUG. 1, 2019
When Is Seeing No Longer Believing?
There’s something very powerful about visual imagery. People have long relied on what they see, which is why we have old clichés like “the camera doesn’t lie” and “a picture is worth a thousand words.” For as long as pictures have been around, tricksters have used them for both fun and profit, taking advantage of the fact that for most people, seeing is believing. In the past, many of these trick and fraud pictures were crudely done and easy to dismiss. But with the advent of artificial intelligence, it’s becoming tougher to tell the difference between what is real and what is fake. We use the Internet to stay current with the news, share photos, and gather information. Because images form the basis of much of what we see online, it’s important to know what to believe.
Fooling people with fake pictures is not a recent phenomenon. Since the 1860s, photographic tricks and hoaxes have been used to dupe the masses, although by today’s standards some of these pictures seem primitive and not very convincing. An interesting article by Jocelyn Sears on Mental Floss discusses seven of the early tricks that were used. Spirit pictures and two-headed portraits seem pretty far-fetched in 2019!
Since around the time the Internet was becoming popular in the 1990s, Photoshop has been used to prank the unsuspecting, and some of these pictures have gone viral, fooling millions of people. Bored Panda has a list of thirty of the most famous fake viral photos that people believed were real. Do you remember receiving any of these fake pictures in your email?
For the last five years, artificial intelligence, or AI, has been in the news, and several websites have recently launched to show how this technology is becoming better at creating realistic pictures that are completely fake. If a fake AI-generated picture is worth a thousand words, let’s look at a website that serves up a rotating gallery of faces, each looking very realistic but all of them computer generated and entirely fake. Click on ThisPersonDoesNotExist and refresh the page a few times. Could you tell that the images were fakes? Most people can’t. AI-generated faces are a striking way to demonstrate how good technology is getting at fabricating images.
Click here to see examples of AI-generated celebrity faces from The Verge.
Okay, now that you have seen how realistic AI-generated pictures can be, here’s a website that challenges you to look at two pictures and pick the one that shows a real person rather than a fake AI-generated image. Click on WhichFaceIsReal to see how well you do. And even if you get most of them right, would you notice a fake if it were just a random profile photo on some site?
An example from http://www.whichfaceisreal.com.
Ready to improve your odds of recognizing a fake AI-generated picture? It’s becoming tougher to do, but Kyle McDonald shows what to look for. After reading his Medium article, you’ll be a lot more successful at choosing between real and fake on the WhichFaceIsReal website! And when you see an image online that doesn’t seem quite right, you’ll be better able to tell whether someone is using AI-generated fake images for fun or profit.
At Net Literacy, we’re optimistic that the thoughtful use of artificial intelligence will help us make our world a better place, although it will also make our world a more complex place, one where critical thinking skills and good judgment will become increasingly important. In my next Good Digital Parenting blog, we’ll discuss deepfakes, an AI-based technology used to produce or alter video content so that it appears to show something that didn’t actually occur. I’ll build a deepfake demo to show how the technology can be used to produce a realistic but completely fake video of people appearing to say things they never actually said.