The Real Future of ‘Deepfake’ Media
With face-swapping technology on the rise, institutions are struggling to distinguish what's real and what's fake—paving the way for a new line of work.
By Sarah Watts
In March 2021, when a series of videos from the user “deeptomcruise” appeared on TikTok, the Internet went ballistic.
Taken at face value, the videos, each less than a minute long, are innocuous—boring, even. In them, a man who appears to be the actor Tom Cruise is seen doing a series of mundane tasks, such as practicing his golf swing and performing a magic trick. In another, he trips over his own feet while walking through a living room and then jumps up and apologizes, bashfully. It isn't until you read the comments—and then watch the videos a few more times—that the clips start to get really eerie. Suddenly, the horrified comments (“This is literally hurting my brain, not knowing who I'm watching,” says one) start to make sense. The person in the videos is a dead ringer for Tom Cruise—but it isn't actually him. It's a simulacrum—a synthetic image known as a “deepfake,” designed to deceive viewers into thinking what they're seeing is really the person being portrayed. And it's really effective.
Despite the shock and outrage over the Tom Cruise deepfakes, synthetic media is actually more common than we realize—and growing more common by the day. While the term was originally associated with nefarious face-swapping in a sexual context, it's become a common pastime to synthesize our voices and likenesses and try to create something (or someone) new—usually for comic effect.
“Because there were understandably very negative connotations to the term 'deepfake,' people started imagining what else could be done with it, and that gave rise to a lot of really wild speculation, particularly in regard to the 2020 US election,” says Henry Ajder, an independent advisor and expert on deepfake technology, formerly the Head of Communications and Research at Deeptrace, a cybersecurity company based in Amsterdam. When digitally-altered videos of high-profile politicians surfaced before the 2020 presidential election, news media breathlessly predicted the end of democracy as we knew it, saying that deepfakes could be used to stoke political divisions or incite violence. “There has been a lot of speculation and perhaps sensationalization over what deepfakes can be used for,” says Ajder.
But not all deepfakes are harmful. In February 2021, a family ancestry company called MyHeritage launched a tool called “Deep Nostalgia,” which lets users upload old family photos and have the still images animated to appear as though they're moving. Deepfake technology can even be used to promote political or humanitarian causes: In late 2020, for example, the gun safety non-profit Change the Ref produced a digital campaign using the deepfake likeness of 17-year-old Joaquin Oliver, a student who was killed in the 2018 shooting at Marjory Stoneman Douglas High School in Parkland, Florida. In the campaign, Oliver's likeness asks viewers to “replace” his vote by electing officials who will work to end gun violence.
Whether deepfakes are used for good or for evil, one thing is clear: deepfake technology is paving the way for an emerging sector of work. More and more, people and organizations, such as news media companies, are scrambling to distinguish between what's real and what isn't—which has given rise to a wave of new startups and hundreds of new jobs, all ready to meet the challenge.
“Deepfakes have started to pose a new concrete threat to face biometrics, as now anybody can appear on camera with their face swapped or re-enacting another person's face,” says Giorgio Patrini, CEO and Chief Scientist at Sensity, a deepfake detection platform founded in 2018. Detection companies like Sensity, Patrini says, have been commissioned by companies in the financial and insurance industries to detect financial fraud—and while financial fraud with deepfake technology is relatively rare in 2021, Patrini says it's likely to become common in the future. “We foresee a large demand for security experts to counter this new threat, now requiring AI skills and knowledge to be applied,” he says.
Meanwhile, deepfake detection agencies are still working out how best to determine whether a piece of media has been faked. And unfortunately, there's no easy fix.
“Detection is arguably the best solution,” Ajder says, referring to software that uses algorithms to flag manipulated media, leaving behind what's authentic. Unfortunately, however, “it's hard to do detection well at scale and incorporate it into the digital infrastructure of our lives,” says Ajder. “Detection systems don't always work, and they're not accurate and reliable enough to deploy, especially for platforms that are processing huge amounts of media every second.”
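To make the idea concrete: a detection pipeline typically scores a video frame by frame with a trained classifier, then aggregates those scores into a verdict. The sketch below illustrates only the final aggregation step; `frame_scores` stands in for the per-frame output of a hypothetical trained model, which is the hard part Ajder describes, and the function name and threshold are illustrative assumptions, not any vendor's actual API.

```python
def classify_video(frame_scores, threshold=0.5):
    """Toy aggregation step for a deepfake detector.

    `frame_scores` is a stand-in for per-frame manipulation
    probabilities produced by a trained model (not shown here).
    The video is flagged when the average score crosses the
    threshold.
    """
    mean_score = sum(frame_scores) / len(frame_scores)
    return mean_score > threshold, mean_score

# Mostly suspicious frames trip the flag...
flagged, score = classify_video([0.9, 0.8, 0.7, 0.6])

# ...while mostly clean frames do not.
clean, _ = classify_video([0.1, 0.2, 0.1, 0.3])
```

Even this trivial step hints at the scale problem Ajder raises: every uploaded video means running a model over thousands of frames, and the threshold trades false alarms against missed fakes.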
Another technical approach is called content authentication, where a photo verification platform like TruePic or Adobe can authenticate the picture or video at its point of capture, essentially fingerprinting the original image and documenting any changes made to the image along the way. But that approach also has its downsides, Ajder says: “People might take photos of photos, or in some cases people who are taking the picture might want to stay anonymous. It might be dangerous for them to label the content they're submitting.”
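The fingerprinting idea can be sketched with a cryptographic hash: compute a digest of the image bytes at the point of capture, then log a fresh digest for every subsequent edit. This is a minimal illustration of the general principle only; the actual TruePic and Adobe systems are far more involved, and the `record_edit` helper below is hypothetical.

```python
import hashlib


def fingerprint(image_bytes: bytes) -> str:
    # The SHA-256 digest changes if even one byte of the image
    # changes, so the original capture can always be distinguished
    # from any later edit.
    return hashlib.sha256(image_bytes).hexdigest()


def record_edit(log: list, image_bytes: bytes, note: str) -> list:
    # Append the new digest plus a description of the change,
    # building an auditable edit history for the image.
    log.append({"digest": fingerprint(image_bytes), "note": note})
    return log


# At capture time:
original = b"...raw image bytes..."
history = record_edit([], original, "captured on device")

# After an edit (simulated here by altering the bytes):
edited = original + b"\x00"
history = record_edit(history, edited, "cropped")
```

The resulting history lets a verifier confirm that a published image matches its recorded chain of digests—and, as Ajder notes, this is exactly where the approach strains: a photo of a photo gets a perfectly valid new fingerprint, and logging provenance at all may endanger an anonymous source.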
The ideal approach, Ajder says, is probably a combination of the two, with some digital literacy thrown in so that individuals can spot tell-tale signs of media manipulation among the less sophisticated deepfakes. As the technology expands, the need for businesses to implement it will continue to grow—and they know it. Before the pandemic, executives who participated in a recent study on the future of work ranked investment in new technologies highest among their strategic priorities. The events of 2020 have only strengthened their resolve: well over half of respondents said that their organizations will step up investments in emerging technologies like cybersecurity once the economic impact of COVID-19 has passed.
For now, however, recognizing deepfakes remains a “cat and mouse” game, with businesses and civilians alike struggling to keep up with the deception—and much of the time, as with the Tom Cruise likeness on TikTok, failing to see through it.