


Look What You Made Me Do: Why Deepfake Taylor Swift Matters


Taylor Swift’s famous lyric “look what you made me do” took on new meaning recently when scammers used artificial intelligence to create a synthetic version of the singer’s voice that, cobbled together with footage of her standing beside Le Creuset Dutch ovens, falsely offered free cookware sets. This type of fabrication is a “deepfake”: a seemingly real video or audio clip, generated with artificial intelligence, that mimics a person’s face, voice, or both. The level of verisimilitude deepfakes can achieve is both impressive and alarming. Before panicking, though, it’s worth considering the pros, the cons, and the media’s current response to these concerns.

The Downside

With the Le Creuset incident fresh in our minds, it’s not hard to see the problems deepfake technology can present. In addition to Swift, many celebrities, including media mogul Oprah Winfrey, entrepreneur Martha Stewart, actor Tom Hanks, journalist Gayle King, and YouTube personality MrBeast, have been fabricated and used either to scam people or to promote products the celebrity doesn’t actually endorse. But it’s not just celebrities who are at risk of being deepfaked. Ordinary people can be synthesized as well, like Clive Kabatznik, a Florida investor whose “voice” was used by scammers to try to wire money out of his account.

The Upside

Deepfakes may be scary—as new things generally are—but they’re not all bad. They can be used to create more realistic visual effects in movies or to generate realistic simulations for training purposes. Beyond training, there are even broader applications in education; deepfakes can help a teacher deliver engaging lessons that go beyond traditional visual and media formats. Watching a “deepfaked” reenactment or hearing firsthand from a historical figure is far more memorable than reading a textbook or sitting through a lecture. Deepfakes can also support activists and journalists, allowing them to remain anonymous under oppressive or dictatorial regimes.

That being said, one of the most compelling upsides to deepfake technology is presented by Jessica Silbey and Woodrow Hartzog, who postulate that deepfakes don’t create new problems so much as make existing ones worse: “There have been cracks in the system for awhile now, and deep fakes might just be what breaks them wide open and presents an opportunity for us to repair them.” Silbey and Hartzog are particularly bullish on the opportunities for improvement in education, journalism, and representative democracy.

The Safe Side

So, the question then becomes: what do we do to protect ourselves? Media organizations are already being impressively proactive. The Wall Street Journal has a division of 21 journalists whose job is to combat misinformation, particularly deepfakes. The Washington Post has added a team of video experts to its fake news detection team specifically to counter deepfakes. And Reuters is collaborating with Meta to detect deepfakes and even offers a course devoted to debunking them.

Print media aren’t the only ones taking precautions. Internet-based companies are also being proactive. Google has volunteered its datasets of manipulated and non-manipulated videos to the research community. X (formerly known as Twitter) has its own protocols, using a set of four rules to combat deepfakes: identifying manipulated content through notice tweets, warning users about manipulated content before they share it, including links to genuine news articles explaining the manipulation, and removing material that could endanger safety. Meta, with the help of fact-checking organizations, makes its best effort to delete faked materials from its social networks. And, on top of its collaboration with Reuters, Meta has financed an initiative called the Deepfake Detection Challenge.

The Bright Side

Thankfully, as explained by Peter Soufleris, chief executive of IngenID, a voice biometrics technology vendor, “Synthetic speech leaves artifacts behind, and a lot of anti-spoofing algorithms key off those artifacts.” Still, the technology is going to improve. “These tools are becoming very accessible these days,” says Dr. Siwei Lyu, a computer science professor who runs the Media Forensic Lab at the University at Buffalo. “It’s becoming very easy, and that’s why we’re seeing more.” He added that it’s now possible to make a “decent-quality video” in less than 45 minutes. This trend means that defensive mechanisms will need to improve as well, and we can’t count on technology alone to protect us. In April 2023, the Better Business Bureau put out an article telling people how to spot fake celebrity scams.
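To make the “artifacts” idea concrete, here is a toy sketch of one signal-level cue such systems can exploit. Real anti-spoofing algorithms rely on learned features and are far more sophisticated; everything below (the spectral-flatness heuristic, the signal names, the thresholding idea) is an illustrative assumption, not how IngenID or any named vendor actually works.

```python
import numpy as np

def spectral_flatness(signal):
    # Ratio of geometric to arithmetic mean of the power spectrum.
    # Near 1.0: broadband, noise-like audio (natural recordings carry
    # room noise and breath); near 0.0: overly "clean", tonal audio,
    # one crude hallmark of some synthetic speech.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    spectrum = spectrum[spectrum > 0]  # drop zero bins before log
    geo_mean = np.exp(np.mean(np.log(spectrum)))
    return geo_mean / np.mean(spectrum)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000, endpoint=False)  # 1 s at 16 kHz

# Stand-in for a natural recording: a tone plus broadband noise.
natural_like = np.sin(2 * np.pi * 220 * t) + 0.3 * rng.standard_normal(t.size)
# Stand-in for over-smooth synthetic speech: a pure tone.
synthetic_like = np.sin(2 * np.pi * 220 * t)

print(spectral_flatness(natural_like) > spectral_flatness(synthetic_like))  # True
```

A real detector would combine many such features (phase behavior, prosody, vocoder fingerprints) inside a trained classifier; the point here is only that synthetic audio can differ measurably from natural audio in ways software can key off.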

It’s helpful when consumers are vigilant, and even more helpful when the folks behind the tech are also trying to do the right thing. While the deepfake of Taylor Swift brings to light the risks and ethical concerns surrounding deepfakes, deepfake Tom Cruise is attempting to present a model of acceptable deepfaking behavior. Tom Graham, a London-based tech entrepreneur and co-founder of Metaphysic, the company behind @deeptomcruise, asserts: “The technology is moving forward, whether anybody likes it, really.” His company’s goal is to “really, really focus on trying to develop our product in a way” that avoids adding to the harmful deepfakes already being created by others.

Deepfakes are not the first instance of media being potentially harmful. Like all the media that came before them, deepfakes have the potential to do as much good as they do harm. Maybe there’s a reason Swift hasn’t commented on the Le Creuset debacle; it’s possible her lyric of choice is “You need to calm down.”
