Deepfakes Don’t Just Harm Celebs. Consumers Are Also at Risk

Nonconsensual deepfakes featuring celebrity likenesses, whether visual or vocal, are likely to proliferate on major social platforms as generative AI tools become more publicly available. Because images and recordings of celebrities are abundant and easily accessible online, celebrities are likely to be among the first public victims of nonconsensual deepfakes.

Over the past year, more instances have emerged of celebrity visual and voice likenesses being used without permission in ads, promotions and scams targeting consumers. Such scams tend to manipulate the likenesses of trusted celebrities to falsely promote products. Just last week, three high-profile personalities spoke out against AI deepfakes using their likenesses to promote products with which they have no affiliation: Tom Hanks, Gayle King and MrBeast (Jimmy Donaldson).

Entertainment workers are increasingly concerned about generative AI being misused to create misleading deepfakes of celebrity voices and images, according to our VIP+ survey conducted by YouGov. As of September, over 7 in 10 industry professionals were either very or somewhat concerned that generative AI would be used to create misleading voice clones or digital doubles of celebrities, up several percentage points since June.

More generally, deepfakes and other artificially engineered content ranked among U.S. adults’ top concerns related to AI, with 82% saying they were concerned, according to a July 2023 MITRE-Harris Poll survey. Similar percentages believed AI technologies should be regulated and that industry should invest in AI safety measures to protect consumers. Asked about specific risks, consumers’ top three concerns were AI being used for cyberattacks, AI-enabled identity theft and the inability to hold bad actors accountable for misuse.

Of course, consumer scams circulating on social platforms that use deepfake celebrity likenesses are just one of the potential misuse cases that could emerge from generative AI. But such scams are likely to become far more scalable and convincing as scammers adopt generative AI tools that are themselves increasingly accessible and powerful. Because such scams directly harm consumers, or at a minimum erode their trust, they can also damage talent reputations.

In April, the Better Business Bureau released a scam alert about celebrity impersonations becoming more realistic and convincing with deepfake technology. The FTC also recently published a blog post on consumers’ AI concerns, arguing that romance scams and financial fraud could be “turbo-charged” by generative AI. In March, the agency warned advertisers that misleading consumers with deepfakes can lead to enforcement action.

Data on AI-enabled scams is still scarce, but the consumer risk posed by deepfake scams shouldn’t be regarded as minimal.

Among younger consumers, fraud losses reported to the FTC disproportionately originate on social media: per recent FTC data, social media accounted for 38% of money lost to fraud by people ages 20-29 and 47% by those ages 18-19. Consumer losses from scams originating on social media have totaled $2.7 billion since 2021.

Among scams originating on social media in the first half of 2023, the most frequently reported involved purchases of fake or undelivered products and fake investment opportunities, accounting for 44% and 20% of reports, respectively. But the most money was lost to investment scams (53%) and romance scams (14%).

While celebrity deepfakes have been used to falsely promote existing products, they can just as easily be used to promote fake ones. Investment and romance scams using celebrity likenesses have also surfaced. Last year, the elderly Japanese artist Chikae Ide lost about $500,000 to a romance scammer posing as Mark Ruffalo, and deepfakes of Elon Musk have commonly been used to promote fake cryptocurrency investment opportunities.

Deepfake scams are increasingly realistic, with many consumers already indicating they’re not sure if they could distinguish an AI voice clone from the voice of the real person, per a McAfee survey.

Unfortunately for talent, consumer scams featuring deepfake actor likenesses are among the least immediately controllable instances of generative AI misuse, even as actors contend with studios over acceptable uses of the technology to create or manipulate their images or voices in productions.

That lack of control has led many to suggest that actors will be among the first to independently seek more robust legal means to own, control and protect their identities and digital likenesses. Remedies are being developed, but their real-world effectiveness remains murky, as we discussed in our recent “Generative AI & Entertainment Part 2” report.

As part of their own content moderation efforts, social media companies should begin integrating detection capabilities to automatically label posted content that contains AI-generated material, take such content down or make it easier for deepfake victims to report misuse of their likenesses. Major social platforms, including Meta, TikTok, X, Snapchat and Reddit, already have policies against misleading manipulated and AI-generated media, though those policies are challenging to enforce.

For now, both talent and consumers still face a lack of meaningful defenses against celebrity deepfake scams.