Why the AI Behind Deepfakes May Also Be Their Undoing

May 9, 2023

The same AI technology that creates convincingly fake images, audio and videos can also be used to identify content that isn’t real.

Artificial intelligence (AI) can take massive amounts of information and generate new content. While this could transform industries by boosting efficiency and productivity, it can also be put to nefarious use: AI “deepfakes” can spread potentially harmful disinformation that is indistinguishable from reputable content.

Fortunately, the cause of the problem may also be the source of the cure: The same generative AI that churns out phony videos can also be trained to help separate the real from the fake in a deluge of derivative content.  

“While generative technology is abused to commit fraud, spread fake news and execute cyberattacks on private and public organizations, it can also help AI systems identify and flag deepfakes themselves,” says Ed Stanley, Morgan Stanley’s head of thematic research in Europe. “Software that can achieve this will have an especially important role in the online reality of the future.” 

Fighting Fire with Fire 

Deepfakes—digitally manipulated images, audio or video intended to represent real people or situations—aren’t new. But the ease, speed and quality with which they can now be created have elevated the urgency to enact safeguards and devise smart countermeasures.

Though clearly doctored celebrity videos were among the first generation of deepfakes, more recent examples reveal two critical shifts: First, today’s deepfakes can be created in real time, which presents problems for businesses whose data security depends on facial or voice biometrics. Second, hyperrealistic facial movements are making AI-created characters indistinguishable from the people they are attempting to mimic.  

“Traditional cybersecurity software is likely to become increasingly challenged by AI systems,” Stanley says, “so there could be strong investment plays in AI technology directed as training tools to help employees and consumers better decipher misleading versus authentic content.” 

For example, some companies specialize in both creating and detecting deepfakes using large, multi-language datasets. Others use troves of data to build deepfake detectors for faces, voices and even aerial imagery, training their models by generating advanced deepfakes of their own and feeding them back into the training data. Such AI-driven forensic tools analyze facial features, voices, background noise and other perceptible characteristics, while also mining file metadata to determine whether algorithms created the content and, in some cases, to trace it back to the source material.
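
To make the idea concrete, here is a minimal, hypothetical sketch of the “fight fire with fire” approach in Python: a classifier is fit on feature vectors from real media alongside features from deliberately generated fakes, so the detector learns to separate the two. The feature vectors, the synthetic data and the model choice are all illustrative assumptions, not any vendor’s actual pipeline.

```python
# A minimal, illustrative sketch (not any vendor's actual system):
# train a detector on features from real media plus features from
# deliberately generated fakes, so the model learns to tell them apart.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Stand-ins for learned embeddings of faces or voices; a real system
# would extract these from actual and AI-generated media.
real_features = rng.normal(loc=0.0, scale=1.0, size=(1000, 32))
fake_features = rng.normal(loc=0.5, scale=1.2, size=(1000, 32))

X = np.vstack([real_features, fake_features])
y = np.concatenate([np.zeros(1000), np.ones(1000)])  # 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {detector.score(X_test, y_test):.2f}")
```

In practice, detection firms pair classifiers of this kind with the metadata forensics described above, since generator software can leave traces in file headers that no feature extractor would see.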

“Some of the best opportunities seem to lie in applications for safety and content moderation,” says Stanley, “especially as valuations for large language models and leading players in the space have priced out some participants.”

For a more detailed look at disruptive and ambitious technologies that could bring generational returns, ask your Morgan Stanley representative or Financial Advisor for “Moonshots” (Sept. 14, 2022).
