Last year, Bruce Willis appeared in a commercial for Megafon without ever filming it. James Earl Jones’ voice was used in a Disney spinoff of Star Wars, but the lines were produced after the actor had retired and stopped performing. A video of Elon Musk promoting fake cryptocurrency site Bitvex, which he claimed to have founded, was used to scam investors, but Musk never made the video (or any of the other scam crypto site promotions in which he appears). Just a few months ago, Binance’s chief communications officer led an entire developer conference in which he encouraged people to collaborate with a group of scammers. Only it wasn’t really him.
All of these are examples of so-called “deepfakes,” in which realistic audio and video are produced without the participation of the person appearing in them. Artificial Intelligence (AI) continues to raise the bar on their quality. And while today’s deepfakes are generally one-way presentations, the time is fast approaching when they will be able to interact.
Why should corporate leaders care? When was the last time you jumped on a call or video conference with your CFO or head of Accounts Payable? What if your team got a group voicemail, video message or even a live video conference in which “you” told them to take an action you never requested? Buy a product? Wire money? Make an announcement that would tank your share price to benefit short sellers? What if someone posted edited images of you on social media enjoying your competitors’ products?
Deepfakes come in lots of flavors. Editing and splicing real sound and images to change their meaning is a tactic already used by groups ranging from internet trolls to government disinformation campaigns. Creating entirely “synthetic” voice and video of events that never happened is a newer threat, and as it becomes interactive it will change how we trust what we see and hear.
It all started with modifying existing images and audio. At the end of the day, digital pictures and sound are just data. And as security teams know, data can be stolen, corrupted or changed (they call this the CIA triad, in which attacks affect data confidentiality, integrity or availability). Deepfake integrity attacks have gotten better as AI has gotten better. The more voice and video available online from presentations and investor conferences, the more data there is to train malicious AI. Add to that the introduction of Generative Adversarial Networks, in which one AI model generates fakes while a second learns to detect them, each forcing the other to improve, and we end up with terrifyingly realistic results.
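The adversarial idea is easier to see in miniature. Below is a deliberately tiny, non-neural sketch in Python (every name and number is illustrative, not a real GAN): a “discriminator” scores how close a sample looks to real data, and a “generator” repeatedly nudges its single parameter to raise that score, until its fakes become nearly indistinguishable from the real thing.

```python
# Toy sketch of the adversarial loop behind GANs (not a real neural network).
# "Real data" are numbers near 5.0; the discriminator models their mean, and
# the generator climbs the discriminator's score until its fake blends in.

REAL_DATA = [4.8, 5.1, 5.0, 4.9, 5.2]  # hypothetical "authentic" samples

def discriminator_score(sample, real_mean):
    """Score in (0, 1]: 1.0 means 'looks exactly like real data'."""
    return 1.0 / (1.0 + abs(sample - real_mean))

def train_generator(steps=2000, lr=0.05):
    real_mean = sum(REAL_DATA) / len(REAL_DATA)  # discriminator's view of reality
    g = 0.0  # generator's parameter: the fake sample it currently produces
    for _ in range(steps):
        # Numerical gradient of the discriminator's score with respect to g
        eps = 1e-4
        grad = (discriminator_score(g + eps, real_mean)
                - discriminator_score(g - eps, real_mean)) / (2 * eps)
        g += lr * grad  # generator moves to make its fake score higher
    return g, discriminator_score(g, real_mean)

fake, score = train_generator()
```

In a real GAN both sides are deep networks and both are trained, but the feedback loop is the same: the better the detector gets, the better the forger must become.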
What can be done? First, we need to teach people to “trust but verify.” We already know that a fake email pretending to come from an executive who urgently demands an unusual payment is cause for concern. It’s called Business Email Compromise (BEC) and results in billions of dollars of fraud every year. We tell our teams that if a request seems odd, they need to make a call and confirm it. But what if calls can’t be trusted? The important thing is to check unusual requests using a completely different channel. If the request came in by Zoom, send a text to confirm, not a chat message in the Zoom session. If it came over the phone, send an email.
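The “different channel” rule is simple enough to encode in a help-desk script or policy checklist. A minimal sketch in Python, assuming a hypothetical preference order of channels (all channel names here are made up for illustration):

```python
# Sketch of out-of-band confirmation: always verify an unusual request on a
# channel other than the one it arrived on. The preference order is invented.

PREFERRED_CONFIRMATION = ["text", "email", "phone_call", "in_person"]

def confirmation_channel(request_channel):
    """Return a verification channel distinct from the one the request used."""
    for channel in PREFERRED_CONFIRMATION:
        if channel != request_channel:
            return channel
    raise ValueError("no alternative channel available")

# A request that arrived over Zoom gets confirmed by text, never by Zoom chat;
# a request that arrived by text falls through to email instead.
zoom_confirm = confirmation_channel("zoom")
text_confirm = confirmation_channel("text")
```

The design point is that the attacker who compromised one channel should not be able to intercept the confirmation too, so the fallback list must never return the input channel.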
Second, we can reinforce that messaging with our own actions. When we ask someone in the company to do something a little out of pattern, we can say we will confirm it via email/text/voicemail…any channel but the one being used.
Finally, we can use tech to fight tech. Micro-fingerprint the executive headshots on the website so later misuse can be detected, and hire a brand protection company that monitors for those fingerprints. We can add subtle background sounds to audio conferences to make them harder to use as AI training data. And we can keep inventing and investing in new ways to defend our firms and our brands from the latest methods used to attack them.
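One common fingerprinting idea is a perceptual hash, which survives light edits such as small brightness changes. A toy sketch in pure Python of an “average hash” and a distance check (real brand protection services use far more robust fingerprints; the 4x4 grayscale grids below are made-up stand-ins for resized headshots):

```python
# Sketch of perceptual fingerprinting: an "average hash" marks each pixel as
# brighter or darker than the image mean; near-identical images hash alike.
# Images are assumed to already be small grayscale grids (rows of 0-255).

def average_hash(pixels):
    """Bit list: 1 where a pixel is brighter than the image's average."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return [1 if p > avg else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count of differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[200, 200, 40, 40],
            [200, 200, 40, 40],
            [40, 40, 200, 200],
            [40, 40, 200, 200]]

# A lightly edited copy: per-pixel brightness tweaks, same overall pattern.
edited = [[190, 210, 50, 35],
          [205, 195, 45, 42],
          [38, 44, 198, 204],
          [41, 39, 207, 196]]

distance = hamming_distance(average_hash(original), average_hash(edited))
```

A monitoring service would compute such fingerprints for known headshots, crawl the web for images, and flag any whose distance falls below a threshold, catching reuse even after cropping or recompression that a byte-for-byte comparison would miss.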