Deepfakes are running rampant as tools to detect them lag behind

Bloomberg • 6 min read
The global market for technology to root out manipulated content is relatively small. Photo: Unsplash
Artificial intelligence is now so powerful it can trick people into believing an image of Pope Francis wearing a white puffy Balenciaga coat is real, but the digital tools to reliably identify faked images are struggling to keep up with the pace of content generation.

Just ask the researchers at Deakin University’s School of Information Technology, outside Melbourne. Their algorithm performed best at identifying altered images of celebrities in a set of so-called deepfakes last year, according to Stanford University’s Artificial Intelligence Index 2023.

“It’s a fairly good performance,” said Chang-Tsun Li, a professor at Deakin’s Centre for Cyber Resilience and Trust who developed the algorithm, which proved correct 78% of the time. “But the technology is really still under development.” Li said the method needs to be further enhanced before it’s ready for commercial use.

Deepfakes have been around, and prompting concern, for years. Former House Speaker Nancy Pelosi appeared to be slurring her words in a doctored video that circulated widely on social media in 2019. About a month later, Meta Platforms Inc. Chief Executive Officer Mark Zuckerberg appeared in a video altered to make it seem like he’d said something he hadn’t, after Facebook refused to take down the Pelosi video.

While the image of the Pope in the puffer coat was a relatively harmless manipulation, the potential for deepfakes to inflict serious damage, from election manipulation to fabricated sex acts, has grown as the technology advances. Last year, a fake video of Ukrainian President Volodymyr Zelenskiy asking his soldiers to surrender to Russia could have had serious repercussions.

Big tech companies as well as a wave of startups have poured tens of billions of dollars into generative AI to claim a leading role in the technology that could change the face of everything from search engines to video games. However, the global market for technology to root out manipulated content is relatively small. According to research firm HSRC, the global market for deepfake detection was valued at US$3.86 billion in 2020 and is expected to expand at a compound annual growth rate of 42% through 2026.
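As a back-of-the-envelope check on that projection (a sketch assuming simple annual compounding from the 2020 base over six years; HSRC’s own model may differ), a 42% compound annual growth rate implies a market of roughly US$30 billion by 2026:

```python
# Rough projection from HSRC's published figures (assumes simple
# annual compounding from the 2020 base; HSRC's model may differ).
base_2020 = 3.86   # US$ billions, deepfake-detection market in 2020
cagr = 0.42        # 42% compound annual growth rate
years = 6          # 2020 -> 2026

projected_2026 = base_2020 * (1 + cagr) ** years
print(f"Implied 2026 market: US${projected_2026:.1f}b")  # ~US$31.6b
```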


Experts agree that too much attention is going to AI generation and not enough to detection, said Claire Leibowicz, head of the AI and Media Integrity Program at the nonprofit organisation The Partnership on AI.

While the buzz around the technology, dominated by applications like OpenAI’s ChatGPT, has reached a fever pitch, executives from Tesla Inc. CEO Elon Musk to Alphabet Inc. CEO Sundar Pichai have warned of the risks of going too fast.

It will be a while before detection tools are ready to be used to fight back against the wave of realistic-looking altered images from generative AI programs like Midjourney, which produced the Pope image, and OpenAI’s DALL-E. Part of the problem is the prohibitive cost of developing accurate detection, and there’s little legal or financial incentive to do so.


“I talk to security leaders every day,” said Jeff Pollard, an analyst at Forrester Research. “They are concerned about generative AI. But when it comes to something like deepfake detection, that’s not something they spend budget on. They’ve got so many other problems.”

Still, a handful of startups such as Netherlands-based Sensity AI and Estonia-based Sentinel are developing deepfake detection technology, as are many of the big tech companies. Intel Corp. launched its FakeCatcher product last November as part of its work in responsible AI. The technology looks for authentic clues in real videos by assessing human traits such as blood flow in the pixels of a video, and can detect fakes with 96% accuracy, according to the company.
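Intel has not published FakeCatcher’s internals in detail, but the blood-flow idea can be illustrated with a toy remote-photoplethysmography (rPPG) check: in genuine video, skin pixels brighten and darken faintly with the heartbeat, so the average colour of a face region should carry a periodic component in the plausible heart-rate band (roughly 0.7–4 Hz). A minimal sketch using synthetic frames and NumPy only; every name here is hypothetical, not Intel’s API:

```python
import numpy as np

def heartbeat_score(frames: np.ndarray, fps: float = 30.0) -> float:
    """Toy rPPG check: fraction of spectral energy the mean green-channel
    signal has inside the plausible heart-rate band (0.7-4 Hz).
    frames: (T, H, W, 3) array of face-region pixels."""
    signal = frames[..., 1].mean(axis=(1, 2))   # mean green value per frame
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)      # ~42-240 beats per minute
    return spectrum[band].sum() / (spectrum[1:].sum() + 1e-9)

# Synthetic demo: a "real" face with a faint 1.2 Hz pulse vs. a flat fake.
rng = np.random.default_rng(0)
t = np.arange(90) / 30.0                        # 3 seconds at 30 fps
pulse = 1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t)
real = rng.normal(120, 2, (90, 32, 32, 3)) * pulse[:, None, None, None]
fake = rng.normal(120, 2, (90, 32, 32, 3))
print(heartbeat_score(real), heartbeat_score(fake))  # real scores far higher
```

A generated face that never pulses in that band would score near zero, which is the intuition behind physiological detectors; production systems like FakeCatcher are, by all accounts, far more elaborate than this sketch.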

“The motivation for doing deepfake detection now is not money; it is helping to decrease online disinformation,” said Ilke Demir, a senior staff research scientist at Intel.

So far, deepfake detection startups mainly serve governments and businesses seeking to reduce fraud; they aren’t aimed at consumers. Reality Defender, a Y Combinator-backed startup, charges fees based on the number of scans it performs. Those fees range from tens of thousands of dollars to millions, covering the cost of expensive graphics processing chips and cloud computing power.

Platforms like Facebook and Twitter aren’t required by law to detect and flag deepfake content on their platforms, leaving consumers in the dark, said Ben Colman, CEO of Reality Defender. “The only organisations that do anything are the ones like banks that have a direct connection to financial fraud.”

That could change as deepfakes become more sophisticated and affect more people. AI has made it easier to replicate someone’s voice, and scams are on the rise: criminals are exploiting such tools to dupe victims into wiring money or approving financial transfers. And a viral song, “Heart on My Sleeve,” which claimed to use AI versions of the voices of Drake and The Weeknd to create a passable copy, has raised legal and creative concerns in the music industry.

Current methods of detecting fake images and videos involve training computers to compare visual characteristics against known examples, and embedding watermarks and camera fingerprints in original works. But the rapid proliferation of deepfakes requires more powerful algorithms and computing resources, said Xuequan Lu, another Deakin University professor who worked on the algorithm.
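Of those approaches, watermarking is the simplest to illustrate. Below is a minimal sketch of the idea, a toy least-significant-bit watermark in NumPy; it is not any vendor’s scheme, and the function names are illustrative only:

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of each pixel."""
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
original = rng.integers(0, 256, (64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, 256, dtype=np.uint8)       # 256-bit signature

stamped = embed_watermark(original, mark)
assert np.array_equal(extract_watermark(stamped, 256), mark)
# Editing those pixels breaks the match, flagging manipulation.
```

A watermark this fragile breaks under any re-encoding, which is one reason real provenance systems pair more robust embeddings with learned detectors and camera-fingerprint analysis of the kind Lu describes.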


And without a commercially available and massively adopted tool to distinguish fake online content from real, there’s plenty of opportunity for bad actors.

“What I see is pretty similar to what I saw in the early days of the anti-virus industry,” said Ted Schlein, chairman and general partner at Ballistic Ventures, who invests in deepfake detection and was an early investor in anti-virus software. As hacks became more sophisticated and damaging, anti-virus software improved and eventually became cheap enough for consumers to download on their PCs. “We’re at the very beginning stages of deepfakes,” which so far are mostly made for entertainment, Schlein said. “Now you’re just starting to see a few of the malicious cases.”

But even if it’s cheap enough, consumers might not be willing to pay for such technology, said Shuman Ghosemajumder, former head of artificial intelligence at F5 Inc., a security and fraud-prevention company.

“Consumers don’t want to do any additional work themselves,” he said. “They want to automatically be protected as much as possible.”
