Keys to tackling the deepfake menace

Nurdianah Md Nur • 5 min read
The Content Credentials “CR” icon aims to become a recognisable symbol of transparency as it shows the origin and detailed history of a piece of digital content. Photo: Coalition for Content Provenance and Authenticity

Creating content such as images, audio and videos is becoming easier, even for non-technical users. Given a detailed natural language description, generative AI tools can quickly produce the desired output.

While this benefits business functions like marketing, it can also be used maliciously. Bad actors can easily use generative AI tools to create deepfakes: highly realistic content produced by using AI to manipulate existing media. Such misleading AI-generated content can be spread to influence public opinion, potentially damaging reputations, swaying elections, deepening societal polarisation and inciting geopolitical tensions.

This may sound far-fetched, but identity verification provider Sumsub has reported detecting a 245% year-on-year (y-o-y) increase in deepfakes worldwide. Countries with elections planned for 2024 saw significant y-o-y growth in deepfake incidents, such as South Korea (1,625%) and Indonesia (1,550%).

Surprisingly, deepfake scams are rapidly advancing even in countries without planned elections this year, such as China (2,800%) and Singapore (1,100%). One notable example is an AI-generated deepfake video of Senior Minister Lee Hsien Loong promoting an investment product earlier this year. Scammers mimicked his voice, layered fake audio over actual footage of him delivering the 2023 National Day message, and synchronised his mouth movements with the audio to make the deepfake video believable.



Given the rise of deepfakes, industry leaders at the recent Asia Tech x Singapore 2024 summit discussed ways to prevent generative AI misuse.

The role of content provenance tools

One way of fighting deepfakes is to use digital signatures and content credentials for content provenance, that is, to verify the origin and authenticity of digital content.


For instance, Sony’s recent cameras feature a digital signature system that certifies the authenticity of an image at the point of capture. The system creates a “digital birth certificate”, which is retained throughout revisions and shows whether the image captured was of an actual 3D object or merely a photograph of an image or video. This provides greater assurance of the content’s authenticity, which may be useful for news agencies.
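Conceptually, such point-of-capture certification works like a standard digital signature scheme: the camera hashes the image bytes and signs the digest with a private key held in the device, and anyone with the matching public key can later confirm the file has not been altered. The sketch below illustrates the general technique in Python using the cryptography library; it is a simplified illustration under assumed key handling, not Sony’s actual implementation.

```python
# Illustrative sketch of point-of-capture signing (not Sony's actual system).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real camera, the private key would live in tamper-resistant hardware
# and the public key would be published by the manufacturer.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

image_bytes = b"...raw sensor data..."  # stand-in for the captured file

# At capture time: sign a digest of the image.
digest = hashlib.sha256(image_bytes).digest()
signature = device_key.sign(digest)

# Later, e.g. at a news agency: verify the file against the public key.
try:
    public_key.verify(signature, hashlib.sha256(image_bytes).digest())
    print("Image matches its capture-time signature")
except InvalidSignature:
    print("Image was altered after capture or signature is invalid")
```

Because any change to the image bytes changes the digest, verification fails on a tampered file; provenance systems that permit legitimate edits handle this by having trusted tools append and re-sign the edit history.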

“The issue of deepfake is one of the most serious issues we must tackle regarding principles, conduct, and technology. The way we’re tackling that is by upping the intellectual property of creators,” says Hiroaki Kitano, executive deputy president and chief technology officer of Sony Group Corporation, during a panel discussion.

Embedding content credentials into digital content is another way of restoring trust in the online world. Similar to Sony’s digital signature, content credentials show the origin and detailed history of a piece of content. Users can access that information by hovering over the “CR” icon on the content.

Powered by the Coalition for Content Provenance and Authenticity (C2PA) standard and the Content Authenticity Initiative, content credentials are open source and can be easily integrated into any platform, product, tool or solution. C2PA hopes the CR icon will be so widely adopted that it becomes as ubiquitous and recognisable as the copyright symbol. Major brands and industry leaders are implementing content credentials — including Adobe, Microsoft, Publicis Groupe, Leica, Nikon and Truepic — to bring a new level of recognisable digital content transparency from creation to consumption.
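At its core, a content credential is provenance metadata, such as who created an asset and what edits were made, bound to the asset and cryptographically signed. The snippet below sketches that binding in plain Python; the field names and structure are simplified assumptions for illustration, not the real C2PA manifest format, in which each legitimate edit appends a newly signed entry to the history.

```python
# Simplified illustration of provenance metadata (not the real C2PA format).
import hashlib
import json

def make_manifest(asset_bytes: bytes, creator: str, history: list[str]) -> dict:
    """Bind a provenance record to an asset via its content hash."""
    return {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "history": history,  # e.g. ["captured", "cropped", "colour-corrected"]
    }

def matches_asset(asset_bytes: bytes, manifest: dict) -> bool:
    """Check that the asset still matches the hash recorded in its manifest."""
    return manifest["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()

photo = b"...image data..."
manifest = make_manifest(photo, creator="Example News Desk", history=["captured"])
print(json.dumps(manifest, indent=2))
print(matches_asset(photo, manifest))         # True
print(matches_asset(photo + b"x", manifest))  # False: any edit breaks the binding
```

A viewer that displays the CR icon is, in effect, surfacing this kind of verified record so that consumers can inspect where a piece of content came from and how it has been changed.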

“By attaching the metadata onto their content, organisations can provide greater transparency on their digital content. But we also need to make it [very] clear to consumers that content credentials are available on images. So on LinkedIn, posted images or videos containing C2PA credentials will bear the CR icon,” says Natasha Crampton, chief responsible AI officer of Microsoft.

Collaboration and governance are needed

International collaboration is also essential to combat deepfakes, says Professor Yi Zeng, director of the Brain-inspired Cognitive Intelligence Lab and the International Research Center for AI Ethics and Governance at the Institute of Automation, Chinese Academy of Sciences, during the same panel discussion.


He suggests building a “worldwide deepfake observatory” to drive a greater understanding of the threat through information exchange among countries. A global fact-checker accessible to everyone, he adds, can also help the public learn to recognise deepfakes and prevent the spread of disinformation.

Since deepfakes are a threat across industries and to national security, standards, laws and regulations for AI-generated content are needed to protect against harmful deepfake content. “[But we need to] first have a clear definition of deepfake before establishing standards to regulate it and developing labelling mechanisms to pick out deepfakes,” says Stefan Schnorr, state secretary of Germany’s Federal Ministry for Digital and Transport.

He continues: “It’s also important that discussions [around those standards and mechanisms] are as transparent as possible and include all relevant stakeholders. [This allows us to] promote innovation while upholding ethical standards and safeguarding human rights.”

While the European Union and some countries have established AI laws, Singapore has “no immediate plans to do so”, says Minister for Communications and Information Josephine Teo in her opening address at the summit.

She adds: “One reason is that existing laws and regulations can already address some of the harms associated with AI. Take, for example, AI-generated fake news that is spread online. Regardless of how fake news is produced, as long as there is public interest in debunking it, our laws already allow us to issue correction notices to alert people.”

Updating existing laws can be a more efficient response in some cases, such as sextortion, where someone threatens to distribute intimate images of a victim. “We can all agree that even if an image was not real but rather a deepfake, the distress caused is enough for it to be outlawed. That was precisely what we did when we updated the Penal Code to introduce a specific offence of sextortion. We ensured that sextortion would be illegal, with or without AI,” says Teo.

There is no single solution to combating deepfakes. Instead, combining technologies, regulations, and collaborations across nations and stakeholders is essential for building a trustworthy digital future.
