The dark side of GenAI: Safeguarding against digital fraud

Angus McDougall • 5 min read
How are fraudsters using generative AI and what can organisations do to reduce their fraud exposure? Photo: Unsplash

In 2023, YouTube users in Singapore were served an unusual advertisement featuring an interview between Loke Wei Sue, a newscaster on Channel NewsAsia, and Elon Musk, CEO of Tesla and owner of X. Musk, in particular, spoke favourably about a new artificial intelligence (AI)-driven investment app that allowed users to “earn up to US$237 ($312) per hour right away”.

Loke never conducted such an interview, nor did Musk take part in one. Instead, the entire advertisement was created with deepfake technology. In early 2024, deepfake images were even used to extort money from a dozen individuals, including Members of Parliament.

As much as generative artificial intelligence (GenAI) has created exciting new opportunities, it has unfortunately also caught the attention of fraudsters. In fact, in this brave new digital-first world, fraudsters have more tools than ever, and it is set to cost businesses dearly: globally, online payment fraud losses are predicted to rise from US$38 billion in 2023 to US$91 billion in 2028.

The rise of the GenAI fraudster

In the past, organised criminal enterprises had more resources and, thus, posed a higher threat to businesses. However, with GenAI, even the most amateur fraudsters now have easy access to more scalable and increasingly sophisticated types of fraud.

The evidence is in the data. According to Onfido, an Entrust company, 71.7% of the fraud caught in Asia Pacific (Apac) between 2022 and 2023 was considered “easy”, or less sophisticated. The remainder was classed as “medium” (28.24%) or “hard” (0.05%), but the level of sophistication is growing: in the last six months, “medium” and “hard” fraud has grown to 36.4% and 1.4% respectively.


How fraudsters are using GenAI deepfakes

GenAI programmes have made it easy for anyone to create realistic, fabricated content. Take deepfake videos, for example. Fraudsters have started using such videos to try to bypass biometric verification and authentication methods.

This type of attack has surged in recent years. Comparing 2023 with 2022, there’s been a 3,000% increase in deepfake attempts globally. Compounding the situation is the growing popularity of “fraud-as-a-service”, where experienced fraudsters offer their services to others.


Document forgeries

When it comes to document forgeries, there are four types that fraudsters create: physical counterfeits (fake physical documents), digital counterfeits (fake digital representations of documents), physical forgeries (physically altered or edited versions of existing documents), and digital forgeries (digitally altered or edited versions of existing documents).

Businesses operating in Asia see higher document fraud rates (9%) than those in Europe (3.1%) and North America (5.1%), with the most attacked document types being identification cards (IDs; 51%) and Tax IDs (29%). This heightened rate of forgery can be attributed in part to the emergence of websites such as OnlyFakes, an online service that sells the ability to create images of identity documents.

Synthetic identity fraud

Meanwhile, synthetic identity fraud is a type of fraud where criminals combine fake and real personal information, such as national ID details, to create new identities. They can then use these fake identities to open accounts, access credit, or make fraudulent purchases.

What GenAI does is generate fake information at scale. Fraudsters can use AI bots to scrape personal information from online sources, including online databases and social platforms, which can then be collated to create synthetic identities. So effective is synthetic identity fraud that it is projected to generate US$23 billion in losses by 2030.

Phishing


Finally, phishing is a type of social engineering attack often used to steal user data. Fraudsters may reach out to individuals via email or other forms of communication requesting they provide sensitive data or click a link to a malicious website, which may contain malware.

Again, GenAI tools allow fraudsters to create sophisticated and personalised social engineering scams at scale. For example, they could use AI tools to write convincing phishing emails or to automate card cracking. In fact, according to recent research, GenAI was one of the top tools used by bad actors in 2023. WormGPT, in particular, is a malicious AI tool designed to automate the creation of convincing, personalised fake emails and other malicious content.
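On the defensive side, even simple heuristics can catch some of the malicious links such emails carry. As a purely illustrative sketch (the trusted-domain list and the edit-distance threshold below are invented for this example, not any vendor's actual method), a mail filter might flag domains that sit a small edit away from a well-known brand:

```python
# Toy heuristic: flag domains that are a small edit away from a trusted
# domain, a common phishing trick (e.g. "paypa1.com" vs "paypal.com").
# The allow-list and threshold are hypothetical; real filters combine
# many more signals (sender reputation, URL age, content analysis).

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TRUSTED = ["paypal.com", "entrust.com"]  # illustrative allow-list

def looks_like_phish(domain: str) -> bool:
    """Flag domains within 1-2 edits of a trusted name (but not exact)."""
    for trusted in TRUSTED:
        d = edit_distance(domain, trusted)
        if 0 < d <= 2:
            return True
    return False

print(looks_like_phish("paypa1.com"))  # lookalike, flagged: True
print(looks_like_phish("paypal.com"))  # exact match, not flagged: False
```

Exact matches are deliberately skipped (distance 0), so legitimate mail from the trusted domain itself is never flagged by this check.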

Combatting GenAI fraud with AI

We are entering a new phase of fraud and cyberattacks. As such, the best cyber defence systems of tomorrow will need AI to combat the speed and scale of attacks—think of it as an “AI versus AI showdown”.

With the right training, AI algorithms can recognise the subtle differences between authentic and synthetic images or videos, which are often imperceptible to the human eye. Machine learning, a subset of AI, plays a crucial role in identifying irregularities in digital content. By training on vast datasets of both real and fake media, machine learning models can learn to differentiate between the two with high accuracy.
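As a toy illustration of that training idea (every value here is invented: the single “artifact score” feature, the class distributions and the learning rate; real deepfake detectors are deep networks trained on raw pixels and audio, not one hand-picked feature), a minimal classifier trained on labelled real-versus-synthetic samples might look like this:

```python
# Sketch: train a one-feature logistic classifier to separate "real" from
# "synthetic" media, given a labelled dataset of each. The "artifact score"
# feature and its distributions are hypothetical stand-ins for the subtle
# statistical irregularities the article says models learn to spot.
import math
import random

random.seed(42)

def make_dataset(n: int):
    """Simulate n real and n fake samples: (artifact_score, label)."""
    data = []
    for _ in range(n):
        data.append((random.gauss(0.3, 0.1), 0))  # label 0 = real media
        data.append((random.gauss(0.7, 0.1), 1))  # label 1 = synthetic media
    random.shuffle(data)
    return data

def train_logistic(data, epochs=200, lr=0.5):
    """Per-sample gradient descent on logistic loss."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted P(fake)
            grad = p - y                              # dLoss/dlogit
            w -= lr * grad * x
            b -= lr * grad
    return w, b

def predict(w, b, x):
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5 else 0

w, b = train_logistic(make_dataset(400))
test_set = make_dataset(100)
accuracy = sum(predict(w, b, x) == y for x, y in test_set) / len(test_set)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the workflow the article describes: a labelled corpus of real and fake media, a model fitted to separate them, and evaluation on held-out samples; scaling that recipe to images and video is what modern deepfake detectors do.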

Securing digital identities against fraud

As AI-driven attacks continue to rise, businesses must consider AI-powered, identity-centric solutions that protect the integrity and authenticity of digital identities. Such solutions can help combat phishing and credential misuse with biometrics and digital certificates, neutralise deepfakes with AI/ML-driven identity verification, and authenticate customers or employees via trusted digital onboarding. These capabilities will help businesses reduce fraud exposure and stay compliant with standards and regulations.

As we look to the future, it is essential to embrace these innovations not just as a means of defence but as a proactive strategy. With identities protected and the potential for fraud diminished, we pave the way for a secure, more trustworthy digital ecosystem. 

Angus McDougall is the regional vice president for Asia Pacific & Japan at Entrust

© 2024 The Edge Publishing Pte Ltd. All rights reserved.