
Biometrics and generative AI: Friend or foe of cybersecurity?

Nurdianah Md Nur • 5 min read
Malcho shares more about the emerging malware threats and what organisations can do to avoid falling victim to them. Photo: ESET

Biometrics are increasingly being adopted for authentication in sensitive transactions, including financial services. One reason for this is the perception that biometrics are more secure, since they are unique to each individual and permanent.

However, biometric authentication is not entirely foolproof. Cybersecurity researchers from Group-IB have discovered a malware family that steals facial recognition data. Using face-swapping artificial intelligence (AI) services, the stolen facial data is used to create deepfake videos that pass authentication for financial transactions. The mobile malware, dubbed GoldPickaxe, has both Android and iOS versions and targets owners of cryptocurrency wallets and clients of financial services in Southeast Asia.

“Organisations like banks are implementing biometric authentication or facial recognition as an extra layer of security to prevent identity theft and fraudulent activities. Ironically, this makes them a more attractive cyberattack target as they store sensitive biometrics data that are key for digital authentication,” says Juraj Malcho, chief technology officer of cybersecurity firm ESET.

As such, banks need to know where and how they store their customers’ identity data, as well as who is managing it. “Every firm in the chain of enabling financial transactions that leverage biometric authentication — such as banks, payment networks or digital wallets — is responsible for securing customers’ biometrics data, and they need to ensure their software is written in a robust way,” states Malcho, before highlighting the need for cloud security, as the use of cloud platforms and solutions widens the attack surface.

Misusing generative AI


While there is no clear evidence that generative AI is increasingly being used to launch cyberattacks, it is believed that bad actors have been using the technology to support their malicious activities. “Generative AI can help improve the quality of phishing emails, making them more convincing and sophisticated. We also think malware authors are leveraging generative AI to create malware mutations, which lowers the barrier to launching cyberattacks,” says Malcho.

What is certain is the rise of fake generative AI assistants used as lures by info-stealers, or malware designed to steal sensitive user information from affected computer systems. According to ESET’s H1 2024 Threat Report, an info-stealing malware called Rilide Stealer was spotted misusing the names of generative AI assistants, such as OpenAI’s Sora and Google’s Gemini, to entice potential victims.

Malcho shares that while tech companies have been putting up guardrails to prevent malicious use of generative AI, bad actors will always try to bypass security controls. They will manipulate the large language models (LLMs) underlying generative AI systems by feeding them malicious inputs disguised as legitimate user prompts.


“Tech companies are still struggling to control their generative AI solutions. So, they are learning by observation and continuously adding controls to mitigate the risks, such as training smaller models to act as filters or restrictions for LLMs. Beyond tech, they also have to deal with grey areas like censorship and cultural sensitivities that differ by country,” he says.

Building up cyber defence

As generative AI and biometrics become staples of our digital lives, how can banks and other organisations protect themselves from the risks that come with these technologies? Malcho emphasises the need to first provide cybersecurity awareness training so that employees can recognise cyber threats, avoid potentially harmful actions, and take informed steps to protect the business.

Organisations must also adopt a multi-layered defence strategy, as cybercriminals often employ diverse methods to infiltrate systems. One checkpoint in such a strategy could be a security question and answer (such as “What’s your favourite beer?”) for user verification.

“Additionally, they need to know what their IT infrastructure and network segmentation is like, apply multi-factor authentication and the right data access controls on systems, encrypt data, and more. For instance, app developers should not have access to biometrics data. Organisations should also be able to find breaches and mitigate them as fast as possible,” advises Malcho.
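Malcho’s point about data access controls can be illustrated with a minimal role-based sketch, in which developer roles are simply never granted the biometrics resource. The roles, resources and policy table below are hypothetical examples, not any bank’s actual setup.

```python
# Illustrative sketch only: a deny-by-default, role-based access check.
# Roles and resources here are invented for demonstration.

POLICY = {
    "auth-service": {"biometrics", "transactions"},
    "app-developer": {"app-logs", "crash-reports"},
    "fraud-analyst": {"transactions"},
}

def can_access(role: str, resource: str) -> bool:
    """Allow access only if the role's policy explicitly lists the resource."""
    return resource in POLICY.get(role, set())

assert can_access("auth-service", "biometrics")
assert not can_access("app-developer", "biometrics")  # developers are denied
```

The key design choice is deny-by-default: a role absent from the policy table, or a resource absent from a role’s set, is refused without any special-case code.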

The massive volume of rapidly evolving cyber threats calls for the use of machine learning or AI to detect, predict and respond to cyber threats. Malcho says: “Cyber crime-as-a-service (where cybercriminals sell their tools and services to make it easy to launch an attack) is on the rise and will not go away soon. The good news is that those tools or services share the same attack patterns or a mutated version of a known malware. So, machine learning or AI can help automatically detect those threats and respond accordingly.”
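The observation that crime-as-a-service tools reuse the same attack patterns is what makes automated detection tractable. As a rough, illustrative sketch (not ESET’s actual method, and with invented payload strings), a byte n-gram fingerprint can flag a lightly mutated variant of a known malicious payload while leaving unrelated data alone:

```python
# Illustrative sketch: similarity of byte n-gram fingerprints.
# Real products use far richer features and trained models.

def ngrams(data: bytes, n: int = 4) -> set[bytes]:
    """Return the set of n-byte substrings of a payload."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity of two payloads' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

known = b"connect c2.example.net; steal wallet keys; exfiltrate faces"
mutant = b"connect c2.example.net; steal wallet keys; exfiltrate faces v2"
benign = b"open settings; sync photos; update calendar"

assert similarity(known, mutant) > 0.8  # mutated variant is still flagged
assert similarity(known, benign) < 0.2  # unrelated payload is not
```

A mutation that appends or tweaks a few bytes leaves most n-grams intact, so the score stays high; this is the intuition behind fuzzy-hashing approaches to detecting malware families, which ML-based detectors generalise with learned features.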

He adds that combating cybercrime requires a collaborative effort. “There are lots of grey zones [in cybersecurity]. For example, some countries tolerate some cybercriminal activities due to geopolitical issues. Cybersecurity companies like us can’t influence that. Still, we are providing our cyber threat and attack insights, expertise, telemetry and more to law enforcement authorities globally so that they can combine it with data from other agencies and cybersecurity companies to take the necessary actions. Cyber crime-as-a-service is on the rise because it’s still good business [as bad actors are making profits], so there needs to be a clear deterrence for cybercrime.”

© 2024 The Edge Publishing Pte Ltd. All rights reserved.