Biometrics are increasingly being adopted for authentication in sensitive transactions, including financial services. One reason for this is the perception that biometrics are more secure since they are unique to each individual and permanent.
However, biometric authentication is not entirely foolproof. Cybersecurity researchers from Group-IB have discovered a malware family stealing facial recognition data. Using face-swapping artificial intelligence (AI) services, the stolen facial data is used to create deepfake videos for the authentication of financial transactions. The mobile malware, dubbed GoldPickaxe, has both Android and iOS versions and focuses on owners of cryptocurrency wallets and clients of financial services provided in Southeast Asia.
“Organisations like banks are implementing biometric authentication or facial recognition as an extra layer of security to prevent identity theft and fraudulent activities. Ironically, this makes them a more attractive cyberattack target as they store sensitive biometrics data that are key for digital authentication,” says Juraj Malcho, chief technology officer of cybersecurity firm ESET.
As such, banks need to know where and how they store their customers’ identity data as well as who is managing it. “Every firm in the chain of enabling financial transactions that leverage biometric authentication — such as banks, payment networks or digital wallets — is responsible for securing customers’ biometrics data, and they need to ensure their software is written in a robust way,” states Malcho, before highlighting the need for cloud security as the use of cloud platforms and solutions widen the attack surface area.
Misusing generative AI
See also: AI agents will serve you now
While there is no clear evidence that generative AI is increasingly being used to launch cyberattacks, it is believed that bad actors have been using the technology to support their malicious activities. “Generative AI can help improve the quality of phishing emails, making them more convincing and sophisticated. We also think malware authors are leveraging generative AI to create malware mutations, which lowers the barrier to launching cyberattacks,” says Malcho.
What’s for sure is the rise of fake generative AI assistants used as traps set by info-stealers (or malware designed to steal sensitive user information from the affected computer systems). According to ESET’s H1 2024 Threat Report, an info-stealing malware called Rilide Stealer was spotted misusing the names of generative AI assistants, such as OpenAI’s Sora and Google’s Gemini, to entice potential victims.
Malcho shares that while tech companies have been putting up guardrails to prevent malicious use of generative AI, bad actors will always try to bypass security controls. They will manipulate the large language models in the generative AI systems by feeding them malicious inputs disguised as legitimate user prompts.
See also: Safeguarding trust in the digital age: A blueprint for financial institutions
“Tech companies are still struggling to control their generative AI solution. So, they are learning by observation and continuously adding controls to mitigate the risks, such as training smaller models to be filters or restrictions for LLMs. Beyond tech, they also have to deal with grey areas like censorships and cultural sensitivities that differ by country,” he says.
Building up cyber defence
As generative AI and biometrics become a staple in our digital lives, how can banks and organisations protect themselves from the risks that come along with those technologies? Malcho emphasises the need to first provide cybersecurity awareness training so that employees can recognise cyber threats, avoid potentially harmful actions, and take informed steps to protect the business.
Organisations must also adopt a multi-layered defence strategy, as cybercriminals often employ diverse methods to infiltrate systems. One of the multiple checkpoints of such a strategy could include a security question and answer (such as “What’s your favourite beer?”) for user verification.
“Additionally, they need to know what their IT infrastructure and network segmentation is like, apply multi-factor authentication and the right data access controls on systems, encrypt data, and more. For instance, app developers should not have access to biometrics data. Organisations should also be able to find breaches and mitigate them as fast as possible,” advises Malcho.
The massive volume of rapidly evolving cyber threats calls for the use of machine learning or AI to detect, predict and respond to cyber threats. Malcho says: “Cyber crime-as-a-service (where cybercriminals sell their tools and services to make it easy to launch an attack) is on the rise and will not go away soon. The good news is that those tools or services share the same attack patterns or a mutated version of a known malware. So, machine learning or AI can help automatically detect those threats and respond accordingly.”
He adds that combatting cybercrime requires a collaborative effort. “There are lots of grey zones [in cybersecurity]. For example, some countries tolerate some cybercriminal activities due to geopolitical issues. Cybersecurity companies like us can’t influence that. Still, we are providing our cyber threat and attack insights, expertise, telemetry and more to law enforcement authorities globally so that they can combine it with data from other agencies and cybersecurity companies to take the necessary actions. Cyber crime-as-a-service is on the rise because it’s still good business [as bad actors are making profits] so there needs to be a clear deterrence for cybercrime.”