From viral fun to financial fraud: How deepfake technology is threatening financial services
Online fraud has been steadily rising for several years. As services become increasingly digital, fraudsters have followed the opportunity.
During the height of the pandemic in 2020, UK online fraud rose by a third according to a Which? survey, while EY estimates financial crime is costing up to $3.5 trillion a year in the US.
This has created an arms race in digital security. Online trust and safety are fundamental to governments, businesses and individuals interacting online. But as security measures advance, criminals are responding with fraud tactics that are more sophisticated and creative than ever before.
Among the most creative are those using deepfake technology. A source of fun and controversy in equal measure, deepfakes have gathered increasing public attention in recent years: from viral TikTok videos of Tom Cruise to apps that can reanimate photos of deceased relatives into video. You may have even seen fake videos of celebrities on social media and not realised.
As the technology becomes more prolific, the threat posed to the financial services industry is becoming very real.
What is deepfake technology?
Artificial intelligence (AI) and machine learning are at the heart of deepfake technology. Trained on a dataset of images, audio and/or video, deepfake models map one person's likeness onto another, producing footage of people saying and doing things that they never said or did.
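To make this concrete, one common face-swap architecture trains a shared encoder with a separate decoder per identity; the swap comes from mixing them. The Python sketch below is a deliberately minimal linear toy of that data flow using NumPy. All the shapes and data here are illustrative assumptions: real systems train deep convolutional networks on actual face images, not least-squares fits on random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two people's face data: 200 samples each, as
# 64-dimensional vectors. (Purely illustrative -- real systems train
# on actual face images.)
faces_a = rng.normal(size=(200, 64))
faces_b = rng.normal(size=(200, 64))

# A shared "encoder": here just a fixed random projection into a
# 16-dimensional latent space common to both identities.
encoder = rng.normal(size=(64, 16))
latent_a = faces_a @ encoder
latent_b = faces_b @ encoder

# One "decoder" per identity, fitted by least squares to reconstruct
# that identity's faces from the shared latent codes.
decoder_a, *_ = np.linalg.lstsq(latent_a, faces_a, rcond=None)
decoder_b, *_ = np.linalg.lstsq(latent_b, faces_b, rcond=None)

# The swap: encode faces of person A, decode with person B's decoder.
# In a trained deepfake model this yields A's pose and expression
# rendered with B's appearance; this linear toy only demonstrates the
# data flow, not a realistic result.
swapped = (faces_a @ encoder) @ decoder_b
print(swapped.shape)  # (200, 64)
```

The key design point is the shared latent space: because both decoders learn to read the same codes, a code produced from one person's face can be rendered by the other person's decoder.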
The number of deepfake videos posted online is doubling year-on-year. And while calls for more regulation have been growing, little has materialised to date. The US approved a bill last November to commission further research into deepfakes, while the UK is still considering a ban on non-consensual deepfake videos.
Like all AI-based technologies, deepfake models get smarter the more data they have. This means they can generate a more realistic likeness and more closely match a subject's mannerisms and expressions in video.
They are also scalable. A criminal attempting to impersonate a victim using a mask has to go to considerable lengths to make that happen. With deepfake technology, a much smaller amount of effort is required to cause a significant amount of damage.
Attackers also work on a feedback loop. They will test an idea and quickly give up on it if it doesn’t work. But if it does work, they will persevere with it. If they find vulnerabilities, those vulnerabilities can then be exploited very quickly.
The threat to financial services
In financial services, deepfake technology can be used to attack organisations in a number of ways:
Ghost fraud
With ghost fraud, criminals use the personal data of a deceased person for financial gain. This can take many forms: it might be used to access online services, tap into savings accounts, build a credit score, or apply for cards, loans or benefits.
Deepfake technology lends credibility to such applications, as the person checking an application would see a moving, speaking figure on screen and could be persuaded that this was a live human being.
Fraudulent claims from the deceased
In a similar vein, criminals can also make fraudulent insurance or other claims on behalf of the deceased. Claims on pensions, life insurance and benefits can successfully continue to be made for many years after a person dies, whether by a family member or a professional fraudster. Again, deepfakes could be used to persuade an official that a customer is still alive.
New account fraud
Also known as application fraud, this type of fraud occurs when fake or stolen identities are used specifically to open bank accounts. A criminal can create a deepfake of an applicant and use it to open an account, bypassing many of the usual checks.
The criminal could then use that account to launder money or run up large amounts of debt. Once they know how to do this, they can create fake identities at scale to attack financial services globally. This is an expensive and growing type of fraud, accounting for $3.4 billion in losses.
Synthetic identity fraud
Arguably the most sophisticated deepfake tactic, synthetic identity fraud is extremely challenging to detect. Rather than stealing one identity, criminals combine fake, real and stolen information to create a ‘person’ who doesn’t exist.
These synthetic identities are then used to apply for credit/debit cards or complete other transactions to help build a credit score for the new, non-existent customers.
Forms of synthetic identity fraud are the fastest-growing type of financial crime, and deepfake technology is lending these attacks another layer of apparent legitimacy.
The deepfake threat is undeniable, but how can organisations get ahead of the risk?
Fighting back with biometrics
Biometrics provides financial services institutions with a highly secure and highly usable means of verifying and authenticating online users.
Biometric face verification enables an online user to verify their face against the image in a trusted document (such as a passport or driver’s licence). This is ideal for the first interaction with a new customer, for example at onboarding.
Online face authentication then enables a returning customer to authenticate themselves against the original verification every time they want to log in to their account.
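The matching step behind this authentication can be sketched as follows: systems typically compare high-dimensional face embeddings and accept a login only if the new capture is close enough to the embedding stored at verification. The vectors and the 0.8 threshold below are made-up values for illustration, not figures from any real product.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(probe: np.ndarray, enrolled: np.ndarray,
            threshold: float = 0.8) -> bool:
    # Accept the attempt only if the fresh capture is close enough to
    # the embedding stored at onboarding. The threshold trades off
    # false accepts (fraud risk) against false rejects (user friction).
    return cosine_similarity(probe, enrolled) >= threshold

enrolled = np.array([0.9, 0.1, 0.4])      # stored at onboarding
genuine = np.array([0.85, 0.15, 0.42])    # same person, new capture
impostor = np.array([-0.2, 0.9, -0.1])    # different person

print(matches(genuine, enrolled))   # True
print(matches(impostor, enrolled))  # False
```

In production, this comparison is paired with liveness detection so that a replayed video or deepfake, however similar its embedding, is rejected before matching even takes place.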
There are many advantages to biometric face verification and authentication. They are secure, convenient, user-friendly, inclusive and scalable, enabling banks and other financial services providers to offer an effortless user experience to their customers.
By acting now and adopting advanced verification methods, organisations can stay one step ahead of the fraudsters and focus on the most important thing – serving their genuine customers.