Deepfakes in finance: a threat to be wary of?
Since the start of the COVID-19 crisis, the number of fraud cases has continued to grow. In late June, Action Fraud reported that over £16 million had been lost to online shopping fraud during lockdown.
From posing as government officials to fake online TV subscription services, fraudsters are trying every way they can to trick people into handing over their personal details and prey on their hard-earned savings. Now, the latest weapon fraudsters are adding to their arsenal is synthetic identity fraud.
Fraudsters are turning to synthetic identities, using “deepfaked” images and videos, to open new accounts. According to McKinsey, it’s already the fastest-growing type of financial crime in the US, and it won’t be long before it spreads to Europe – if it hasn’t already.
What is synthetic identity fraud?
A synthetic identity is built either from entirely fabricated information or from a unique amalgamation of false details and stolen or modified personally identifiable information (PII), whether hacked from a database, phished from an unsuspecting person or bought on the dark web. Because the impact on those whose PII has been used is limited, this kind of fraud often goes unnoticed for longer than traditional identity fraud.
Traditional anti-fraud measures can’t keep up with these new tactics. Standard behaviour-modelling methods cannot reliably differentiate between what is and isn’t fraud, and, coupled with the unreliability of PII, companies will need a more effective approach to combat the threat of synthetic identities. Accurately verifying the identities of new applicants at the onboarding stage will be critical.
Customers submit a selfie and a picture of their ID document in an app or online form, and within seconds AI algorithms analyse and compare the submissions. They assess the document for potential forgery or alterations, confirm that the selfie shows a ‘live’ person, and verify the two against one another. This digital identity verification is effective against synthetic identities for several reasons, but one stands out: most fraudsters don’t want to use a picture of their own face to commit fraud.
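To illustrate the three checks described above, here is a minimal sketch of how the decision logic might be combined. This is a hypothetical example with illustrative thresholds; in a real system, the liveness, forgery and face-match scores would come from trained models, not be supplied directly.

```python
from dataclasses import dataclass


@dataclass
class Submission:
    """Scores a (hypothetical) model pipeline would produce for one applicant."""
    liveness_score: float  # 0-1, higher = more likely a live person in the selfie
    forgery_score: float   # 0-1, higher = more likely the ID document is altered
    match_score: float     # 0-1, higher = selfie and ID photo look like the same person


def verify(sub: Submission,
           liveness_min: float = 0.90,
           forgery_max: float = 0.20,
           match_min: float = 0.85) -> bool:
    """Approve only if all three checks pass; thresholds are illustrative."""
    return (sub.liveness_score >= liveness_min
            and sub.forgery_score <= forgery_max
            and sub.match_score >= match_min)


# A clean submission passes; a tampered-looking document fails.
print(verify(Submission(liveness_score=0.95, forgery_score=0.05, match_score=0.92)))  # True
print(verify(Submission(liveness_score=0.95, forgery_score=0.40, match_score=0.92)))  # False
```

The key design point is that the checks are conjunctive: a synthetic identity must defeat the document check, the liveness check and the face match simultaneously, which is why a genuine selfie is such a deterrent.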
A multi-layered defence approach
However, the rise of deepfake technology means it is becoming easier for fraudsters to generate realistic live images or videos for these synthetic identities and commit serious levels of fraud. University College London’s (UCL) recent threat report ranked deepfake technology as one of the biggest threats to our lives online right now, highlighting that deepfakes are no longer limited to the dark corners of the internet.
While this is not yet happening at scale, it’s important for banks and financial providers to take precautionary measures. A multi-layered approach is critical for a robust anti-fraud system: one that uses physical biometrics at appropriate stages, since two-factor and knowledge-based authentication are no longer sufficient to prove someone’s identity. It should also involve link analysis, which checks for overlaps in PII and helps spot suspicious identities. Finally, a backup line of defence made up of human experts will ensure false identities are kept out.
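The link-analysis layer mentioned above can be sketched very simply: index each application by its PII values and flag any value shared across applications. This is a hypothetical illustration; the field names and data are invented, and production systems would use far richer graph analysis.

```python
from collections import defaultdict


def find_overlaps(applications, fields=("phone", "address", "ssn")):
    """Return PII values that appear in two or more applications.

    Each application is a dict with an "id" plus optional PII fields.
    A shared phone number or SSN across applications is a classic
    signal of a synthetic-identity ring.
    """
    index = defaultdict(set)
    for app in applications:
        for field in fields:
            value = app.get(field)
            if value:
                index[(field, value)].add(app["id"])
    return {key: ids for key, ids in index.items() if len(ids) > 1}


# Invented sample data: A1 and A2 share a phone; A1 and A3 share an SSN.
apps = [
    {"id": "A1", "phone": "555-0100", "ssn": "111-22-3333"},
    {"id": "A2", "phone": "555-0100", "ssn": "999-88-7777"},
    {"id": "A3", "phone": "555-0199", "ssn": "111-22-3333"},
]
print(find_overlaps(apps))
```

Either overlap alone might be innocent (family members share addresses, for example), which is why this layer feeds suspicious clusters to the human-expert backstop rather than rejecting applicants outright.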
As fraudsters become more tactical in their methods, banks and financial providers will need the right technology in place to match the threat. A thorough digital check of an ID document, using the right identity verification solution, can help in the fight against the growing threat synthetic identities pose. Combining this with a mobile-phone selfie, a liveness test and an expert eye to cross-check all elements of a person’s identity provides a robust system.
With this technology to hand, banks can rest assured that they have the right processes and procedures in place to make sure their customers really are who they say they are. Now, all banks have to do is innovate.