AI and ML curbing financial fraud
In 2016, thieves stole $16 billion, almost $1 billion more than in 2015, using stolen credit or debit cards or by creating fraudulent accounts.
At first blush, it would appear that the good guys are rapidly losing ground, but those numbers do not tell the whole story: over the same period, losses per victim dropped an impressive 11%, falling from $1,165 to $1,038 per targeted individual.
The average out-of-pocket cost to the consumer in these cases decreased from $56 to $48. The overall rise in fraud is driven partly by a growing number of scammers and partly by yearly inflation. Artificial intelligence (AI) and machine learning (ML) played a significant role in the decline in loss per victim, and they seem ready to drive losses down even further.
Prior to the advent and implementation of AI/ML fraud detection systems, merchants and card issuers relied on an exhausting set of roughly 300 rules to determine whether or not a transaction would be approved. These rules covered things such as the number of purchases, which products or services were bought (electronics being more suspicious than groceries), the zip code where the transaction occurred, a sudden jump in the number of purchases on an account, and so on. Rules were, and still are, used in conjunction with a point system: a go or no-go decision is rendered based on the score and on the merchant's and card issuer's tolerance for loss.
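The rules-plus-points approach described above can be sketched in a few lines of Python. The rule names, point weights, and approval threshold here are illustrative assumptions, not any actual issuer's rule set:

```python
# A minimal sketch of a rules-based scoring engine of the kind the
# article describes. Every rule that fires adds points; the merchant
# or issuer approves only if the total stays under their threshold.
# All rules, weights, and the threshold are invented for illustration.

RULES = [
    # (name, predicate over a transaction dict, points added when it fires)
    ("high_value_electronics", lambda t: t["category"] == "electronics" and t["amount"] > 500, 30),
    ("unusual_zip",            lambda t: t["zip"] not in t["home_zips"], 20),
    ("purchase_spike",         lambda t: t["purchases_today"] > 10, 25),
]

def score_transaction(txn):
    """Sum the points of every rule that fires for this transaction."""
    return sum(points for _, pred, points in RULES if pred(txn))

def approve(txn, threshold=50):
    """Go / no-go: approve only while the risk score stays below the
    merchant's or issuer's loss-tolerance threshold."""
    return score_transaction(txn) < threshold

# A routine grocery purchase trips no rules and is approved.
txn = {"category": "groceries", "amount": 42.0,
       "zip": "10001", "home_zips": {"10001"}, "purchases_today": 2}
print(approve(txn))  # True
```

The key design point is that each rule is a blunt, hand-written heuristic; the only "learning" is a human periodically re-tuning the weights and threshold.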
With AI and ML, thousands of data points are evaluated and compared against individual and historical data in order to detect fraud. Fraud detection applications will review a customer's social media accounts, employment history, education and much more to determine whether a purchase is consistent with the behavior of people with similar backgrounds.
More granular examinations of the item being bought are also made. A purchase of several pairs of shoes, for example, with barcodes indicating different sizes or brands that are frequently stolen, is a red flag, since the footwear may be destined for resale on the street. AI/ML still uses a point-based system, and merchants and card issuers still establish acceptable risk thresholds based on the transaction “score”, but the analysis of the transaction has gone from a 30,000-foot view to one of 20 feet.
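The shoe example above can be expressed as a simple item-level check. The brand list, item fields, and thresholds below are hypothetical, chosen only to show the shape of such a rule:

```python
# Illustrative item-level check from the article: several pairs of
# shoes in one basket, in different sizes or from frequently stolen
# brands, suggests the goods are bound for street resale.
# The brand set and the three-pair threshold are assumptions.

FENCE_RISK_BRANDS = {"BrandX", "BrandY"}  # hypothetical high-theft brands

def resale_red_flag(items):
    """Flag baskets of 3+ pairs of shoes with mixed sizes or risky brands."""
    shoes = [i for i in items if i["type"] == "shoe"]
    sizes = {i["size"] for i in shoes}
    risky_brand = any(i["brand"] in FENCE_RISK_BRANDS for i in shoes)
    return len(shoes) >= 3 and (len(sizes) > 1 or risky_brand)

basket = [
    {"type": "shoe", "brand": "BrandX", "size": 9},
    {"type": "shoe", "brand": "BrandX", "size": 11},
    {"type": "shoe", "brand": "BrandX", "size": 7},
]
print(resale_red_flag(basket))  # True: three pairs, three different sizes
```

In a real system the output of a check like this would feed into the overall transaction score rather than trigger a decline on its own.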
Consumers and merchants also suffer when legitimate transactions are denied. In 2016, retailers lost nearly 2% of revenue to fraudulent claims and false declines. A false decline occurs when an honest customer cannot complete a purchase because the system deems it suspect. A business's reputation also takes a hit when it turns away a legitimate buyer, a loss that is more difficult to assess. Fortunately, AI/ML are improving those outcomes as well.
Mastercard, among the first to implement AI/ML, was able to reduce false declines by an impressive 80% compared with the old 300-rule method, and it believes results will improve further as it gathers more data and develops better algorithms.
Even with AI/ML, a human decision is still involved in many cases. Per the Fraud Benchmark Report, published by Cybersource, 83% of businesses throughout North America conduct manual reviews of online orders, examining 29% of their total transactions on average, and humans still catch scammers that AI systems allowed to slip through the cracks.
One reason people are able to pick up on fraud cases that machines still sometimes miss is that humans are less willing to accept a statistical anomaly. In any large data set, some points will fall outside the norm. Because detection systems apply an overall score across thousands of characteristics, the system may still pass a transaction that carries one glaring red flag, so long as the total score remains within the acceptable range; a human reviewer will not let that fly.
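The averaging effect described above is easy to demonstrate. In this sketch, the feature names, values, and both cutoffs are invented; it shows only how one extreme feature can be diluted by many normal ones, and how a human-style hard stop catches it:

```python
# Sketch of how an aggregate score can absorb one glaring anomaly.
# Each feature is a risk signal scaled to 0..1 (higher = riskier).
# All names, values, and thresholds are illustrative assumptions.

features = {
    "amount_vs_history": 0.10,  # normal
    "merchant_risk":     0.05,  # normal
    "geo_distance":      0.95,  # glaring red flag: far from home
    "device_match":      0.00,  # normal
    "time_of_day":       0.10,  # normal
}

# The machine averages everything: (0.1+0.05+0.95+0.0+0.1)/5 = 0.24,
# comfortably under a 0.5 cutoff, so the transaction sails through.
overall = sum(features.values()) / len(features)
auto_approved = overall < 0.5

# A human reviewer applies a hard stop instead: any single feature
# near its maximum gets escalated regardless of the average.
escalate = any(v > 0.9 for v in features.values())

print(auto_approved, escalate)  # True True: machine passes it, human flags it
```

A production system might address this with per-feature veto rules or a non-linear model rather than a plain average, but the dilution problem the article describes is exactly this.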
AI/ML has undoubtedly curbed fraud in the financial sector, and others have taken notice. Insurance carriers, the Securities and Exchange Commission (SEC) and the Internal Revenue Service (IRS), among others, are all beginning to use techniques and systems, based on those developed by the financial industry, to uncover fraud within their respective domains.
On the flip side, unfortunately, sophisticated cyber crooks are drawing on these same lessons to develop AI/ML systems of their own for perpetrating crimes. It is a never-ending cycle of good guys versus bad, but for now the good folks seem to be gaining ground, and we can expect AI/ML to play an increasingly prominent role in thwarting fraud.
Reuben Jackson, our editorial contributor in New York