How Explainable Machine Learning Enhances Credit Card Fraud Prevention

Card-not-present fraud detection requires sophisticated techniques for uncovering new forms of illicit activity. Criminals, often starting with some form of identity theft, have many methods of defrauding consumers and card companies.

It’s a constant battle to detect new forms of fraud quickly. To stay ahead of the curve in fraud prevention with machine learning, you need to understand why a transaction is being predicted as fraudulent, determine whether it represents an emerging trend, and adjust business rules to prevent new patterns of fraud from taking hold.

Understanding why is a task for explainable approaches to ML, and similarity is one such method. Because explainability is inherent to the nature of similarity, it is well suited to fraud prevention, provided it can operate at the speed and scale necessary to deliver millisecond response times on high-dimensional data.

One of the most fundamental fraud prevention requirements that explainability supports is knowing when an individual’s online activity or behavior looks suspicious. Detecting anomalous consumer behavior is the next frontier in fighting fraud from an offensive posture rather than a defensive one.

Identity access management underpins the fabric of data relating to a credit card holder’s multiple identity data points and ensures that this information is secure and accurate. In account takeover fraud, criminals use one piece of an individual’s identity, such as an email address, to gain access to financial accounts, and the resulting identity theft can cause extensive damage. This is a growing area of concern, already costing approximately $16B annually.

Identity access management platforms are typically rule-based systems in which specific conditions are set and monitored to trigger verification or flag suspicious activity. What these platforms need is an intelligence layer sitting on top of the data that monitors finer levels of anomalous behavior, compared to what is considered normal activity, at an individual user level.

Signals might include, for example, the time spent on the log-in page, the pattern of keystrokes used when placing an order, or the way an order is placed in terms of size, frequency, and shipping location. The sketch below illustrates the idea.
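As a rough illustration of that intelligence layer, here is a minimal Python sketch (not any vendor’s implementation) that keeps a per-user statistical baseline over hypothetical behavioral features and flags sessions that deviate sharply from that user’s own norm. The feature names and the 3-sigma threshold are illustrative assumptions only.

```python
import numpy as np

# Hypothetical per-user behavioral features (names are illustrative only):
# seconds spent on the log-in page, average keystroke interval, order amount,
# and orders placed per week.
FEATURES = ["login_seconds", "keystroke_interval_ms", "order_amount", "orders_per_week"]

class UserBaseline:
    """Per-user statistical baseline that flags deviations from that user's normal activity."""

    def __init__(self, history):
        # history: rows = past sessions for ONE user, columns in FEATURES order
        history = np.asarray(history, dtype=float)
        self.mean = history.mean(axis=0)
        self.std = history.std(axis=0) + 1e-9   # avoid division by zero

    def deviations(self, session):
        # z-score per feature: how many standard deviations from this user's norm
        z = np.abs((np.asarray(session, dtype=float) - self.mean) / self.std)
        return dict(zip(FEATURES, z))

# A new session is compared against the user's own history; this supplements,
# rather than replaces, the platform's existing rule-based checks.
baseline = UserBaseline([[12, 180, 60.0, 2],
                         [15, 175, 75.0, 3],
                         [11, 190, 55.0, 2]])
suspicious = {f: round(float(z), 1)
              for f, z in baseline.deviations([3, 40, 900.0, 9]).items()
              if z > 3.0}   # 3-sigma cutoff is an arbitrary illustrative choice
print(suspicious)  # features that deviate strongly from this user's normal behavior
```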

Similarity-based explainable AI is a method well suited to training on a single class (what is normal) and then reporting when one or more combinations of activities are “not normal,” while revealing the key areas of difference between the two. In other words, explainable AI (XAI) can predict when something is not normal and explain why. This is very valuable in preventing account takeover fraud: the fraudster is ultimately pretending to be someone else when accessing accounts, and if they have already gained access, how will you know when something they are doing is out of the ordinary?
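To make the idea concrete, here is a minimal, hedged sketch of a similarity-based approach: fit a nearest-neighbor index on normal activity only, score a new session by its distance to the most similar normal sessions, and explain a flag by the features contributing most to that distance. This is a generic illustration using scikit-learn, not simMachines’ proprietary method, and the feature names and data are made up.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Train on a single class: feature vectors describing NORMAL account activity only.
# Feature names and distributions are illustrative assumptions, not real platform data.
FEATURES = ["login_seconds", "keystroke_interval_ms", "order_amount", "ship_distance_km"]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[12.0, 180.0, 60.0, 15.0],
                    scale=[3.0, 20.0, 25.0, 10.0],
                    size=(5000, 4))

# Standardize so no single feature dominates the distance metric.
mu, sigma = normal.mean(axis=0), normal.std(axis=0)
normal_std = (normal - mu) / sigma
nn = NearestNeighbors(n_neighbors=5).fit(normal_std)

def score_and_explain(session):
    """Score a session by its distance to similar normal sessions and
    report which features drive the difference (the 'why')."""
    q = (np.asarray(session) - mu) / sigma
    dist, idx = nn.kneighbors(q.reshape(1, -1))
    anomaly_score = float(dist.mean())            # far from all normal behavior => suspicious
    neighbors = normal_std[idx[0]].mean(axis=0)   # centroid of the most similar normal sessions
    contrib = (q - neighbors) ** 2                # per-feature contribution to the gap
    ranked = sorted(zip(FEATURES, contrib), key=lambda t: -t[1])
    return anomaly_score, [(name, round(float(c), 2)) for name, c in ranked]

score, reasons = score_and_explain([2.0, 40.0, 950.0, 4000.0])
print("anomaly score:", round(score, 2))
print("top reasons:", reasons[:3])
```

The explanation is simply the ranked per-feature contributions to the distance from the most similar normal behavior, which is what lets an analyst see which combination of factors made the session look abnormal.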

Financial services executive teams focused on security and fraud prevention want to know how fraudsters are gaining access to client accounts and, once they have gained access, how fraudulent activity can be spotted. Explainable anomaly detection lets these executives see how behavior patterns reveal fraudulent activity and exactly which combination of factors indicates an abnormality. Quickly spotting new, emerging patterns and preventing recurrences is a great application for explainable AI technologies.

Of course, you have to be able to do this at speed and scale across high-dimensional data. In recent benchmarks against other explainable methods, simMachines technology performed very fast and stood alone in maintaining millisecond response times as data width expanded to thousands of columns. This is a valuable way in which explainability can enhance machine learning for fraud prevention.
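As a back-of-the-envelope illustration of why data width matters, the sketch below times a brute-force nearest-neighbor scan as the number of columns grows. The numbers depend entirely on hardware and are not a reproduction of the benchmark mentioned above; production systems typically rely on smarter indexing to hold millisecond latencies at this scale.

```python
import time
import numpy as np

# Brute-force nearest-neighbor scan: cost grows linearly with data width.
# Timings are hardware-dependent and purely illustrative.
rng = np.random.default_rng(1)
n_rows = 10_000  # candidate "normal" transactions kept in memory

for n_cols in (100, 1_000, 4_000):
    index = rng.standard_normal((n_rows, n_cols)).astype(np.float32)
    query = rng.standard_normal(n_cols).astype(np.float32)
    start = time.perf_counter()
    dists = np.linalg.norm(index - query, axis=1)  # exact scan over all rows
    nearest = int(np.argmin(dists))
    elapsed_ms = (time.perf_counter() - start) * 1_000
    print(f"{n_cols:>5} columns: {elapsed_ms:6.1f} ms per query (nearest row {nearest})")
```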