By Steve Goddard, Fraud Subject Matter Expert, and Dr. David Sutton, Chief Innovation Officer

It’s widely known that criminal enterprises are run like legitimate businesses. They have HR departments, finance teams – everything an ‘above board’ business might have. And, just like us, they get excited about the immense potential of cutting-edge technologies.

Take Generative AI. In February this year, $25 million was stolen using an AI-based deepfake video call. Clearly, the technology is a hit among criminals, too.

Generative AI (GenAI) is a tool to aid efficiency. You may have used it to manage your schedule at work, or to plan a holiday in your spare time.  

It’s also increasingly vital for some software developers, who use it to help write code and build software – whether those developers are above board, or work for other, far shadier companies…

Fraudsters are now more effective than ever 

Criminals are using AI to write malicious code more efficiently, quickly circumventing any new security measures put in place. Others are using it to scale their attacks on innocent victims. That could mean creating more phishing emails – an increase of over 4,000% since ChatGPT launched, according to SlashNext – and more authentic-sounding messages, such as a believable WhatsApp text supposedly from our children, or an urgent-sounding SMS from our bank.

“Until now, we’ve been told that badly worded emails or messages are telltale signs of scams. But GenAI changes that. It allows criminals to create well-worded, grammatically correct messages that leave few errors for victims to spot.”

 

This access to quality Artificial Intelligence tools is unfortunately allowing opportunistic criminals to enter the fray and use the technology for fraud and scams. Fraud-as-a-Service (FaaS) marketplaces on the dark web readily sell GenAI-powered chatbots, spam kits, and phishing kits to anyone. Because these tools handle language, tone, and grammar without the need for human proofreading, they’re hugely popular.

It’s no wonder people are starting to ask whether a solution to this problem even exists.

Has AI given the criminals a silver bullet?  

With the continuous rise in scams, you could be forgiven for thinking fraudsters have not only stolen millions of dollars, but are also winning the race against those seeking to protect the payments system.

There’s no denying their technical capabilities. For instance, they use Large Language Models (LLMs) trained on dark web data to get around the constraints and protections built into ChatGPT.

However, there is no need to be alarmed. Operating without rules or regulations, the criminal underworld has always been quick to adapt new technologies to nefarious ends. But those of us who work to protect payments systems can be confident that we have innovation and the smartest brains in the business on our side to create Tech for Good.

The answer is simple: AI can defeat AI.

“The world has barely scratched the surface of the positive benefits AI can bring to our society, as well as the benefits it brings to the reduction of scams, fraud, and financial crime.

“Far from being stifled by regulation, these new parameters needed to protect consumers are actually inspiring vital innovation in AI.” 

 

The power of AI for real-time fraud detection

Fraud and financial crime are adversarial problems. When either the attacker or the defender raises their game, the other side must respond in kind. We have seen that fraudsters have started to up theirs – so what can the banks do to counter this?

We believe there are two clear strategies banks can use to leverage Generative AI to fight fraud:  

    1. Building more accurate fraud prediction models
    2. Increasing the number of alerts that fraud analysts can work

Banks have used Machine Learning (ML) to counter fraud for decades. ML algorithms inform automated decisioning systems that approve or decline every transaction in a matter of milliseconds. Fraud analysts then investigate the alerts these systems generate and contact consumers to take the actions needed to resolve a case.
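
To make that flow concrete, here is a minimal sketch of how such a decisioning loop fits together. Every name in it – the features, the thresholds, the stub model – is illustrative, not any bank’s production logic:

```python
# Minimal sketch of ML-driven transaction decisioning. Feature names,
# thresholds, and the stub model are hypothetical illustrations.
from dataclasses import dataclass


@dataclass
class Decision:
    approved: bool
    raise_alert: bool  # True -> route to a human fraud analyst
    risk_score: float


def decide(txn: dict, model, alert_at: float = 0.7, decline_at: float = 0.9) -> Decision:
    """Score one transaction in-line and decide in a matter of milliseconds."""
    features = [[txn["amount"], txn["seconds_since_last_txn"], txn["is_new_payee"]]]
    score = float(model.predict_proba(features)[0][1])  # P(fraud), scikit-learn style
    return Decision(approved=score < decline_at,
                    raise_alert=score >= alert_at,
                    risk_score=score)


class StubModel:
    """Stand-in for a trained fraud model so the sketch runs end to end."""
    def predict_proba(self, X):
        # Toy heuristic: larger amounts look riskier. A real model is learned from data.
        return [[1 - min(x[0] / 10_000, 1.0), min(x[0] / 10_000, 1.0)] for x in X]


print(decide({"amount": 9_500, "seconds_since_last_txn": 30, "is_new_payee": 1},
             StubModel()))
```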

Combined, these automated systems and human analysts represent the best line of defense against fraud, scams, and all financial crime.

  1. Building more accurate fraud prediction models 

What sets Generative AI apart from older algorithms is its ability to understand. This unprecedented understanding is a necessary condition to be able to generate realistic new data – the goal of Generative AI. This makes sense if you think about it: you need to understand another person’s face much better to be able to draw it from memory than to be able to recognize it in a photo.  

Used correctly, Generative AI’s data understanding can propel downstream machine learning models to reach new heights in performance. In that regard, it’s a bit like jet fuel – add it to your modeling stack and you will catch a lot more fraud!

To realize this potential, the financial services industry needs a class of Generative Foundation Models built specifically for transactional data. These models extract powerful representations of transaction behaviors that are plugged into downstream fraud models to level up detection rates. They support dozens of other predictive business use cases too, like loyalty or credit risk management. 
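
As a rough illustration of the pattern – with the encoder interface and every name below assumed for the sketch, not TallierLTM’s actual API – a pretrained transaction encoder supplies behavioral embeddings that are concatenated with conventional engineered features before training a downstream fraud classifier:

```python
# Hypothetical sketch: foundation-model embeddings feeding a downstream fraud model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


class TransactionEncoder:
    """Placeholder for a pretrained foundation model that maps a customer's
    transaction history (a sequence of feature vectors) to a fixed-size embedding."""
    def embed(self, history: np.ndarray) -> np.ndarray:
        return history.mean(axis=0)  # stand-in; a real model learns far richer features


def enrich(histories, engineered, encoder):
    """Concatenate behavioral embeddings with conventional engineered features."""
    embeddings = np.vstack([encoder.embed(h) for h in histories])
    return np.hstack([embeddings, engineered])


# Toy data: 100 customers, 20 transactions each, 5 raw features per transaction.
rng = np.random.default_rng(0)
histories = [rng.normal(size=(20, 5)) for _ in range(100)]
engineered = rng.normal(size=(100, 3))
labels = rng.integers(0, 2, size=100)  # 1 = fraud

X = enrich(histories, engineered, TransactionEncoder())
clf = GradientBoostingClassifier().fit(X, labels)  # the downstream fraud model
```

The same embeddings can feed other downstream models – loyalty, credit risk, and so on – which is what makes the foundation-model approach reusable.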

At the end of 2023, Featurespace announced the world’s first Generative Foundation Model of this type – TallierLTM™. TallierLTM’s impact on fraud detection performance has started to make serious waves. In a high-profile scam detection competition led by Pay.UK, a TallierLTM-enhanced fraud model more than halved the value of scams in the UK’s inter-bank Faster Payments system. It returned an impressive Value Detection Rate of 56% at a False Positive Ratio of 5:1. What makes this result even more important is that it was measured only on the scams that were missed by banks’ incumbent systems.
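
For readers unfamiliar with these metrics, the sketch below shows their plain reading: Value Detection Rate is the share of total fraud value that gets flagged, and a 5:1 False Positive Ratio means five genuine transactions alerted for every fraudulent one. (The Pay.UK exercise’s exact methodology may differ in detail.)

```python
# Illustrative computation of Value Detection Rate (VDR) and False Positive Ratio.
def vdr_and_fp_ratio(transactions, threshold):
    """transactions: iterable of (risk_score, value, is_fraud) tuples."""
    alerted = [t for t in transactions if t[0] >= threshold]
    total_fraud_value = sum(v for _, v, f in transactions if f)
    caught_fraud_value = sum(v for _, v, f in alerted if f)
    true_alerts = sum(1 for _, _, f in alerted if f)
    false_alerts = sum(1 for _, _, f in alerted if not f)
    vdr = caught_fraud_value / total_fraud_value if total_fraud_value else 0.0
    fp_ratio = false_alerts / true_alerts if true_alerts else float("inf")
    return vdr, fp_ratio  # e.g. (0.56, 5.0) reads as "VDR 56% at 5:1"
```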

  2. Increasing the number of alerts that fraud analysts can work

Generative AI tools can increase the number of alerts analysts process by making them more efficient. This can be done in multiple ways: for example, by extracting and presenting insights from the data more clearly, or by assisting analysts working cases – preparing them for a customer call, or writing up case notes on their behalf.

With more efficient analysts, a bank’s automated decisioning systems can raise more alerts and prevent more fraud. Banks must draw a line in the sand somewhere and say: ‘transactions with risk scores over this threshold will be declined, while transactions with risk scores under it will be approved’. While many factors govern where a bank sets its risk-score threshold, in practice it is often dictated by operational constraints like ‘how many alerts can my analysts process each day?’. If analysts can process more alerts, the threshold can be lowered, and more fraud will be prevented.
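
One simple, capacity-driven way to derive that threshold is sketched below with synthetic scores; it deliberately ignores the other factors – fraud value at stake, customer friction, regulation – that a real bank would weigh alongside capacity:

```python
# Sketch: deriving a risk-score threshold from analyst capacity (synthetic data).
import numpy as np


def threshold_for_capacity(daily_scores: np.ndarray, alerts_per_day: int) -> float:
    """Return the score above which roughly `alerts_per_day` transactions fall."""
    if alerts_per_day >= len(daily_scores):
        return float(daily_scores.min())
    # The (1 - capacity/volume) quantile leaves ~alerts_per_day scores above it.
    return float(np.quantile(daily_scores, 1 - alerts_per_day / len(daily_scores)))


rng = np.random.default_rng(1)
scores = rng.beta(1, 20, size=100_000)       # one synthetic day of risk scores
print(threshold_for_capacity(scores, 500))   # more analyst capacity -> lower threshold
```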

AI gives us the advantage, but we must not drop the ball 

As a result of these approaches, we’re starting to see the immense power of AI to detect and prevent fraud. Technologies are learning ever faster to identify the anomalies in a person’s normal banking behavior that could indicate scams, meaning our defenses only grow stronger over time.

Lawmakers and the financial services industry must not underestimate criminals. We must stay vigilant and anticipate how they could exploit these advanced models for their own gain. It’s important that regulators and model providers recognize these threats and continuously innovate to keep up. 

One thing is for certain: the opportunity that collaboration across the ecosystem presents – Artificial Intelligence trained through banks, payment providers, technologists, governments, and citizens themselves reporting scams – is a recipe for success.