
Machines versus machines

Ananth Gundabattula at Darwinium explains how cloud-powered AI could revolutionise digital fraud


The UK has a fraud problem, and it is increasingly being fuelled by AI. According to one estimate, Brits lost as much as £4 billion to scammers in 2022.

Things are set to get worse still, as cyber-criminals tap the power of machine learning to outwit technology used by organisations to spot suspicious behaviour.

A great leap forward

A rapid acceleration in the pace of technology innovation over the past few years has benefitted our society and economy immeasurably. Much of this is built on cloud computing, which provides reasonably priced, on-demand compute power, enabling organisations to innovate at scale while streamlining their operations and enhancing business agility.

But while the cloud has lowered the barrier to entry for legitimate users, it has done the same for cyber-criminals. Nefarious individuals use cloud infrastructure every day to scale their operations anonymously.

The next wave of innovation in fraud will come from cloud-powered AI – or more correctly, machine learning (ML). Leveraging the power of the cloud, new malign ML models offer the prospect of automating tasks that only humans could perform a few years ago. That’s bad news for us all. 


Outwitting the machines

The problem comes when ML models are used to circumvent the defences companies have built to spot obvious fraud.

Consider a typical fraud mitigation system in a retail setting. There may be a rule whereby transactions over £900 in certain geolocations are automatically flagged for secondary verification. An ML tool could be programmed to work out, through trial and error, the point at which high-value transactions are inspected. The adversary then need only keep their fraudulent payments under £900, in the right geolocation, to avoid detection.
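
To illustrate, here is a minimal sketch of how such trial-and-error probing might look. The `submit_transaction` function is a hypothetical stand-in for an attacker repeatedly testing a target checkout flow; a simple binary search homes in on the hidden threshold in a handful of attempts.

```python
# Illustrative sketch only: binary search for a hidden review threshold.
# submit_transaction() is a hypothetical stand-in for probing a target
# checkout flow; it returns True if the payment is flagged for review.

def submit_transaction(amount: float) -> bool:
    HIDDEN_THRESHOLD = 900.0           # unknown to the attacker
    return amount > HIDDEN_THRESHOLD   # flagged for secondary verification

def find_threshold(low: float = 0.0, high: float = 10_000.0,
                   tolerance: float = 1.0) -> float:
    """Narrow down the flagging threshold by trial and error."""
    while high - low > tolerance:
        probe = (low + high) / 2
        if submit_transaction(probe):
            high = probe   # flagged: threshold is at or below the probe
        else:
            low = probe    # not flagged: threshold is above the probe
    return low             # largest amount observed to pass unflagged

print(f"Stay below roughly £{find_threshold():.2f}")  # converges near £900
```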


What was once a time-consuming process becomes a simple matter of cloud-powered analytics.


Even sophisticated ML models can be probed for weaknesses by malicious AI. Because models are increasingly ‘black box’ and must be trained on data from previous attacks, production decisioning can be vulnerable to exploitation when presented with a slightly different scenario. It takes only some targeted trial and improvement for malicious AI to learn those oversights and blind spots.
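
The same trial-and-improvement idea extends to probing a scoring model. The sketch below is purely illustrative: it assumes a hypothetical black-box `fraud_score` endpoint the attacker can query but not inspect, and uses random hill-climbing to nudge a flagged transaction’s features until the score slips under the decision cut-off.

```python
# Illustrative sketch: random hill-climbing against a black-box fraud score.
# fraud_score() is a hypothetical stand-in for a deployed model; scores
# above the cut-off are treated as "blocked".
import random

def fraud_score(features: dict) -> float:
    # Toy model standing in for the real black box.
    return min(1.0, features["amount"] / 1000 + 0.3 * features["new_device"])

def evade(features: dict, cutoff: float = 0.5, steps: int = 200) -> dict:
    best = dict(features)
    for _ in range(steps):
        candidate = dict(best)
        candidate["amount"] *= random.uniform(0.9, 1.0)   # small perturbation
        if random.random() < 0.1:
            candidate["new_device"] = 0                   # flip a categorical
        if fraud_score(candidate) < fraud_score(best):
            best = candidate                              # keep the improvement
        if fraud_score(best) < cutoff:
            break
    return best

blocked = {"amount": 950.0, "new_device": 1}
print(evade(blocked))  # features tweaked until the score drops below cut-off
```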


That’s not all. AI could also generate fake but convincing images of a user’s face, potentially allowing a transaction to proceed because the checking system assumes it is a genuine photo of a new user. Or it could be trained on video and audio in the public domain (such as clips posted to social media) to impersonate legitimate customers in authentication checks.


Similarly, AI could be trained to mimic human behaviour, such as mouse movements, to outwit machines designed to spot signs of non-human activity in various transactions. It could even generate different combinations of stolen data to bypass validation checks – a compute-intensive task that the public cloud makes straightforward.
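
To give a flavour of how behavioural mimicry works, the sketch below (an illustration, not any particular tool) generates a mouse path along a curved line with positional and timing jitter – the kind of trajectory that looks far more human than a bot’s straight, constant-speed sweep.

```python
# Illustrative sketch: a human-looking mouse path via a quadratic Bezier
# curve with jitter. Real detection systems weigh many more signals; this
# only shows why naive straight-line bots stand out.
import random

def human_like_path(start, end, steps=50):
    # A random control point bends the path; human hands rarely move straight.
    ctrl = ((start[0] + end[0]) / 2 + random.uniform(-100, 100),
            (start[1] + end[1]) / 2 + random.uniform(-100, 100))
    points = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * start[0] + 2 * (1 - t) * t * ctrl[0] + t ** 2 * end[0]
        y = (1 - t) ** 2 * start[1] + 2 * (1 - t) * t * ctrl[1] + t ** 2 * end[1]
        x += random.gauss(0, 1.5)            # hand tremor
        y += random.gauss(0, 1.5)
        dt = random.uniform(0.008, 0.025)    # uneven inter-event timing
        points.append((round(x), round(y), dt))
    return points

for x, y, dt in human_like_path((10, 10), (640, 400))[:5]:
    print(x, y, f"{dt:.3f}s")
```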


What happens next?

Fraudsters often have the advantage. They have the element of surprise and the financial motivation to succeed. Yet fraud and risk teams can counter malicious AI by tweaking their own approaches. AI can be trained by the bad guys to mimic human behaviour more realistically. But if it’s used in automated attacks, it will still need to be deployed like a bot, which can be detected by the right machines.


Businesses could use continuous journey tracking to thwart malicious AI. Because this approach captures intelligence across the entire session/user journey, there’s more opportunity to spot machine-generated anomalies.
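
One way a defender might operationalise this – sketched below with hypothetical feature names – is to roll each session’s events up into journey-level features and score them with an off-the-shelf anomaly detector such as scikit-learn’s IsolationForest.

```python
# Illustrative sketch: scoring whole sessions, not single events, for
# anomalies. Feature names are hypothetical; any journey-level aggregates
# would do.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one session: [pages visited, mean seconds per page,
# mouse-movement entropy, failed logins, amount attempted]
historical_sessions = np.array([
    [12, 34.0, 0.82, 0, 120.0],
    [ 9, 41.5, 0.77, 1,  60.0],
    [15, 28.0, 0.90, 0, 250.0],
    [11, 37.2, 0.85, 0,  80.0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(historical_sessions)

# A scripted journey: few pages, uniform timing, low movement entropy.
suspect = np.array([[3, 2.1, 0.05, 0, 890.0]])
print(detector.predict(suspect))  # -1 would flag the session as anomalous
```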


Flexible signal generation can also be a powerful tool in a security engineer’s arsenal. It could be used in the examples above to trigger image analysis as soon as an image is uploaded. Or to compare mouse movements across non-financial transaction pages with those where a financial transaction is being initiated.
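
As an illustration of the idea (the event names and handlers below are hypothetical, not any vendor’s API), a lightweight signal registry can bind checks to journey events, so an image-analysis signal fires on upload and a behaviour-comparison signal fires when a payment begins.

```python
# Illustrative sketch: a tiny event-driven signal registry. Event names and
# handler logic are hypothetical; the point is that signals can attach to
# any step of the journey rather than one fixed checkpoint.
from collections import defaultdict

handlers = defaultdict(list)

def on(event):
    def register(fn):
        handlers[event].append(fn)
        return fn
    return register

def emit(event, payload):
    return [fn(payload) for fn in handlers[event]]

@on("image_uploaded")
def analyse_image(payload):
    # Stand-in for a deepfake/liveness check on the uploaded image.
    return {"signal": "image_risk",
            "value": 0.9 if payload["synthetic_artifacts"] else 0.1}

@on("payment_initiated")
def compare_mouse_behaviour(payload):
    # Compare movement on payment pages with the rest of the session.
    drift = abs(payload["payment_entropy"] - payload["session_entropy"])
    return {"signal": "behaviour_drift", "value": drift}

print(emit("image_uploaded", {"synthetic_artifacts": True}))
print(emit("payment_initiated", {"payment_entropy": 0.1,
                                 "session_entropy": 0.8}))
```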


The bottom line: we are just at the start of a new arms race in cyber-security and fraud mitigation. Settle in for a bumpy ride.


Ananth Gundabattula is co-founder of Darwinium, a next-generation fraud platform and the world’s first customer protection platform, which helps businesses understand trust and risk across full digital journeys, not simply at point-in-time interactions.


Main image courtesy of iStockPhoto.com
