Business Reporter

Dark web confessions: the AI arms race in fraud  

Aviram Ganor at Riskified describes how generative AI tools are making it easier for even novice fraudsters to place fraudulent online orders

 

Retailers are being targeted by fraudsters in ways they may not even realise, and much more often than they know.  

 

Thriving communities exist all over the Dark Web and across forums like Reddit and Telegram where participants share advice, guidance, and even fraud-as-a-service kits aimed at exploiting merchants and their terms of service.  

 

It’s never been easier to become a fraudster. While online forums have existed for a long time, the real game changer in recent months has been the explosion in generative AI. Tools like WormGPT – the malicious cousin of ChatGPT – are broadening access to fraudulent practices, making it significantly easier and faster for individuals to defraud retailers.  

 

There’s a new technology arms race at play – and merchants are in trouble.  

 

Infiltrating these online forums reveals some shocking ways AI and fraudster networks are being used to exploit merchants. Here’s what’s happening, how it’s rapidly scaling, and how AI can help curtail the bad behaviour.  

 

Fraud forum secrets 

Digging into community chats and forums online reveals alarming tactics, with fraudsters openly sharing what works best to exploit retailers. There’s clear enjoyment in trading the most effective ways to take advantage of merchants and claim funds.

 

Actors describe themselves as “modern-day Robin Hoods” or reassure others that “big corporations have it coming.” When they find a great new win, they “can’t gatekeep this,” and in some cases detailed guides identifying specific merchant weak spots are shared and sold, so others can easily replicate the abuse and fraud.

 

Transaction data shows that when a merchant’s vulnerability is exposed, the community swarms on it, creating a spike in attacks. 

 

When reliable fraud and abuse methods stop working, the community troubleshoots together too. For example, if a refund request or a false Item Not Received (INR) claim is rejected, forum members generally recommend a chargeback with the bank as a backup.

 

But chargebacks are also championed as a powerful tool to be used sparingly, given most banks only allow a handful of claims a year. As a result, bad actors recommend reserving them for high-value transactions rather than low-value orders: “Don’t waste a chargeback on $100.”

 

From fraudster novice to fraud rings  

Forums are a clear gateway for novices to become serial fraudsters. A newbie experimenter becomes an opportunist scammer until the money becomes so alluring that they set up professionally or join an organised ring.  

 

Setting up a simple but profitable fraud business can be as easy as 1, 2, 3…  

  1. Search the Dark Web to buy stolen credit card details. These are easy to come by and relatively low cost.  
  2. Next, open up WormGPT and prompt it to code a bot to buy the hottest trainers right as they drop. Like ChatGPT, users can have an easy back-and-forth dialogue until the code is just right.  
  3. Finally, open up a forum like Telegram and start selling.   

Online communities are being supercharged by AI and other technologies, as the above example shows. The rise of fraud over the last ten years can even be partly measured by the growing user base of common fraudster tech: an estimated $10 billion in Bitcoin is traded on the Dark Web; Telegram has over 196 million daily active users (DAU); Tor Browser has around 2 million; and WormGPT, barely two years old, already has over 5,000 paying subscribers.  

 

AI is catching fraud before it is committed  

The good news is that while fraudsters level up their sophistication with AI, merchants employing AI for fraud prevention give themselves the best chance of neutralising threats.  

 

AI-based fraud detection models have been transformative. Rather than relying on human-generated rules to determine which transactions should be declined and which are legitimate, the algorithms adjust and evolve constantly with fraud trends. “Normal” shopper activity is calculated by looking at patterns of behaviour in the merchant data, and the algorithm learns to detect anomalies and suspicious behaviour without the need to set strict rules beforehand. This means when patterns of risk start appearing, merchants can protect themselves with real-time insights.  
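To make the idea concrete, here is a minimal, purely illustrative sketch (not any vendor’s actual model): it learns what a “normal” order value looks like from a merchant’s own history and flags statistical outliers, with no hand-written rules. The history values and the three-standard-deviation threshold are invented for the example; real systems use many behavioural features, not a single number.

```python
from statistics import mean, stdev

def build_detector(historical_order_values, threshold=3.0):
    """Learn 'normal' from a merchant's own order history."""
    mu = mean(historical_order_values)
    sigma = stdev(historical_order_values)

    def is_suspicious(order_value):
        # Flag orders more than `threshold` standard deviations from normal.
        return abs(order_value - mu) > threshold * sigma

    return is_suspicious

# Hypothetical order history for one merchant
history = [42.0, 55.5, 38.2, 61.0, 47.9, 52.3, 44.1, 58.7]
check = build_detector(history)
print(check(50.0))    # typical order -> False
print(check(2500.0))  # extreme outlier -> True
```

The point of the sketch is the shape of the approach: the boundary between “normal” and “suspicious” is derived from the data itself, so it moves as shopper behaviour moves, rather than being fixed in a rulebook.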

 

Risk management is also starting to benefit from collective network data sets. This means algorithms can use large data sets, from multiple retailers and millions of transactions, to identify risk trends and stop bad actors before they can reach more unsuspecting retailers.  
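A toy sketch of the collective-network idea, with all names invented: once one merchant on the network confirms fraud against an identity (a card, email, or device fingerprint), every other merchant can treat that identity as high-risk before it ever reaches them.

```python
from collections import defaultdict

class FraudNetwork:
    """Illustrative shared risk pool across merchants (not a real API)."""

    def __init__(self):
        # identity -> set of merchants that reported confirmed fraud
        self.fraud_reports = defaultdict(set)

    def report_fraud(self, merchant, identity):
        self.fraud_reports[identity].add(merchant)

    def is_known_bad_actor(self, identity):
        # Risky if any merchant in the network has confirmed fraud for it.
        return len(self.fraud_reports[identity]) > 0

network = FraudNetwork()
network.report_fraud("shoe-store", "card-4242")
print(network.is_known_bad_actor("card-4242"))  # True
print(network.is_known_bad_actor("card-1111"))  # False
```

In practice the shared signal would feed a model as one feature among many rather than acting as a hard block, but the sketch shows why pooled data stops a bad actor after their first confirmed hit anywhere on the network.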

 

Catching grey areas with AI  

Policy abuse is a rising trend, given extra fuel by this Dark Web community and AI tools. But it’s also a highly complex area for merchants to tackle, as it is often perpetrated by historically ‘good’ customers. Return fraud is one of the most common forms – for example, a customer wears an item once and returns it (known as “wardrobing”), or even replaces the original item with an empty box or a substitute that weighs the same.

 

AI models are helping merchants become more dynamic in this area. By running risk assessments in real time, merchants can set more agile policies that adapt to the individual. For example, a loyal customer may be rewarded with free and flexible returns, while a customer who has behaved suspiciously or has a bad track record may be asked to pay a fee.
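As a rough illustration, a dynamic returns policy might map a model-produced risk score and purchase history to different treatment at checkout. The scores and thresholds below are hypothetical, not any real provider’s policy:

```python
def returns_policy(risk_score, lifetime_orders):
    """Map a shopper's risk profile to a returns policy (illustrative only)."""
    if risk_score < 0.2 and lifetime_orders >= 10:
        return "free_flexible_returns"   # reward loyal, low-risk customers
    if risk_score > 0.8:
        return "returns_fee_required"    # suspicious track record pays a fee
    return "standard_returns"            # everyone else gets the default

print(returns_policy(0.05, 25))  # free_flexible_returns
print(returns_policy(0.90, 3))   # returns_fee_required
```

The design choice worth noting is that the policy is computed per customer at decision time, so merchants no longer need one blanket returns rule that either punishes good shoppers or leaves the door open to abusers.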

 

Fighting fire with fire 

Bad actors will always be out to get retailers. The gossiping on Dark Web forums will never be shut down, and new technologies will continue to supercharge their efforts.  

 

Retailers need to be on the front foot. And in the AI-powered threat landscape, AI is the only way to fight back. Those retailers that are being proactive and harnessing AI are starting to catch bad actors before they’ve caused damage, all without making ‘good’ consumers pay for fraud with a bad user experience or tight returns and refunds policies.  

 


 

Aviram Ganor is General Manager EMEA at Riskified

 

Main image courtesy of iStockPhoto.com and ArtistGNDphotography


© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543