Mike Kiser at SailPoint describes how cyber-criminals are cashing in on the cost-of-living crisis by developing new scams around synthetic recruitment
Cyber-criminals are notorious for their brutal methods of attack.
Unlike the ‘regular’ crook or swindler, cyber-criminals are renowned for playing on our human vulnerabilities and using our personal relationships against us. Whether by sending a phishing email posing as a customer service agent or by impersonating executives such as CEOs on social media, today’s cyber-criminals are increasingly exploiting trusted relationships to gain access to sensitive information.
With deception and impersonation high on the fraud agenda, it comes as no surprise that fraudsters are now cashing in on the cost-of-living crisis. Amid the looming recession, with significant job losses announced across industries, they are moving quickly to take advantage.
Their latest method of attack is the recruitment impersonation scam: playing on the financial vulnerabilities of anxious job seekers, fraudsters offer false promises of higher income and job security in the hope that victims hand over their personal information.
With such tactics becoming far more sophisticated and harder to spot, let’s explore the ways job seekers and businesses can detect and protect against impersonation scams.
The rise in synthetic recruiting
In Q3 last year, JobsAware, a service that provides free help to UK victims, reported a 35% year-on-year increase in job scams. With a higher number of people now searching for job opportunities or being more inclined to switch jobs if it boosts income, scammers are cashing in. Often, their approach is to use fake recruitment processes to extract personal information and gain access to sensitive data.
Sophisticated techniques can be used to create a synthetic recruiting experience: fake adverts, application processes, and interviews are all becoming more targeted and convincing.
Now, with the boom in conversational AI, such as ChatGPT, it is even easier for criminals to mock up recruitment materials packed with relevant, convincing and tonally accurate detail. These techniques are a powerful combination when paired with the rise in gig and remote working: people increasingly find work through apps with little human interaction, and the associated threat is heightened.
However, scammers also rely on less technologically advanced tactics, such as the wholesale ripping off of legitimate job adverts – copy-and-paste jobs that lead job seekers to malicious links. Trusted platforms like LinkedIn are seeing a surge in fabricated job ads and profiles, with nearly 22 million fake accounts blocked by LinkedIn between January and June last year alone.
Senior employees are the preferred targets for this type of scam. Sending bulk messages to numerous social media profiles or via text continues to be a less sophisticated entry point for scammers to request further personal information or induce clicks on malicious links.
From Aviva to PwC, UK companies are warning job seekers to be wary of fake online recruiters purporting to represent real opportunities. With reputations on the line, companies are keen to counter scammers’ efforts. However, beyond the reputational issues, a successful approach to an individual can often open a back door for criminals to access organisational data and systems.
Over four-fifths (84%) of organisations have experienced an identity-related breach, and now, job scams are opening another avenue for hackers to capitalise on weak identity points. With most enterprises housing thousands to millions of identities, the opportunity for scammers to infiltrate these identities via job scams is increasing exponentially.
How, then, can organisations protect against the fallout of fake recruiters targeting weak identity defences?
Organisations’ role in education
As with most scams and hacks, the first line of defence is people. Beating scammers at their own game is a team sport: employees must be educated and supported to recognise the subtle signs of malicious communications, maintain a healthy level of scepticism and take steps to verify sources.
Employers can help by maintaining consistent communication methods and clarifying what sort of communications employees should expect. When a message raises suspicion, there should be a straightforward way for employees to raise the red flag, and processes in place to manage the risk and communicate it to other employees.
Beyond employee communications, businesses should set expectations for their interactions with people inside and outside the enterprise and maintain authenticity and veracity in all their communications.
With interaction on various social platforms now commonplace, education is even more important. Companies should publicise clearly how and where they communicate job opportunities and maintain consistency in this messaging across all their channels.
Where scammers do break through and obtain stolen employee credentials, ensuring security through proper identity safeguards is vital. As the number of identities in today’s enterprise environments rises rapidly, organisations must shift from manual, human-driven operations to newer, innovative approaches that can keep pace with a rapidly evolving environment.
Using AI as a frontline defence
Identity security technologies powered by AI and machine learning are an essential element of defence against malicious intruders. Such tools not only control the access that humans and non-human identities have to systems, but they can also spot risky user behaviours, detecting and preventing toxic access combinations that could lead to breaches and data theft.
AI-enabled identity security is key to providing organisations with centralised visibility, allowing them to see, manage, control and secure every variation of identity – knowing who has access to what, and why, across their entire network.
However, while this technology fortifies organisational defences and is a core component of the security ecosystem, it shouldn’t stand alone – human defence is just as vital.
Blending AI models with human education and awareness is key in maintaining strong security hygiene and providing a robust approach to identity security.
When human defenders stand alongside AI-enabled security tools, closing the gates to scammers becomes far easier work.
Mike Kiser is Director of Strategy and Standards at SailPoint
Main image courtesy of iStockPhoto.com
© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543