Google Gemini, an AI assistant, aims to boost productivity but poses security risks if exploited maliciously.
Attackers can manipulate Gemini via social engineering, tricking users into revealing confidential information such as MFA codes through deceptive emails.
This proof-of-concept attack involves prompting Gemini to extract and display sensitive data disguised as random text. Despite Google’s security measures against automated exfiltration, user caution is essential.
To mitigate risks, organisations should treat LLMs as untrusted entities, implement strict policies, use filtering technologies and validate LLM outputs.
© 2024, Lyonsdown Limited. Business Reporter® is a registered trademark of Lyonsdown Ltd. VAT registration number: 830519543