SCAM GPT: GenAI and the Automation of Fraud
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.

Generative AI tools such as ChatGPT and similar large language models are being leveraged by bad actors to create convincing scams at scale, including phishing emails, deepfake content, and social engineering attacks.
The report highlights the democratization of fraud through AI, which lowers skill and language barriers for cybercriminals, and discusses the technical, legal, and policy challenges of detection and prevention. Case studies and examples demonstrate the speed and adaptability of AI-enabled scams. The report concludes with recommendations for governments, technology companies, and financial institutions, including improved AI detection tools, industry collaboration, public awareness campaigns, and proactive regulation to mitigate these risks.