Combating Identity Fraud in Government Benefits Programs: Government Agencies Tackling Identity Fraud Should Look to Cybersecurity Methods, Avoid AI-Driven Approaches that Can Penalize Real Applicants
This article advises government agencies to prioritize cybersecurity methods over AI-driven approaches when combating identity fraud in benefits programs, highlighting potential risks that automated systems pose to legitimate applicants.

In response to increased fraudulent applications for unemployment benefits, many states have adopted automated systems, such as facial recognition technology, to verify applicant identities.
However, these AI-driven methods often exhibit racial and gender biases, require access to up-to-date technology, and can be difficult for less tech-savvy users to navigate, leading to wrongful denials of legitimate claims. The article cites examples such as Michigan's MiDAS system, which erroneously flagged tens of thousands of applications as fraudulent, causing significant harm to applicants. CDT recommends that agencies instead focus on robust cybersecurity practices, which can detect and prevent large-scale, organized fraud without compromising access for legitimate users.