An in-depth report that examines how states use automated eligibility algorithms for home and community-based services (HCBS) under Medicaid and assesses their implications for access and fairness.
This article explores how legal documents can be treated as software programs, applying methods such as software testing and mutation analysis to enhance AI-driven statutory analysis, aiding legal decision-making and error detection.
This report documents four experiments exploring whether AI can be used to expedite the translation of SNAP and Medicaid policies into software code for implementation in public benefits eligibility and enrollment systems under a Rules as Code approach.
This report outlines best practices for developing transparent, accessible, and standardized public sector AI use case inventories across federal, state, and local governments.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
NYC's My File NYC and New Jersey's unemployment insurance system improvements demonstrate how successful digital innovations can be scaled across various programs, leveraging trust-building, open-source technology, and strategic partnerships.
A report that defines what effective “human oversight” of AI looks like in public benefits delivery and offers practical guidance for ensuring accountability, equity, and trust in algorithmic systems.
This is the summary version of a report that documents four experiments exploring whether AI can be used to expedite the translation of SNAP and Medicaid policies into software code for implementation in public benefits eligibility and enrollment systems under a Rules as Code approach.
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
This blog post shares findings from the February 2025 AI Trust Study on Canada.ca, revealing how Canadians perceive government AI and what builds trust.
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.