NYC's My File NYC and improvements to New Jersey's unemployment insurance system demonstrate how successful digital innovations can be scaled across programs by building trust, using open-source technology, and forming strategic partnerships.
This article explores how legal documents can be treated like software programs, applying techniques such as software testing and mutation analysis to strengthen AI-driven statutory analysis, support legal decision-making, and detect errors.
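As a rough illustration of the mutation-analysis idea mentioned above (a sketch only, not code from the article), the Python snippet below encodes a hypothetical eligibility rule as a function, introduces a "mutant" with a subtly altered comparison, and checks whether a small test suite can tell the two apart. The rule, threshold, and test values are invented for illustration.

```python
# Hypothetical sketch: a statutory rule encoded as code, plus a mutant with a
# subtly changed comparison, to show how mutation analysis checks whether an
# analysis (here, a test suite) notices small changes in legal meaning.

def eligible(income: float, household_size: int) -> bool:
    """Original rule: eligible if per-person income is strictly below $2,000 (invented)."""
    return income / household_size < 2000.0

def eligible_mutant(income: float, household_size: int) -> bool:
    """Mutant rule: '<' mutated to '<=', a small but legally meaningful change."""
    return income / household_size <= 2000.0

# Test cases stand in for an analysis of the statute's behavior.
TESTS = [
    (1999.0, 1, True),   # just under the threshold
    (2001.0, 1, False),  # just over the threshold
    (2000.0, 1, False),  # boundary case that can "kill" the <= mutant
]

def mutant_killed(mutant) -> bool:
    """A mutant is 'killed' if at least one test distinguishes it from the expected outcome."""
    return any(mutant(income, size) != expected for income, size, expected in TESTS)

if __name__ == "__main__":
    print("Mutant detected by tests:", mutant_killed(eligible_mutant))  # True: the boundary case catches it
```

If every test passed on the mutant, the mutation would "survive," signaling that the analysis cannot distinguish the altered rule from the original.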
This blog post shares findings from the February 2025 AI Trust Study on Canada.ca, revealing how Canadians perceive government AI and what builds trust.
An in-depth report that examines how states use automated eligibility algorithms for home and community-based services (HCBS) under Medicaid and assesses their implications for access and fairness.
This post introduces EPIC's exploration of actionable recommendations and points of agreement from leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
A report that defines what effective “human oversight” of AI looks like in public benefits delivery and offers practical guidance for ensuring accountability, equity, and trust in algorithmic systems.
This report outlines best practices for developing transparent, accessible, and standardized public sector AI use case inventories across federal, state, and local governments.
This is the summary version of a report documenting four experiments that explore whether AI can expedite the translation of SNAP and Medicaid policies into software code for use in public benefits eligibility and enrollment systems under a Rules as Code approach.
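To make the Rules as Code idea concrete, here is a minimal, hypothetical sketch of a benefits eligibility provision expressed as executable code. The thresholds, fields, and function names are invented for illustration and do not reflect actual SNAP or Medicaid policy or the report's experiments.

```python
# Minimal, hypothetical Rules-as-Code sketch: one policy provision written as
# a small, testable function. All values are placeholders, not real policy.
from dataclasses import dataclass

@dataclass
class Household:
    size: int
    gross_monthly_income: float  # USD

# Invented gross-income limits by household size (placeholder values).
GROSS_INCOME_LIMITS = {1: 1632.0, 2: 2215.0, 3: 2798.0, 4: 3380.0}

def gross_income_test(h: Household) -> bool:
    """Return True if the household passes the (invented) gross income test."""
    limit = GROSS_INCOME_LIMITS.get(h.size)
    if limit is None:
        # Real policy would add a per-person increment for larger households;
        # that detail is omitted from this sketch.
        raise ValueError(f"No limit defined for household size {h.size}")
    return h.gross_monthly_income <= limit

if __name__ == "__main__":
    applicant = Household(size=2, gross_monthly_income=2100.0)
    print("Passes gross income test:", gross_income_test(applicant))  # True with these placeholder values
```

Encoding each provision as a discrete function like this is what makes the translated rules testable against policy documents, which is the property the report's experiments probe.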
This framework outlines USDA’s principles and approach to support States, localities, Tribes, and territories in responsibly using AI in the implementation and administration of USDA’s nutrition benefits and services. It responds to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
AI resources for public sector professionals on responsible AI use, including a course showcasing real-world applications of generative AI in public sector organizations.
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.