This post introduces EPIC's exploration of actionable recommendations and points of agreement among leading AI frameworks, beginning with the National Institute of Standards and Technology's AI Risk Management Framework.
An in-depth report that examines how states use automated eligibility algorithms for home and community-based services (HCBS) under Medicaid and assesses their implications for access and fairness.
This report outlines best practices for developing transparent, accessible, and standardized public sector AI use case inventories across federal, state, and local governments.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
This playbook provides federal agencies with guidance on implementing AI in a way that is ethical, transparent, and aligned with public trust principles.
U.S. Department of Health and Human Services (HHS)
AI resources for public sector professionals on responsible AI use, including a course showcasing real-world applications of generative AI in public sector organizations.
This is the summary version of a report documenting four experiments that explore whether AI can expedite the translation of SNAP and Medicaid policies into software code for public benefits eligibility and enrollment systems under a Rules as Code approach.
This blog post shares findings from the February 2025 AI Trust Study on Canada.ca, revealing how Canadians perceive government AI and what builds trust.
This report provides an overview of the task force’s work in assessing, guiding, and recommending policies for the safe, ethical, and effective use of generative AI across Alabama’s executive-branch agencies.
State of Alabama Generative Artificial Intelligence (GenAI) Task Force
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activity, and explores evolving tactics, risks, and potential countermeasures.
A report that defines what effective “human oversight” of AI looks like in public benefits delivery and offers practical guidance for ensuring accountability, equity, and trust in algorithmic systems.