The report examines how AI deployments across state and local public administration, such as chatbots, voice transcription, content summarization, and eligibility automation, are reshaping government work.
This resource offers a unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
The research identifies five key obstacles that researchers, activists, and advocates face in opening critical public conversations about AI's relationship with inequity and in advancing needed policies.
This report explores technologies that have the potential to significantly affect employment and job quality in the public sector, the factors that drive choices about which technologies are adopted and how they are implemented, how technology will change the experience of public sector work, and what kinds of interventions can protect against potential downsides of technology use in the public sector. The report categorizes technologies into five overlapping categories: manual task automation, process automation, automated decision-making systems, integrated data systems, and electronic monitoring.
NIST has created a voluntary AI risk management framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people's eligibility for, or the distribution of, public benefits. It draws key insights from the cases about what went wrong and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
In accordance with Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Federal agencies began publishing their first annual inventories of artificial intelligence (AI) use cases in June 2022.
This award documentation from the National Association of State Chief Information Officers (NASCIO) explains how agencies in Ohio used automation to support administration of public benefits programs.
National Association of State Chief Information Officers (NASCIO)
The Guide to Robotic Process Automation, including the RPA Playbook, provides detailed guidance for federal agencies starting a new RPA program or evolving an existing one.
This webinar describes Nava's partnership with the Gates Foundation and the Benefits Data Trust, which seeks to determine whether generative and predictive AI can be used ethically to help reduce administrative burdens for benefits navigators.