Issued April 3, 2025, this memo provides federal agencies with government-wide guidance for accelerating AI adoption through innovation, governance, and public trust.
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
This case study details the development of a document extraction prototype to streamline benefits application processing through automated data capture and classification.
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)
This paper explores how legacy procurement processes in U.S. cities shape the acquisition and governance of AI tools, based on interviews with local government employees.
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
This blog post shares findings from the February 2025 AI Trust Study on Canada.ca, revealing how Canadians perceive government AI and what builds trust.
This cross-sectoral profile of the AI Risk Management Framework focuses on Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance for organizations to govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)
The report examines how AI deployments across state and local public administration, such as chatbots, voice transcription, content summarization, and eligibility automation, are reshaping government work.