This handbook provides local governments with practical guidelines, best practices, and ethical considerations for adopting and using AI tools, emphasizing transparency, human oversight, and risk management.
The "Public Sector AI Playbook" provides public sector officers with practical guidance on adopting and implementing AI technologies to improve government operations, service delivery, and policymaking.
This article examines how Chile’s SUSESO is balancing cost-focused procurement criteria with ethical AI concerns in its medical claims automation process.
The strategic plan outlines intentions to responsibly leverage artificial intelligence (AI) to enhance health, human services, and public health by promoting innovation, ethical use, and equitable access across various sectors, while managing associated risks.
U.S. Department of Health and Human Services (HHS)
The Ethical Artificial Intelligence (AI) Policy of the City of Tempe establishes principles and governance structures to ensure the responsible, fair, and transparent use of AI in municipal operations.
This Guide to Artificial Intelligence provides a strategic framework for the ethical and responsible implementation of generative AI (GenAI) technologies in state operations.
Colorado Governor's Office of Information Technology (OIT)
Issued March 28, 2024, this memorandum establishes new agency requirements and guidance for AI governance, innovation, and risk management, including specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
Issued April 3, 2025, this memorandum provides federal agencies with government-wide guidance for accelerating AI adoption through innovation, governance, and public trust.
This report offers a detailed assessment of how AI and emerging technologies could impact the Social Security Administration’s disability benefits determinations, recommending guardrails and principles to protect applicant rights, mitigate bias, and promote fairness.
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)