On December 5, 2022, an expert panel, including representatives from the White House, unpacked what is included in the AI Bill of Rights and explored how consumers, developers, and other stakeholders designing and implementing automated decision systems can operationalize its guidance.
This framework provides voluntary guidance to help employers use AI hiring technology in ways that are inclusive of people with disabilities, while aligning with federal risk management standards.
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
This framework provides practical guidance, detailed reference designs, and example solutions to help organizations securely adopt and operationalize Zero Trust principles across diverse IT environments.
National Institute of Standards and Technology (NIST)
This framework outlines USDA’s principles and approach to supporting States, localities, Tribes, and territories in responsibly using AI to implement and administer USDA’s nutrition benefits and services. It responds to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
Issued on March 28, 2024, this memorandum establishes new agency requirements and guidance for AI governance, innovation, and risk management, including specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
This guide helps UK public bodies understand how to responsibly procure, develop, and use AI while meeting their legal duties to prevent discrimination and promote equality under the Public Sector Equality Duty (PSED).
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)