The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles apply whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
The OECD AI Principles promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were adopted in 2019; this webpage provides an overview of the principles and key terms.
Organisation for Economic Co-operation and Development (OECD)
NIST has created a voluntary AI risk management framework, in partnership with public and private sectors, to promote trustworthy AI development and usage.
National Institute of Standards and Technology (NIST)
In accordance with Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Federal agencies began publishing their first annual inventories of artificial intelligence (AI) use cases in June 2022.
The report examines how AI deployments across state and local public administration, such as chatbots, voice transcription, content summarization, and eligibility automation, are reshaping government work.
Recent studies demonstrate that machine learning algorithms can discriminate on the basis of protected classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and datasets.