This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles apply whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
This article examines how Chile’s SUSESO (Superintendencia de Seguridad Social) is balancing cost-focused procurement criteria with ethical AI concerns as it automates its medical claims process.
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
NIST has created a voluntary AI Risk Management Framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This UN report warns of the risks posed by digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and the privatization of public services.
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.
National Institute of Standards and Technology (NIST)
This report analyzes the growing use of generative AI, particularly large language models, to enable and scale fraud, exploring evolving tactics, risks, and potential countermeasures.