The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The principles apply whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs.
This UN report warns of the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
This study examines public attitudes toward balancing equity and efficiency in algorithmic resource allocation, using online advertising for SNAP enrollment as a case study.
This guidance outlines how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues, and investigates the common challenges these bodies face.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors explore concerns that can make the use of predictive optimization illegitimate and challenge claims about its accuracy, efficiency, and fairness.
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on a case study from Germany.
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.