This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues, and the common challenges these bodies face.
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This paper explores how legacy procurement processes in U.S. cities shape the acquisition and governance of AI tools, based on interviews with local government employees.
A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
This guidebook introduces the risks of discrimination that arise when using automated decision-making systems, and includes helpful definitions related to automation.
NIST has created a voluntary AI risk management framework, in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This paper argues that a human rights framework could reorient artificial intelligence research away from machines and the risks of their biases, and towards humans and the risks to their rights. This framing centers the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.