Topic: Mitigating Harm + Bias
-
A Human Rights-Based Approach to Responsible AI
This paper argues that a human rights framework can reorient AI research away from machines and the risks of their biases and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
-
Blueprint for an AI Bill of Rights
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
-
Screened & Scored in the District of Columbia
This report by EPIC investigates how automated decision-making (ADM) systems are used across Washington, D.C.’s public services and the resulting impacts on equity, privacy, and access to benefits.
-
Guidance for Inclusive AI
This resource helps individuals align their work with the needs of the communities they wish to serve, while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.
-
Governing Digital Legal Systems: Insights on Artificial Intelligence and Rules as Code
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
-
NIST AI Risk Management Framework (RMF 1.0)
NIST has created a voluntary AI risk management framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
-
Disability, Bias, and AI
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
-
Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
This paper explores design considerations and ethical tensions related to auditing of commercial facial processing technology.
-
I Am Not a Number
In early 2023, Wired magazine ran a four-part series on the use of algorithms to detect fraud in public benefits programs and the harms that can result, examining cases from Europe in depth.
-
AI Toolkit
Guidance and resources for policymakers, teachers, and parents to advance AI readiness in Ohio schools.
-
Digital Welfare States and Human Rights
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
-
NIST AI Risk Management Framework Playbook
The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework to map, measure, manage, and govern AI risks effectively.