Topic: Mitigating Harm + Bias
-
Automation + AI | Draft Guidelines for Participatory and Inclusive AI
This working draft resource helps individuals align their work with the needs of the communities they wish to serve, while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.
-
Digitizing Policy + Rules as Code | Governing Digital Legal Systems: Insights on Artificial Intelligence and Rules as Code
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
-
Automation + AI | AI Technologies Today at BenCon 2024
Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both positive and negative applications, including authentication, adversarial attacks, deepfakes, generative models, personalization, and ethical concerns.
-
Automation + AI | Unpacking How Long-Standing Civil Rights Protections Apply to Emerging Technologies like AI at BenCon 2024
A panel of experts discusses how civil rights protections apply to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
-
Automation + AI | Enabling Principles for AI Governance
A report from the Center for Security and Emerging Technology (CSET) on enabling principles for artificial intelligence (AI) governance.
-
Automation + AI | AI Toolkit
Guidance and resources for policymakers, teachers, and parents to advance AI readiness in Ohio schools.
-
Automation + AI | Disability, Bias, and AI
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
-
Automation + AI | The Privacy-Bias Trade-Off
Safeguarding privacy and addressing algorithmic bias can pose an under-recognized trade-off. This brief documents that trade-off by examining the U.S. government's recent efforts to introduce government-wide equity assessments of federal programs. The authors propose a range of policy solutions to help agencies navigate it.
-
Automation + AI | Looking before we leap: Exploring AI and data science ethics review process
This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues, and investigates the common challenges these bodies face.
-
Digital Identity | Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
This paper explores design considerations and ethical tensions related to the auditing of commercial facial processing technology.
-
Automation + AI | A Human Rights-Based Approach to Responsible AI
This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
-
Automation + AI | The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
An emerging concern in algorithmic fairness is the tension with privacy interests. Data minimization can restrict access to protected attributes, such as race and ethnicity, for bias assessment and mitigation. This paper examines how this “privacy-bias tradeoff” has become an important battleground for fairness assessments in the U.S. government and provides rich lessons for resolving these tradeoffs.