This paper argues that a human rights framework could help orient research on artificial intelligence away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. This paper presents an approach to evaluating the bias present in automated facial analysis algorithms and datasets.
This paper introduces a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, intended to be applied throughout an organization's internal development lifecycle.