Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both positive and negative applications across topics such as authentication, adversarial attacks, deepfakes, generative models, personalization, and ethical concerns.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This guide, directed at poverty lawyers, explains automated decision-making systems so lawyers and advocates can better identify the source of their clients' problems and advocate on their behalf. The report covers key questions practitioners face when confronting these systems.
A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.
A comprehensive series of workshops and courses designed to equip public sector professionals with the knowledge and skills to responsibly integrate AI technologies into government operations.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems. It also includes helpful definitions related to automation.
This paper explores how legacy procurement processes in U.S. cities shape the acquisition and governance of AI tools, based on interviews with local government employees.
In early 2023, Wired magazine ran a four-part series examining the use of algorithms to detect fraud in public benefits programs and the potential harms they cause, drawing in depth on cases from Europe.
This resource helps individuals align their work with the needs of the communities they wish to serve, while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.