This report offers a critical framework for designing algorithmic impact assessments (AIAs), drawing lessons from existing impact assessment regimes in areas such as environmental protection, privacy, and human rights to strengthen accountability and reduce algorithmic harms.
This paper argues that a human rights framework could help reorient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights. This reframing centers the conversation on who is harmed, what harms they face, and how those harms might be mitigated.
Government agencies adopting generative AI tools seems inevitable at this point. But there is more than one possible future for how agencies use generative AI to simplify complex government information.
This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
A comprehensive analysis of how government digital service teams document and communicate their impact across federal, state, and local levels. This report aims to identify key reporting trends and practices to help teams develop impact narratives that demonstrate their value to stakeholders.
Guidance on improving how well AI systems can understand digital content. It emphasizes using machine-readable formats and applying clear content design strategies to enhance both AI processing and human accessibility.
beta.gouv.fr, a French government incubator, developed Mes Aides, an online benefits simulator launched in 2014 to help residents assess their eligibility for various social programs and address the problem of unclaimed benefits. Built with open-source technology, the tool let users quickly estimate their potential benefits, but it was folded into a broader platform in 2020 following internal government disputes over authority.
Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both positive and negative applications, including authentication, adversarial attacks, deepfakes, generative models, personalization, and ethical concerns.