Topic: Automation + AI
-
Parking signs and possible futures for LLMs in government
It seems inevitable at this point that government agencies will adopt generative AI tools. But there is more than one possible future for how agencies use generative AI to simplify complex government information.
-
Assembling Accountability: Algorithmic Impact Assessment for the Public Interest
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
-
BenCon 2024 Recap
A recap of the two-day conference focused on charting the course to excellence in digital benefits delivery hosted at Georgetown University and online.
-
AI Technologies Today at BenCon 2024
Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both positive and negative applications, including authentication, adversarial attacks, deepfakes, generative models, personalization, and ethical concerns.
-
The Equitable Tech Horizon in Digital Benefits Panel
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
-
Disability, Bias, and AI
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
-
Nava Labradors at Policy2Code Demo Day at BenCon 2024
The team developed an AI solution to assist benefit navigators with in-the-moment program information. They found that while LLMs are useful for summarizing and interpreting text, they are not well suited to implementing strict formulas like benefit calculations; however, their strengths in general tasks can still accelerate the eligibility process.
-
Controlling Large Language Models: A Primer
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
-
mRelief at Policy2Code Demo Day at BenCon 2024
The team conducted experiments to determine whether clients would be responsive to proactive support offered by a chatbot and to identify the ideal timing of the intervention.
-
Domain Shift and Emerging Questions in Facial Recognition Technology
This policy brief offers recommendations to policymakers on the computational and human sides of facial recognition technologies, based on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
-
Code The Dream at Policy2Code Demo Day at BenCon 2024
The team introduced an AI assistant for benefits navigators to streamline the process and improve outcomes by quickly assessing client eligibility for benefits programs.
-
Unpacking How Long-Standing Civil Rights Protections Apply to Emerging Technologies like AI at BenCon 2024
A panel of experts discusses the application of civil rights protections to emerging AI technologies, highlighting potential harms, the need for inclusive teams, and the importance of avoiding technology-centric solutions to social problems.