The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.
This report reviews global AI governance tools, highlighting their importance in ensuring trustworthy AI, identifying gaps and risks in their effectiveness, and offering recommendations to improve their development, oversight, and integration into policy frameworks.
This report on the use of Generative AI in State government presents an initial analysis of its potential benefits to individuals, communities, government, and State government workers, while also exploring potential risks.
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
This internal glossary defines key terms and concepts related to automating enrollment proofs for public benefits programs to support shared understanding among product and policy teams.
This news release highlights Pennsylvania's first-in-the-nation Generative AI pilot under Governor Shapiro, showcasing its positive impact on state employees and the state's commitment to responsible, ethical AI use.
A recap of the two-day conference, hosted at Georgetown University and online, focused on charting the course to excellence in digital benefits delivery.
This panel discussion from the Academy's 2025 Policy Summit explores the intersection of artificial intelligence (AI) and public benefits, examining how technological advancements are influencing policy decisions and the delivery of social services.
The Digital Benefits Network's Digital Identity Community of Practice held a session to hear considerations from civil rights technologists and human-centered design practitioners on ways to ensure program security while simultaneously promoting equity, enabling accessibility, and minimizing bias.