What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Center for Security and Emerging Technology (CSET)
Building on our February 2022 report Benefit Eligibility Rules as Code: Reducing the Gap Between Policy and Service Delivery for the Safety Net, the Beeck Center’s Digital Benefits Network (DBN) hosted Rules as Code Demo Day on June 28, 2022. The event featured eight demonstrations of projects and code, followed by a collaborative problem-solving session on how to continue advancing rules as code for the U.S. social safety net.
This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
Automated decision systems (ADS) are increasingly used in government decision-making but lack clear definitions, oversight, and accountability mechanisms.
On May 19, 2023, the Digital Benefits Network published a new, open dataset documenting authentication and identity proofing requirements across online SNAP, WIC, TANF, Medicaid, child care (CCAP), and unemployment insurance applications. This page includes data and observations about authentication and identity proofing steps specifically for online unemployment insurance applications.
This piece highlights promising design patterns for account creation and identity proofing in public benefits applications. The publication also identifies areas where additional evidence, resources, and coordinated federal guidance may help support equitable implementations of authentication and identity proofing, enabling agencies to balance access and security.
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
A case study describing how Massachusetts is building long-term public-sector capacity to deliver people-centered digital services by strengthening in-house expertise, shared tools, and agency-embedded support.
A summary of the initial CMS guidance (CMCS informational bulletin) on how states should implement Medicaid work reporting requirements under H.R. 1, clarifying high-level expectations and key technical points.