A report from the State of California presenting an initial analysis of where generative AI (GenAI) may improve access to essential goods and services.
The team developed an application that uses LLM APIs to simplify Medicaid and CHIP applications, addressing limitations such as hallucinations and outdated information through a selective input process that supplies clean, current data.
Sarah Bargal provides an overview of AI, machine learning, and deep learning, illustrating their potential for both beneficial and harmful applications, including authentication, adversarial attacks, deepfakes, generative models, and personalization, as well as related ethical concerns.
The team introduced an AI assistant that helps benefits navigators quickly assess client eligibility for benefits programs, streamlining the navigation process and improving outcomes.
On December 5, 2022, an expert panel including representatives from the White House unpacked what is included in the Blueprint for an AI Bill of Rights and explored how to operationalize its guidance for consumers, developers, and other users designing and implementing automated decisions.
The South Dakota Bureau of Information and Telecommunications (BIT) developed guidelines for the responsible use of AI-generated content in state government agencies, emphasizing the need for proofreading, editing, and fact-checking, and for treating AI-generated content as a starting point, not the finished product.
South Dakota Bureau of Information and Telecommunications
Hear perspectives on topics including centering beneficiaries and workers in new ways, digital service delivery, digital identity, and automation. This video was recorded at the Digital Benefits Conference (BenCon) on June 14, 2023.
In this policy brief and video, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.
This research explores how software engineers work with generative machine learning models. The results highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for intelligent user interfaces that help software engineers work effectively with generative code models.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
Center for Security and Emerging Technology (CSET)
This policy brief offers policymakers recommendations on the computational and human dimensions of facial recognition technologies, based on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
This searchable tool compiles and categorizes over 4,700 policy recommendations submitted in response to the U.S. government's 2025 Request for Information on artificial intelligence policy.