The Sprint 2 Report: Michigan UI Claimant Experience by Civilla and New America examines challenges in Michigan’s unemployment insurance (UI) system and provides human-centered design recommendations to improve accessibility, clarity, and user experience.
To help policymakers, regulators, legislators, and others characterize AI systems deployed in specific contexts, the OECD has developed a user-friendly tool to evaluate AI systems from a policy perspective.
Organisation for Economic Co-operation and Development (OECD)
The experience of the COVID-19 pandemic and the recession it induced underscored the crucial importance of unemployment insurance (UI) to workers and to the stability of the American economy. Temporary federal expansions of unemployment systems during the pandemic showed how quickly they can be scaled to increase benefit levels and to cover categories of workers who were not previously eligible, such as the self-employed, caregivers, and low-wage workers. States also showed that separate programs can be set up to provide similar benefits to workers who are explicitly excluded from unemployment insurance, in particular immigrants who do not have a documented immigration status.
Artificial intelligence promises exciting new opportunities for government to make policy, deliver services, and engage with residents. But government procurement practices need to adapt if we are to ensure that rapidly evolving AI tools meet intended purposes, avoid bias, and minimize risks to people, organizations, and communities. This report lays out five distinct challenges related to procuring AI in government.
This playbook is designed to help government and other key sectors use data sharing to illuminate who is not accessing benefits, connect under-enrolled populations to vital assistance, and make the benefits system more efficient for agencies and participants alike.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
Center for Security and Emerging Technology (CSET)