Errors in administrative processes are costly and burdensome for clients but are understudied. Using U.S. Unemployment Insurance data, this study finds that while automation improves accuracy in simpler programs, it can increase errors in more complex ones.
These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace and in valuing workers as the essential resources they are.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
A recording of GOVChats, hosted by GTA's Digital Services Georgia, in which speakers dive into the artificial intelligence (AI) programs and initiatives unfolding across the states of Georgia, Maryland, and Vermont.
The adoption of generative AI tools by government agencies seems inevitable at this point, but there is more than one possible future for how agencies use generative AI to simplify complex government information.
A webinar on Nava's partnership with the Gates Foundation and Benefits Data Trust, which seeks to answer whether generative and predictive AI can be used ethically to help reduce administrative burdens for benefits navigators.
In 2023, OECD member countries approved a revised version of the Organisation's definition of an AI system. This post from the Organisation for Economic Co-operation and Development (OECD) explains the reasoning behind the updated definition.
A report from the State of California presenting an initial analysis of where generative AI (GenAI) may improve access to essential goods and services.
A guide from the General Services Administration to help government decision makers clearly see what AI means for their agencies and how to invest in and build AI capabilities.
This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.