This report explores technologies that have the potential to significantly affect employment and job quality in the public sector, the factors that drive choices about which technologies are adopted and how they are implemented, how technology will change the experience of public sector work, and what kinds of interventions can protect against potential downsides of technology use in the public sector. The report sorts these technologies into five overlapping categories: manual task automation, process automation, automated decision-making systems, integrated data systems, and electronic monitoring.
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
On December 5, 2022, an expert panel, including representatives from the White House, unpacked what’s included in the AI Bill of Rights, and explored how to operationalize such guidance among consumers, developers, and other users designing and implementing automated decisions.
The primer, originally prepared for the Congressional Progressive Caucus' Tech Algorithm Briefing, explores the trade-offs and debates about algorithms and accountability across several key ethical dimensions, including fairness and bias; opacity and transparency; and the lack of standards for auditing.
In accordance with Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, Federal agencies began publishing their first annual inventories of artificial intelligence (AI) use cases in June 2022.
This paper argues that a human rights framework could help orient the research on artificial intelligence away from machines and the risks of their biases, and towards humans and the risks to their rights, helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.
This video shows you how to get started using generative AI tools, including Bard, Bing, and ChatGPT, in your work as a public sector professional.
The OECD AI Principles promote the use of AI that is innovative and trustworthy and that respects human rights and democratic values. The principles were adopted in 2019; this webpage provides an overview of the principles and key terms.
Organisation for Economic Co-operation and Development (OECD)
Companies have been developing and using artificial intelligence (AI) for decades, but we've seen exponential growth since OpenAI released ChatGPT, a tool built on a large language model (LLM), in 2022. Open-source versions of these tools can help agencies optimize their processes and surpass current levels of data analysis, all in a secure environment that won't risk exposing sensitive information.
This research explores how software engineers work with generative machine learning models. The results highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for intelligent user interfaces that help software engineers work effectively with generative code models.
This study evaluates the use of robotic process automation (RPA) technology by three states to automate SNAP administration, focusing on repetitive tasks previously performed manually.