The Commonwealth of Pennsylvania's Executive Order 2023-19: Expanding and Governing the Use of Generative Artificial Intelligence Technologies Within the Commonwealth of Pennsylvania
This executive order establishes governance, values, and oversight structures for the ethical and responsible use of generative AI technologies within the Commonwealth of Pennsylvania.
This study evaluates the use of robotic process automation (RPA) technology by three states to automate administration of the Supplemental Nutrition Assistance Program (SNAP), focusing on repetitive tasks previously performed manually.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. Through this examination, the authors explore how predictive optimization can raise concerns that make its use illegitimate and challenge claims about predictive optimization's accuracy, efficiency, and fairness.
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Center for Security and Emerging Technology (CSET)
This report investigates how D.C. government agencies use automated decision-making (ADM) systems and highlights their risks to privacy, fairness, and accountability in public services.
In May 2020, Stanford's Institute for Human-Centered Artificial Intelligence (HAI) hosted a workshop on the performance of facial recognition technologies that brought together leading computer scientists, legal scholars, and representatives from industry, government, and civil society. The white paper produced by this workshop seeks to answer key questions and improve understanding of this rapidly changing space.
This report explores technologies that have the potential to significantly affect employment and job quality in the public sector, the factors that drive choices about which technologies are adopted and how they are implemented, how technology will change the experience of public sector work, and what kinds of interventions can protect against potential downsides of technology use in the public sector. The report categorizes technologies into five overlapping categories: manual task automation, process automation, automated decision-making systems, integrated data systems, and electronic monitoring.
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
A catalogue to help teams design trustworthy services that work for people. Categories include informing decisions, signing into services, giving and removing consent, and doing security checks.