Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This paper argues that a human rights framework could orient artificial intelligence research away from machines and the risks of their biases and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
The report examines how current remote identity proofing methods can create barriers to Medicaid enrollment and suggests improvements to ensure equitable access for all applicants.
Annual Computers, Software, and Applications Conference (COMPSAC)
This study examines how providing information about administrative burden influences public support for government programs like TANF, showing that awareness of these burdens can increase favorability toward the programs and their recipients.
A case study describing how Massachusetts is building long-term public-sector capacity to deliver people-centered digital services by strengthening in-house expertise, shared tools, and agency-embedded support.
The Better Government Lab at the McCourt School of Public Policy at Georgetown University has developed a new scale for measuring the experience of burden when accessing public benefits. It offers both a three-item scale and a single-item scale, either of which can be used for any public benefit program; the shorter scales reduce measurement burden by requiring less information from respondents.
This study investigates how administrative burdens influence differential receipt of income transfers after a family member loses a job, looking at Unemployment Insurance, Temporary Assistance for Needy Families, and the Supplemental Nutrition Assistance Program.
This research explores how software engineers work with generative machine learning models. The results highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for intelligent user interfaces that help software engineers work effectively with generative code models.
Recent studies demonstrate that machine learning algorithms can discriminate based on protected classes such as race and gender. This academic study presents an approach to evaluating bias in automated facial analysis algorithms and the datasets used to train them.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors explore how predictive optimization can raise concerns that make its use illegitimate, and they challenge claims about its accuracy, efficiency, and fairness.
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.