Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This paper argues that a human rights framework could help orient the research on artificial intelligence away from machines and the risks of their biases, and towards humans and the risks to their rights, helping to center the conversation around who is harmed, what harms they face, and how those harms may be mitigated.
This study examines how providing information about administrative burden influences public support for government programs like TANF, showing that awareness of these burdens can increase favorability toward the programs and their recipients.
This study assesses five commercial RIdV solutions for equity across demographic groups and finds that two are equitable, while two have inequitable performance for certain demographics.
This study explores the causal impacts of income on a rich array of employment outcomes, leveraging an experiment in which 1,000 low-income individuals were randomized into receiving $1,000 per month unconditionally for three years, with a control group of 2,000 participants receiving $50/month.
This national survey of low-wage workers shows that administrative burdens in SNAP and Medicaid are common and strongly linked to food hardship, healthcare hardship, and chronic illness.
This paper examines three key questions in participatory HCI: who initiates, directs, and benefits from user participation; in what forms it occurs; and how control is shared with users, while addressing conceptual, ethical, and pragmatic challenges, and suggesting future research directions.
This study found that using state-specific names for Medicaid programs increased confusion and reduced both positive and negative opinions about the program.
This article examines how the decentralization of safety net programs after welfare reform has led to growing inequality in benefit generosity and access across U.S. states.
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. Through this examination, the authors explore how predictive optimization can raise concerns that render its use illegitimate, and they challenge claims about predictive optimization's accuracy, efficiency, and fairness.
Automated decision systems (ADS) are increasingly used in government decision-making but lack clear definitions, oversight, and accountability mechanisms.