-
The Wait List as Redistributive Policy: Access and Burdens in the Subsidized Childcare System
This article theorizes the wait list as an underexamined vehicle of administrative burden. Focusing on the example of subsidized child care, the article's findings suggest that wait lists are understudied but consequential sites of opaque policymaking that shape access to critical social services and the legibility of unmet need.
-
“I Used to Get WIC . . . But Then I Stopped”: How WIC Participants Perceive the Value and Burdens of Maintaining Benefits
This study examines how individuals assess administrative burdens, and how these assessments change over time, within the context of the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC).
-
Automation + AI: A Human Rights-Based Approach to Responsible AI
This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
-
Automation + AI: Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
-
Automation + AI: Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. This academic study presents an approach to evaluate bias present in automated facial analysis algorithms and datasets.
-
Automation + AI: The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in U.S. Government
An emerging concern in algorithmic fairness is the tension with privacy interests. Data minimization can restrict access to protected attributes, such as race and ethnicity, for bias assessment and mitigation. This paper examines how this “privacy-bias tradeoff” has become an important battleground for fairness assessments in the U.S. government and provides rich lessons for resolving these tradeoffs.
-
Automation + AI: Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from various types of expertise and from affected communities.
-
Automation + AI: Surveillance, Discretion and Governance in Automated Welfare
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
-
Automation + AI: Popular Support for Balancing Equity and Efficiency in Resource Allocation
This article explores how online advertising algorithms produce disparities between Spanish- and English-speaking audiences in outreach for SNAP in California, and how the public weighs equity against efficiency in allocating outreach resources.
-
Automation + AI: Defining and Demystifying Automated Decision Systems
This article suggests that a lack of clear, shared definitions makes it harder for the public and policymakers to evaluate and regulate technical systems that may have significant impacts on communities and individuals by shaping access to benefits, opportunities, and liberty. It presents and evaluates a definition for automated decision systems, developed through workshops with interdisciplinary scholars and practitioners.
-
Automation + AI: Against Predictive Optimization: On the Legitimacy of Decision-Making Algorithms that Optimize Predictive Accuracy
This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. Through this examination, the authors identify concerns that can make its use illegitimate and challenge claims about its accuracy, efficiency, and fairness.
-
Diversity, Equity + Inclusion: Re-Envisioning Medicaid & CHIP as Anti-Racist Programs
This report puts forth an anti-racist reimagining of Medicaid and CHIP that actively reckons with the racist history of the Medicaid program and offers principles and recommendations that capitalize on the transformative potential of the programs. The principles center the voices and agency of program participants and prioritize direct community involvement at all stages of the policy process.