This academic paper examines predictive optimization, a category of decision-making algorithms that use machine learning (ML) to predict future outcomes of interest about individuals. The authors explore how predictive optimization can raise concerns that make its use illegitimate, and they challenge claims about its accuracy, efficiency, and fairness.
This paper examines three key questions in participatory HCI: who initiates, directs, and benefits from user participation; in what forms participation occurs; and how control is shared with users. It also addresses conceptual, ethical, and pragmatic challenges and suggests directions for future research.
This report documents four experiments exploring whether AI can expedite the translation of SNAP and Medicaid policies into software code for implementation in public benefits eligibility and enrollment systems under a Rules as Code approach.
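For readers unfamiliar with the Rules as Code approach, the sketch below illustrates the general idea of expressing an eligibility rule as executable code. It is not drawn from the report; the function name, income thresholds, and household-size adjustment are hypothetical placeholders, not actual SNAP or Medicaid policy values.

```python
# Minimal illustrative sketch of a "Rules as Code" style eligibility rule.
# Thresholds and adjustments are made up for illustration only.
from dataclasses import dataclass


@dataclass
class Household:
    size: int
    monthly_income: float  # gross monthly income in dollars


def is_income_eligible(household: Household,
                       base_limit: float = 1_500.0,
                       per_member_increment: float = 500.0) -> bool:
    """Return True if gross income falls under a size-adjusted limit."""
    limit = base_limit + per_member_increment * (household.size - 1)
    return household.monthly_income <= limit


# Example: a three-person household earning $2,400/month
print(is_income_eligible(Household(size=3, monthly_income=2_400.0)))  # True
```

Encoding a rule this way makes it testable and auditable, which is the property the report's experiments probe when asking whether AI can help produce such code from policy text.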
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
This paper analyzes the unique challenges of conducting participatory design in large-scale public projects, focusing on stakeholder management, fostering engagement, and integrating participatory methods into institutional transformation.
A webinar presenting fresh data on how young adults aged 22 are faring in terms of poverty, employment, education, living arrangements, and access to public benefits.
The report examines how current remote identity proofing methods can create barriers to Medicaid enrollment and suggests improvements to ensure equitable access for all applicants.
Annual Computers, Software, and Applications Conference (COMPSAC)
This article explores how legal documents can be treated like software programs, applying methods such as software testing and mutation analysis to enhance AI-driven statutory analysis and to aid legal decision-making and error detection.
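As a rough illustration of how mutation analysis might apply to a coded legal rule (not drawn from the article; the rule, mutant, and tests below are hypothetical), a small change to a coded statute should be "killed" by test cases that encode the statute's intended outcomes.

```python
# Illustrative sketch of mutation analysis on a hypothetical coded rule.

def rule_original(age: int) -> bool:
    """Hypothetical statutory rule: eligible if age is 18 or older."""
    return age >= 18


def rule_mutant_boundary(age: int) -> bool:
    """Mutant: '>=' replaced with '>' (off-by-one at the boundary)."""
    return age > 18


def test_suite(rule) -> bool:
    """A tiny test suite encoding the rule's expected outcomes."""
    cases = {17: False, 18: True, 19: True}
    return all(rule(age) == expected for age, expected in cases.items())


# A good test suite "kills" the mutant by detecting the changed behavior.
print(test_suite(rule_original))         # True  -- original passes
print(test_suite(rule_mutant_boundary))  # False -- mutant caught at age 18
```

Mutants that survive the test suite point to aspects of the statute's intended behavior that the tests, and by extension the analysis, fail to capture.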
The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.