Topic: Audits + Accountability
-
Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts
Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. This academic paper explores how to co-construct impacts that closely reflect harms, and emphasizes the need for input from diverse forms of expertise and from affected communities.
-
Red-Teaming in the Public Interest
This report explores how red-teaming practices can be adapted for generative AI in ways that serve the public interest.
-
SNAP Language Access Study
The study investigates how state agencies administering SNAP comply with Title VI of the Civil Rights Act by providing language access for individuals with limited English proficiency (LEP).
-
State of digital government review
This review evaluates the UK public sector's use of digital technology, identifying successes and systemic challenges, and proposes reforms to enhance service delivery.
-
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
This paper introduces a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, intended to be applied throughout an organization's internal development lifecycle.
-
Algorithmic Accountability: Moving Beyond Audits
This report explores how, despite unresolved concerns, an audit-centered approach to algorithmic accountability is being rapidly mainstreamed into voluntary frameworks and regulations.
-
Algorithmic Accountability for the Public Sector
This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
-
AI Data Readiness Checklist
An artificial intelligence (AI) data readiness checklist from the Commonwealth of Virginia.
-
Algorithmic Transparency Recording Standard Hub
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
-
What Makes a Good AI Benchmark?
This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.
-
AI-Ready: An Evaluation Guide for Health and Human Services Agencies
This is a practical tool designed to help government agencies make informed, responsible decisions about adopting AI technologies.
-
Who Audits the Auditors? Recommendations from a Field Scan of the Algorithmic Auditing Ecosystem
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.