This framework provides practical guidance, detailed reference designs, and example solutions to help organizations securely adopt and operationalize Zero Trust principles across diverse IT environments.
National Institute of Standards and Technology (NIST)
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
This report presents evidence on the use of algorithmic accountability policies in different contexts from the perspective of those implementing these tools, and explores the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
The study investigates how state agencies administering SNAP comply with Title VI of the Civil Rights Act by providing language access for individuals with limited English proficiency (LEP).
This report provides an overview of the task force’s work in assessing, guiding, and recommending policies for the safe, ethical, and effective use of generative AI across Alabama’s executive-branch agencies.
State of Alabama Generative Artificial Intelligence (GenAI) Task Force
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)