This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
Recent studies demonstrate that machine learning algorithms can discriminate on the basis of protected classes such as race and gender. This academic study presents an approach for evaluating bias in automated facial analysis algorithms and datasets.
This analysis examines the surge in U.S. state-level AI legislation in 2023, highlighting enacted laws, proposed bills, and emerging regulatory trends.
To help policymakers, regulators, legislators, and others characterize AI systems deployed in specific contexts, the OECD has developed a user-friendly tool for evaluating AI systems from a policy perspective.
Organisation for Economic Co-operation and Development (OECD)
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
This paper introduces a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, intended to be applied throughout an organization's internal development lifecycle.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
The State of California government published guidelines for the safe and effective use of Generative Artificial Intelligence (GenAI) within state agencies, in accordance with Governor Newsom's Executive Order N-12-23 on Generative Artificial Intelligence.
Little is known about how agencies are currently using AI systems, and little attention has been devoted to how agencies acquire such tools or oversee their use.
NIST has created a voluntary AI risk management framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This report explores how, despite unresolved concerns, an audit-centered approach to algorithmic accountability is being rapidly mainstreamed into voluntary frameworks and regulations.