This report presents evidence on the use of algorithmic accountability policies across different contexts, drawing on the perspectives of those who implement these tools, and examines the limits of legal and policy mechanisms in ensuring safe and accountable algorithmic systems.
In 2023, OECD member countries approved a revised version of the Organisation’s definition of an AI system. This post explains the reasoning behind the updated definition.
Organisation for Economic Co-operation and Development (OECD)
The U.S. Department of Homeland Security (DHS) Artificial Intelligence (AI) Roadmap outlines the agency's AI initiatives and AI's potential across the homeland security enterprise.
The State of Indiana developed a policy framework for the ethical and efficient use of artificial intelligence (AI) within state agencies. The policy adopts the National Institute of Standards and Technology's AI Risk Management Framework to manage potential risks effectively. It also describes how actions taken by the Office of the Chief Data Officer (OCDO) support the deployment of trustworthy AI systems.
The State of Connecticut's policy on the Responsible Use of Artificial Intelligence (AI) establishes a comprehensive framework for the ethical use of AI within Connecticut state government.