This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
The Center for Democracy and Technology's brief corrects common misconceptions about artificial intelligence (AI) in government services, emphasizing the need for precise definitions, awareness of AI's limitations and inherent biases, and the significant resources required for effective implementation.
NIST has created a voluntary AI risk management framework, developed in collaboration with the public and private sectors, to promote trustworthy AI development and use.
This report offers a critical framework for designing algorithmic impact assessments (AIAs) by drawing lessons from existing impact assessments in areas like environment, privacy, and human rights to ensure accountability and reduce algorithmic harms.
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
This review evaluates the UK public sector's use of digital technology, identifying successes and systemic challenges, and proposes reforms to enhance service delivery.
This guide helps UK public bodies understand how to responsibly procure, develop, and use AI while meeting their legal duties to prevent discrimination and promote equality under the Public Sector Equality Duty (PSED).