This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
This strategy document establishes a governance framework and roadmap to ensure responsible, trustworthy, and effective AI use across Canadian federal institutions.
A recap of a community innovation hackathon in Seattle where technologists and students used AI to prototype solutions that help youth discover and access local programs and services.
This guide, directed at poverty lawyers, explains automated decision-making systems so lawyers and advocates can better identify the source of their clients' problems and advocate on their behalf, covering the key questions practitioners should ask about these systems.
A comprehensive series of workshops and courses designed to equip public sector professionals with the knowledge and skills to responsibly integrate AI technologies into government operations.
This academic article develops a framework for evaluating whether and how automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.
NIST has created a voluntary AI risk management framework, in partnership with the public and private sectors, to promote trustworthy AI development and use.
National Institute of Standards and Technology (NIST)
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.
This paper argues that a human rights framework could help orient research on artificial intelligence away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.