The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs.
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
The Digital Benefit Network's Digital Identity Community of Practice held a session to hear considerations from civil rights technologists and human-centered design practitioners on ways to ensure program security while simultaneously promoting equity, enabling accessibility, and minimizing bias.
The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.
This strategy document establishes a governance framework and roadmap to ensure responsible, trustworthy, and effective AI use across Canadian federal institutions.
This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
The National Institute of Standards and Technology (NIST) has created a voluntary AI risk management framework, developed in partnership with the public and private sectors, to promote trustworthy AI development and use.
This policy brief explores how federal privacy laws like the Privacy Act of 1974 limit demographic data collection, undermining government efforts to conduct equity assessments and address algorithmic bias.
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people's eligibility for, or the distribution of, public benefits. It identifies what went wrong in these cases and analyzes the legal arguments that plaintiffs have used to challenge those systems in court.
This UN report warns against the risks of digital welfare systems, emphasizing their potential to undermine human rights through increased surveillance, automation, and privatization of public services.