Library
Discover the latest innovations, learn about promising practices, and find out what’s coming next with best-in-class resources from trusted sources.

- AI in government: fundamentals training
Guidance outlining how Australian government agencies can train staff on artificial intelligence, covering key concepts, responsible use, and alignment with national AI ethics and policy frameworks.
- SCAM GPT: GenAI and the Automation of Fraud
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.
- Less Discriminatory Algorithms
The article discusses the phenomenon of model multiplicity in machine learning, arguing that developers should be legally obligated to search for less discriminatory algorithms (LDAs) to reduce disparities in algorithmic decision-making.
- A Human Rights-Based Approach to Responsible AI
This paper argues that a human rights framework could reorient artificial intelligence research away from machines and the risks of their biases and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
- Governing Digital Legal Systems: Insights on Artificial Intelligence and Rules as Code
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
- AI Strategy for the Federal Public Service 2025-2027
This strategy document establishes a governance framework and roadmap to ensure responsible, trustworthy, and effective AI use across Canadian federal institutions.
- Guidance for Inclusive AI
This resource helps individuals align their work with the needs of the communities they wish to serve while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.
- What’s in a name? A survey of strong regulatory definitions of automated decision-making systems
The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.
- Looking before we leap: Exploring AI and data science ethics review process
This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues and investigates the common challenges these bodies face.
- AI Toolkit
Guidance and resources for policymakers, teachers, and parents to advance AI readiness in Ohio schools.
- NIST AI Risk Management Framework (RMF 1.0)
NIST has created a voluntary AI risk management framework, in partnership with the public and private sectors, to promote trustworthy AI development and use.
- Blueprint for an AI Bill of Rights
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.