These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace and in valuing workers as the essential resources they are.
This strategy document establishes a governance framework and roadmap to ensure responsible, trustworthy, and effective AI use across Canadian federal institutions.
Issued on March 28, 2024, this memorandum establishes new agency requirements and guidance for AI governance, innovation, and risk management, including specific minimum risk management practices for uses of AI that impact the rights and safety of the public.
This framework provides voluntary guidance to help employers use AI hiring technology in ways that are inclusive of people with disabilities, while aligning with federal risk management standards.
This is a searchable tool that compiles and categorizes over 4,700 policy recommendations submitted in response to the U.S. government's 2025 Request for Information on artificial intelligence policy.
This report reviews global AI governance tools, highlighting their importance in ensuring trustworthy AI, while identifying gaps and risks in their effectiveness, and offering recommendations to improve their development, oversight, and integration into policy frameworks.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
This action plan outlines Oregon’s strategic approach to adopting AI in state government, emphasizing ethical use, privacy, transparency, and workforce readiness.
The Maryland Information Technology Master Plan 2025 lays out the state’s strategy to modernize IT, expand digital services, and strengthen infrastructure to better serve residents and government agencies.