These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace and in valuing workers as the essential resources they are.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
This resource helps individuals align their work with the needs of the communities they wish to serve, while reducing the likelihood of harms and risks those communities may face from the development and deployment of AI technologies.
BenCon 2024 explored state and federal AI governance, highlighting the rapid increase in AI-related legislation and executive orders. Panelists emphasized the importance of experimentation, learning, and collaboration between government levels, teams, agencies, and external partners.
This panel discussion from the Academy's 2025 Policy Summit explores the intersection of artificial intelligence (AI) and public benefits, examining how technological advancements are influencing policy decisions and the delivery of social services.
This document provides a cross-sectoral profile of the AI Risk Management Framework specifically for Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance for organizations to govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)
A memorandum, issued on September 24, 2024, to the heads of executive departments and agencies on advancing the responsible acquisition of artificial intelligence in government.
On December 5, 2022, an expert panel, including representatives from the White House, unpacked what is included in the AI Bill of Rights and explored how to operationalize such guidance among consumers, developers, and other users designing and implementing automated decision systems.