This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
This framework provides practical guidance, detailed reference designs, and example solutions to help organizations securely adopt and operationalize Zero Trust principles across diverse IT environments.
National Institute of Standards and Technology (NIST)
BenCon 2024 explored state and federal AI governance, highlighting the rapid increase in AI-related legislation and executive orders. Panelists emphasized the importance of experimentation, learning, and collaboration between government levels, teams, agencies, and external partners.
A customizable policy template that establishes governance, roles, principles, and risk-management processes for the responsible use of artificial intelligence within a government agency.
These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace, and in valuing workers as the essential resources they are.
This panel discussion from the Academy's 2025 Policy Summit explores the intersection of artificial intelligence (AI) and public benefits, examining how technological advancements are influencing policy decisions and the delivery of social services.
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
This cross-sectoral profile of the AI Risk Management Framework focuses on Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance to help organizations govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)