This cross-sectoral profile of the AI Risk Management Framework addresses Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance for organizations to govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)
This framework provides a structured approach for ensuring responsible and transparent use of AI systems across government, emphasizing governance, data integrity, performance evaluation, and continuous monitoring.
This plan promotes responsible AI use in public benefits administration by state, local, tribal, and territorial governments, aiming to enhance program effectiveness and efficiency while meeting recipient needs.
U.S. Department of Health and Human Services (HHS)
This article explores how AI and Rules as Code are turning law into automated systems, and how governance focused on transparency, explainability, and risk management can keep these digital legal frameworks reliable and fair.
The "Public Sector AI Playbook" provides public sector officers with practical guidance on adopting and implementing AI technologies to improve government operations, service delivery, and policymaking.
This strategic plan outlines how the department intends to responsibly leverage artificial intelligence (AI) to enhance health, human services, and public health by promoting innovation, ethical use, and equitable access across sectors while managing associated risks.
U.S. Department of Health and Human Services (HHS)
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
The Maryland Information Technology Master Plan 2025 lays out the state’s strategy to modernize IT, expand digital services, and strengthen infrastructure to better serve residents and government agencies.