This cross-sectoral profile of the AI Risk Management Framework focuses on Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance for organizations to govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)
Outlines the bipartisan task force's recommendations to the U.S. House of Representatives for the responsible adoption, governance, and oversight of artificial intelligence technologies across federal agencies.
Bipartisan House Task Force on Artificial Intelligence
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.
This report examines how governments use AI systems to allocate public resources and provides recommendations to ensure these tools promote equity, transparency, and fairness.
This framework provides a structured approach for ensuring responsible and transparent use of AI systems across government, emphasizing governance, data integrity, performance evaluation, and continuous monitoring.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
A case study explaining how a predictive, data-driven machine-learning model was developed to detect unauthorized cash benefit withdrawals more quickly and accurately in California.
The Guide to Robotic Process Automation, including the RPA Playbook, provides detailed guidance for federal agencies starting a new RPA program or evolving an existing one.