This framework provides practical guidance, detailed reference designs, and example solutions to help organizations securely adopt and operationalize Zero Trust principles across diverse IT environments.
National Institute of Standards and Technology (NIST)
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is rapidly being mainstreamed into voluntary frameworks and regulations.
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Recording of a GOVChats session hosted by GTA's Digital Services Georgia, in which speakers dive into the artificial intelligence (AI) programs and initiatives unfolding across the states of Georgia, Maryland, and Vermont.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
The team developed an AI-powered explanation feature that translates complex, multi-program policy calculations into clear, accessible explanations, enabling users to explore "what-if" scenarios and understand the key factors influencing benefit amounts and eligibility thresholds.
The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.
This study examines the adoption and implementation of AI chatbots in U.S. state governments, identifying key drivers, challenges, and best practices for public sector chatbot deployment.