This framework provides practical guidance, detailed reference designs, and example solutions to help organizations securely adopt and operationalize Zero Trust principles across diverse IT environments.
National Institute of Standards and Technology (NIST)
The Commonwealth of Virginia's Executive Order Number Five (2023): Recognizing The Risks And Seizing The Opportunities Of Artificial Intelligence, issued to ensure responsible, ethical, and transparent use of artificial intelligence (AI) technology by state government.
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
In early 2023, Wired magazine ran four pieces examining the use of algorithms to identify fraud in public benefits programs and the potential harms, drawing deeply on cases from Europe.
This primer is written for a non-technical audience to increase understanding of the terminology, applications, and difficulties of evaluating facial recognition technologies.
This research explores how software engineers are able to work with generative machine learning models. The results highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for the need for intelligent user interfaces that help software engineers effectively work with generative code models.
In this policy brief and video, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.
This study examines public attitudes toward balancing equity and efficiency in algorithmic resource allocation, using online advertising for SNAP enrollment as a case study.
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
Webinar on Nava’s partnership with the Gates Foundation and the Benefits Data Trust, which seeks to answer whether generative and predictive AI can be used ethically to help reduce administrative burdens for benefits navigators.