On December 5, 2022, an expert panel, including representatives from the White House, unpacked what is included in the AI Bill of Rights and explored how to operationalize this guidance for consumers, developers, and others designing and implementing automated decision systems.
This research examines how software engineers work with generative machine learning models. The results highlight the benefits of generative code models and the challenges software engineers face when working with their outputs. The authors also argue for intelligent user interfaces that help software engineers work effectively with generative code models.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why controlling model outputs is so challenging.
Center for Security and Emerging Technology (CSET)
This policy brief offers policymakers recommendations on the computational and human sides of facial recognition technologies, based on a May 2020 workshop with leading computer scientists, legal scholars, and representatives from industry, government, and civil society.
Webinar describing Nava’s partnership with the Gates Foundation and the Benefits Data Trust, which seeks to answer whether generative and predictive AI can be used ethically to help reduce administrative burdens for benefits navigators.
These principles and best practices guide AI developers and employers in centering the well-being of workers in the development and deployment of AI in the workplace and in valuing workers as the essential resources they are.