The strategic plan outlines intentions to responsibly leverage artificial intelligence (AI) to enhance health, human services, and public health by promoting innovation, ethical use, and equitable access across various sectors, while managing associated risks.
U.S. Department of Health and Human Services (HHS)
A guide from the General Services Administration to help government decision-makers understand what AI means for their agencies and how to invest in and build AI capabilities.
This primer is written for a non-technical audience to increase understanding of the terminology, applications, and challenges involved in evaluating facial recognition technologies.
Recent studies demonstrate that machine learning algorithms can discriminate based on protected classes such as race and gender. This academic study presents an approach for evaluating bias in automated facial analysis algorithms and datasets.
This research identifies five key obstacles that researchers, activists, and advocates face in efforts to open critical public conversations about AI's relationship with inequity and to advance needed policies.
To help policymakers, regulators, legislators, and others characterize AI systems deployed in specific contexts, the OECD has developed a user-friendly tool for evaluating AI systems from a policy perspective.
Organisation for Economic Co-operation and Development (OECD)
The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.
Concerns over risks from generative artificial intelligence systems have grown significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
Center for Security and Emerging Technology (CSET)
This academic article develops a framework for evaluating whether, and how, automated decision-making welfare systems introduce new harms and burdens for claimants, focusing on an example case from Germany.