This report profiles a dozen fintech and civic tech organizations working across fourteen safety net programs to show what is possible when modern technology is paired with a consumer insights perspective.
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
The strategic plan outlines intentions to responsibly leverage artificial intelligence (AI) to enhance health, human services, and public health by promoting innovation, ethical use, and equitable access across various sectors, while managing associated risks.
U.S. Department of Health and Human Services (HHS)
This brief outlines the U.S. federal government’s framework to identify, reduce, and address administrative burdens through a series of executive orders, legislative actions, and updated policies focused on improving customer experience and increasing access to government benefits.
This report documents four experiments exploring whether AI can be used to expedite the translation of SNAP and Medicaid policies into software code for implementation in public benefits eligibility and enrollment systems under a Rules as Code approach.
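To make the Rules as Code idea concrete, the sketch below shows, in Python, one way a single eligibility rule might be expressed as executable code. It is a minimal illustration only: the function names, thresholds, and poverty-line figures are hypothetical and are not drawn from the report or from actual SNAP or Medicaid policy.

```python
# Hypothetical Rules as Code sketch: one eligibility rule expressed as code.
# All names, thresholds, and dollar figures here are invented for illustration
# and do not reflect the report's experiments or real program policy.

from dataclasses import dataclass


@dataclass
class Household:
    size: int
    gross_monthly_income: float


def passes_gross_income_test(household: Household,
                             poverty_line_by_size: dict[int, float],
                             income_limit_multiplier: float = 1.30) -> bool:
    """Return True if gross income is at or below a multiple of the poverty line.

    Mirrors the general shape of a gross income test; the poverty-line table
    is supplied by the caller, and real eligibility involves many more rules.
    """
    limit = poverty_line_by_size[household.size] * income_limit_multiplier
    return household.gross_monthly_income <= limit


if __name__ == "__main__":
    # Example with made-up poverty-line values keyed by household size.
    table = {1: 1255.0, 2: 1704.0, 3: 2152.0}
    print(passes_gross_income_test(Household(size=2, gross_monthly_income=1500.0), table))
```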
This framework provides voluntary guidance to help employers use AI hiring technology in ways that are inclusive of people with disabilities, while aligning with federal risk management standards.
This workshop summary provides practical guidance for web developers and designers on implementing accessible web design practices, emphasizing the importance of inclusivity and usability for all users.
This one-pager introduces Iowa Child Care Connect (C3), a centralized data system that integrates near-real-time child care data to support families, providers, policymakers, and economic development efforts across the state.
This document provides a cross-sectoral profile of the AI Risk Management Framework for Generative AI (GAI), outlining risks unique to or exacerbated by GAI and offering detailed guidance for organizations to govern, map, measure, and manage those risks responsibly.
National Institute of Standards and Technology (NIST)
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.