The AI RMF Playbook offers organizations detailed, voluntary guidance for implementing the NIST AI Risk Management Framework and its core functions: govern, map, measure, and manage AI risks.
National Institute of Standards and Technology (NIST)
This Urban Institute article argues that poverty is driven by structural barriers rather than individual choices and advocates for safety net programs that address systemic inequities.
This publication summarizes a body of research on how state benefits-administering agencies build and maintain integrated eligibility and enrollment (IEE) systems. It is an easy-to-reference guide for state administrators, legislators, advocates, and delivery partners.
This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. This academic study presents an approach for evaluating bias in automated facial analysis algorithms and datasets.
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles apply whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
Digital IDs can improve convenience, but they carry risks of surveillance, data misuse, and exclusion if they are not designed with privacy, security, and accessibility safeguards.