Automated decision systems (ADS) are increasingly used in government decision-making but lack clear definitions, oversight, and accountability mechanisms.
This reporting explores how algorithms used to screen prospective tenants, including those waiting for public housing, can block renters from housing based on faulty information.
This course is designed to help public professionals accelerate the process of finding and implementing urgently needed, evidence-based solutions to public problems.
My File NYC and improvements to New Jersey's unemployment insurance system demonstrate how successful digital innovations can be scaled across programs by leveraging trust-building, open-source technology, and strategic partnerships.
This report outlines recommendations from the U.S. House of Representatives for the responsible adoption, governance, and oversight of artificial intelligence technologies across state agencies.
Bipartisan House Task Force on Artificial Intelligence
The team explored using LLMs to translate the Program Operations Manual System (POMS) into plain-language logic models and flowcharts that serve as educational resources on SSI and SSDI eligibility, and benchmarked LLMs in retrieval-augmented generation (RAG) pipelines for reliability in answering queries and providing useful instructions to users.
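As a rough illustration of the kind of RAG pipeline such a benchmark exercises (not the team's actual code), the sketch below pairs a toy keyword-overlap retriever with a placeholder `generate()` step; the passages, function names, and scoring are all assumptions for illustration, and a real system would use embedding-based retrieval and an LLM call.

```python
# Minimal RAG sketch: retrieve relevant policy passages, then build a grounded
# prompt for a language model. Passages below are invented placeholders, not
# actual POMS text.

from collections import Counter

POLICY_PASSAGES = [
    "Placeholder passage describing how countable income affects SSI eligibility.",
    "Placeholder passage describing work-credit requirements for SSDI eligibility.",
]

def overlap_score(query: str, passage: str) -> int:
    """Count words shared between the query and a passage (toy relevance score)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, passages: list[str], k: int = 1) -> list[str]:
    """Return the k passages with the highest overlap score."""
    return sorted(passages, key=lambda p: overlap_score(query, p), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Stand-in for an LLM call: return the grounded prompt a model would receive."""
    return ("Answer using only the policy excerpts below.\n\n"
            + "\n".join(context)
            + f"\n\nQuestion: {query}")

if __name__ == "__main__":
    question = "How does income affect SSI eligibility?"
    print(generate(question, retrieve(question, POLICY_PASSAGES)))
```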
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
Center for Security and Emerging Technology (CSET)
This guidebook offers an introduction to the risks of discrimination when using automated decision-making systems and includes helpful definitions related to automation.
The article discusses key takeaways from BenCon 2023, highlighting the importance of building equitable and ethical public benefits technology. It emphasizes the need for tech solutions that address systemic inequalities, ensure accessibility, and make public services more inclusive for underserved communities.
This paper introduces the problem of semi-automatically building decision models from eligibility policies for social services and presents an initial, emerging approach to shortening the route from policy documents to executable, interpretable, and standardised decision models using AI, NLP, and Knowledge Graphs. AI has enormous potential to assist government agencies and policy experts in scaling the production of both human-readable and machine-executable policy rules, while improving the transparency, interpretability, traceability, and accountability of decision making.
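To make the idea of an executable, interpretable, and traceable decision model concrete, here is a hand-written sketch (not the paper's method or output) of what such a model might look like once derived from policy text; the rule names, thresholds, and clause citations are invented for illustration.

```python
# Illustrative sketch of an executable eligibility decision model with
# traceability back to the policy clauses each rule came from.
# All rules and values below are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    source_clause: str                      # citation into the policy document (traceability)
    predicate: Callable[[dict], bool]       # executable condition over an applicant record

@dataclass
class Decision:
    eligible: bool
    trace: list                             # which rules passed or failed (interpretability)

def evaluate(applicant: dict, rules: list) -> Decision:
    """Apply every rule; eligibility requires all predicates to hold."""
    trace, eligible = [], True
    for rule in rules:
        passed = rule.predicate(applicant)
        trace.append(f"{rule.name} ({rule.source_clause}): {'pass' if passed else 'fail'}")
        eligible = eligible and passed
    return Decision(eligible, trace)

# Hypothetical rules standing in for ones extracted from a policy document.
RULES = [
    Rule("income_limit", "Policy §3(a)", lambda a: a["monthly_income"] <= 1500),
    Rule("residency",    "Policy §2(b)", lambda a: a["state"] == "NY"),
]

if __name__ == "__main__":
    result = evaluate({"monthly_income": 1200, "state": "NY"}, RULES)
    print("Eligible:", result.eligible)
    for line in result.trace:
        print(" ", line)
```

Keeping the clause citation alongside each executable predicate is one simple way to preserve the traceability and accountability properties the paper emphasizes.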