Louisiana issued an RFI to identify solutions that can provide a technology platform for determining eligibility and managing cases across multiple human services programs.
This article explores how legal documents can be treated like software programs, using methods like software testing and mutation analysis to enhance AI-driven statutory analysis, aiding legal decision-making and error detection.
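To make the analogy concrete, here is a minimal hedged sketch (not from the article; the rule, threshold, and test case are invented) of how mutation analysis could apply to a statutory provision encoded as code:

```python
# Hypothetical illustration: a statutory income test encoded as a function,
# then "mutated" to check whether test cases catch the change.
# The $500-per-member threshold is invented for illustration only.

def eligible(income: int, household_size: int) -> bool:
    """Encoded rule: income must be at or below $500 per household member."""
    return income <= 500 * household_size

def mutant(income: int, household_size: int) -> bool:
    """Mutant: '<=' replaced by '<', a classic off-by-one boundary mutation."""
    return income < 500 * household_size

# A boundary test case distinguishes the original rule from the mutant.
# If no test case "kills" the mutant, the test suite would also miss an
# equivalent drafting or transcription error in the statute's encoding.
boundary_case = (1000, 2)  # income exactly at the limit
killed = eligible(*boundary_case) != mutant(*boundary_case)
```

In this framing, a mutant that survives all test cases flags a gap in the scenarios used to validate the encoded statute, which is exactly the kind of error detection the article discusses.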
A recap of the two-day conference focused on charting the course to excellence in digital benefits delivery hosted at Georgetown University and online.
The Atlanta Fed’s CLIFF tools provide greater transparency to workers about potential public assistance losses when their earnings increase. We find three broad themes in organization-level implementation of the CLIFF tools: identifying the target population of users; integrating the tool into existing operations; and integrating the tool into coaching sessions.
The team introduced an AI assistant for benefits navigators to streamline the process and improve outcomes by quickly assessing client eligibility for benefits programs.
This report highlights key findings from the Rules as Code Community of Practice, including practitioners' challenges with complex policies, their desire to share knowledge and resources, the need for increased training and support, and a collective interest in developing open standards and a shared code library.
This article explores how AI and Rules as Code are turning law into automated systems, including how governance focused on transparency, explainability, and risk management can ensure these digital legal frameworks stay reliable and fair.
The team explored the performance of various AI chatbots and LLMs in supporting the adoption of Rules as Code for SNAP and Medicaid policies using policy data from Georgia and Oklahoma.
The team aimed to automate rule application efficiently by creating computable policies, recognizing the need for AI tools that convert legacy policy content into automated business rules expressed in Decision Model and Notation (DMN) for effective processing and monitoring.
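As a rough sketch of what a computable policy looks like (the categories, thresholds, and rows here are invented for illustration, not the team's actual DMN tables), a policy can be rendered as a DMN-style decision table: ordered rows pairing input conditions with an output, evaluated under a first-match hit policy.

```python
# Hypothetical sketch of a policy as a DMN-style decision table:
# each row pairs an input condition with an outcome, evaluated first-match.
# All values are invented for illustration.

DECISION_TABLE = [
    # (condition on applicant record, resulting determination)
    (lambda a: a["income"] <= 1000 and a["has_dependents"], "eligible"),
    (lambda a: a["income"] <= 800, "eligible"),
    (lambda a: True, "ineligible"),  # default catch-all rule
]

def decide(applicant: dict) -> str:
    """Return the first matching row's output (DMN 'FIRST' hit policy)."""
    for condition, outcome in DECISION_TABLE:
        if condition(applicant):
            return outcome
    raise ValueError("no matching rule")
```

Because each row is explicit data rather than buried conditionals, the table can be reviewed against the source policy text and monitored for drift as the policy changes.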
The team conducted experiments to determine whether clients would be responsive to proactive support offered by a chatbot, and to identify the ideal timing of the intervention.