This case study details the development of a document extraction prototype to streamline benefits application processing through automated data capture and classification.
This resource page provides comprehensive information on the state's initiatives, policies, training, and governance related to the adoption and implementation of generative AI technologies in government operations.
This academic paper examines how federal privacy laws restrict data collection needed for assessing racial disparities, creating a tradeoff between protecting individual privacy and enabling algorithmic fairness in government programs.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This report analyzes lawsuits filed within the past 10 years arising from the use of algorithm-driven systems to assess people's eligibility for, or the distribution of, public benefits. It draws key insights from these cases about what went wrong and analyzes the legal arguments plaintiffs have used to challenge those systems in court.
This study examines public attitudes toward balancing equity and efficiency in algorithmic resource allocation, using online advertising for SNAP enrollment as a case study.
This paper introduces the problem of semi-automatically building decision models from eligibility policies for social services, and presents an early-stage approach that uses AI, NLP, and Knowledge Graphs to shorten the route from policy documents to executable, interpretable, and standardised decision models. The authors argue that AI has enormous potential to help government agencies and policy experts scale the production of both human-readable and machine-executable policy rules, while improving the transparency, interpretability, traceability, and accountability of decision making.
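The paper itself does not prescribe a specific rule format; as a purely illustrative sketch, a single income-test clause extracted from a policy document might be encoded as an executable, traceable rule along the lines below. All thresholds and field names are hypothetical.

```python
# Purely illustrative sketch (not from the paper): one way a policy clause
# might be represented as an executable, interpretable decision rule.
from dataclasses import dataclass

@dataclass
class Applicant:
    household_size: int
    monthly_income: float

# Hypothetical income limits keyed by household size, as might be extracted
# from a policy document into a standardised, machine-readable form.
INCOME_LIMITS = {1: 1580.0, 2: 2137.0, 3: 2694.0}

def is_income_eligible(applicant: Applicant) -> tuple[bool, str]:
    """Return the decision and a human-readable trace of the rule applied."""
    limit = INCOME_LIMITS.get(applicant.household_size)
    if limit is None:
        return False, "No income limit defined for this household size."
    eligible = applicant.monthly_income <= limit
    trace = (f"monthly_income {applicant.monthly_income:.2f} "
             f"{'<=' if eligible else '>'} limit {limit:.2f} "
             f"for household_size {applicant.household_size}")
    return eligible, trace

print(is_income_eligible(Applicant(household_size=2, monthly_income=1900.0)))
```

The trace string stands in for the traceability the paper emphasizes: each decision can be tied back to the specific rule and values that produced it.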
The White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.
This post argues that for the types of large-scale, organized fraud attacks that many state benefits systems saw during the pandemic, solutions grounded in cybersecurity methods may be far more effective than creating or adopting automated systems.
The Commonwealth of Virginia's Executive Order Number Five (2023), Recognizing the Risks and Seizing the Opportunities of Artificial Intelligence, aims to ensure the responsible, ethical, and transparent use of artificial intelligence (AI) technology by state government.