This post argues that for the types of large-scale, organized fraud attacks that many state benefits systems saw during the pandemic, solutions grounded in cybersecurity methods may be far more effective than creating or adopting automated systems.
Government agencies adopting generative AI tools seems inevitable at this point. But there is more than one possible future for how agencies use generative AI to simplify complex government information.
The Commonwealth of Virginia's Executive Order Number Five (2023), Recognizing the Risks and Seizing the Opportunities of Artificial Intelligence, aims to ensure responsible, ethical, and transparent use of artificial intelligence (AI) technology by state government.
The U.S. Department of Homeland Security (DHS) Artificial Intelligence (AI) Roadmap outlines the agency's AI initiatives and AI's potential across the homeland security enterprise.
The state of Indiana developed a policy framework for the ethical and efficient use of artificial intelligence (AI) within state agencies. The policy adopts the National Institute of Standards and Technology’s AI Risk Management Framework to manage potential risks effectively. It also details the applicability of the actions undertaken by the Office of the Chief Data Officer (OCDO) to enable the deployment of trustworthy AI systems.
The State of Connecticut's policy on Artificial Intelligence (AI) Responsible Use establishes a comprehensive framework for the ethical utilization of artificial intelligence in the Connecticut state government.
The South Dakota Bureau of Information and Telecommunications (BIT) designed guidelines for the responsible use of AI-generated content in state government agencies, emphasizing the need for proofreading, editing, and fact-checking, and for treating AI-generated content as a starting point rather than the finished product.