This report on the use of Generative AI in State government presents an initial analysis of its potential benefits to individuals, communities, government, and State government workers, while also exploring potential risks.
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.
Center for Security and Emerging Technology (CSET)
The State of California government published guidelines for the safe and effective use of Generative Artificial Intelligence (GenAI) within state agencies, in accordance with Governor Newsom's Executive Order N-12-23 on Generative Artificial Intelligence.
Companies have been developing and using artificial intelligence (AI) for decades, but adoption has grown rapidly since OpenAI released its large language model (LLM)-based chatbot, ChatGPT, in 2022. Open-source versions of these tools can help agencies optimize their processes and go beyond current levels of data analysis, all in a secure environment that won't risk exposing sensitive information.
A collection of AI resources for public sector professionals on responsible AI use, including a course showcasing real-world applications of generative AI in public sector organizations.
A comprehensive series of workshops and courses designed to equip public sector professionals with the knowledge and skills to responsibly integrate AI technologies into government operations.
A report that defines what effective “human oversight” of AI looks like in public benefits delivery and offers practical guidance for ensuring accountability, equity, and trust in algorithmic systems.