Topic: Automation + AI
-
Algorithmic Transparency Recording Standard Hub
This hub introduces the UK government's Algorithmic Transparency Recording Standard (ATRS), a structured framework for public sector bodies to disclose how they use algorithmic tools in decision-making.
-
A list of open LLMs
A curated list of large language models (LLMs) released under licenses that permit commercial use (e.g., Apache 2.0, MIT, OpenRAIL-M).
-
City of Boston Interim Guidelines for Using Generative AI
Interim guidelines for the use of generative AI in the City of Boston, MA.
-
State of New York Acceptable Use of Artificial Intelligence Technologies
The New York State Office of Information Technology established guidelines for the acceptable and responsible use of Artificial Intelligence technologies by state entities.
-
Emerging Legal and Policy Trends in State and Federal AI Governance at BenCon 2024
BenCon 2024 explored state and federal AI governance, highlighting the rapid increase in AI-related legislation and executive orders. Panelists emphasized the importance of experimentation, learning, and collaboration between government levels, teams, agencies, and external partners.
-
POMs and Circumstance at Policy2Code Demo Day at BenCon 2024
The team explored using LLMs to translate the Program Operations Manual System (POMS) into plain-language logic models and flowcharts that serve as educational resources for SSI and SSDI eligibility, and benchmarked LLMs using retrieval-augmented generation (RAG) methods for reliability in answering queries and providing useful instructions to users.
-
Using open-source LLMs to optimize government data
Companies have been developing and using artificial intelligence (AI) for decades, but adoption has grown exponentially since OpenAI released ChatGPT, a chatbot built on a large language model (LLM), in late 2022. Open-source versions of these tools can help agencies optimize their processes and go beyond current levels of data analysis, all within a secure environment that reduces the risk of exposing sensitive information.
-
Artifice and Intelligence
This essay explains why the Center on Privacy & Technology has chosen to stop using terms like "artificial intelligence," "AI," and "machine learning," arguing that such language obscures human accountability and overstates the capabilities of these technologies.
-
Framework for State, Local, Tribal, and Territorial Use of Artificial Intelligence for Public Benefit Administration
This framework outlines USDA’s principles and approach to support States, localities, Tribes, and territories in responsibly using AI in the implementation and administration of USDA’s nutrition benefits and services. This framework is in response to Section 7.2(b)(ii) of Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
-
Regulating Biometrics: Taking Stock of a Rapidly Changing Landscape
This post reflects on and excerpts from AI Now's 2020 report on biometrics regulation.
-
NIST AI Risk Management Framework (RMF 1.0)
NIST has created a voluntary AI risk management framework, developed in collaboration with the public and private sectors, to promote trustworthy AI development and use.
-
Disability, Bias, and AI
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.