Topic: Automation + AI
-
Design Patterns Catalogue
A catalogue to help teams design trustworthy services that work for people. Categories include informing decisions, signing in to services, giving and removing consent, and doing security checks.
-
A list of open LLMs
These LLMs (Large Language Models) are all licensed for commercial use (e.g., Apache 2.0, MIT, OpenRAIL-M).
-
What’s in a name? A survey of strong regulatory definitions of automated decision-making systems
The Electronic Privacy Information Center (EPIC) emphasizes the necessity of adopting broad regulatory definitions for automated decision-making systems (ADS) to ensure comprehensive oversight and protection against potential harms.
-
What Are Generative AI, Large Language Models, and Foundation Models?
What exactly are the differences between generative AI, large language models, and foundation models? This post aims to clarify what each of these three terms means, how they overlap, and how they differ.
-
The Privacy-Bias Trade-Off
Safeguarding privacy and addressing algorithmic bias can pose an under-recognized trade-off. This brief documents that trade-off by examining the U.S. government’s recent efforts to introduce government-wide equity assessments of federal programs. The authors propose a range of policy solutions that would enable agencies to navigate the privacy-bias trade-off.
-
The Ohio Benefits Program is “BOT” In
This award documentation from the National Association of State Chief Information Officers (NASCIO) explains how agencies in Ohio used automation to support the administration of public benefits programs.
-
Saving Face: Investigating the Ethical Concerns of Facial Recognition Auditing
This paper explores design considerations and ethical tensions related to the auditing of commercial facial processing technology.
-
Looking before we leap: Exploring AI and data science ethics review process
This report explores the role that academic and corporate Research Ethics Committees play in evaluating AI and data science research for ethical issues, and investigates the common challenges these bodies face.
-
Large Language Models (LLMs): An Explainer
In this blog post, CSET’s Natural Language Processing (NLP) Engineer, James Dunham, helps explain LLMs in plain English.
-
Disability, Bias, and AI
This report explores key questions that a focus on disability raises for the project of understanding the social implications of AI, and for ensuring that AI technologies don’t reproduce and extend histories of marginalization.
-
Democratizing AI: Principles for Meaningful Public Participation
In this policy brief and video, Michele Gilman summarizes evidence-based recommendations for better structuring public participation processes for AI, and underscores the urgency of enacting them.
-
Controlling Large Language Models: A Primer
Concerns over risks from generative artificial intelligence systems have increased significantly over the past year, driven in large part by the advent of increasingly capable large language models. But how do AI developers attempt to control the outputs of these models? This primer outlines four commonly used techniques and explains why this objective is so challenging.