Enabling Principles for AI Governance
A report from the Center for Security and Emerging Technology (CSET) discussing enabling principles for artificial intelligence (AI) governance.
The Center for Security and Emerging Technology (CSET) defines principles to support U.S. policymakers in approaching the governance of artificial intelligence (AI).
For future AI governance efforts to prove most effective, CSET offers three principles for U.S. policymakers to follow. These thematic principles are drawn from across CSET's wide body of original, in-depth research, as well as granular findings and specific recommendations on different aspects of AI, which are cited throughout the report. They are:
- Know the terrain of AI risk and harm: Use incident tracking and horizon-scanning across industry, academia, and government to understand the extent of AI risks and harms; gather supporting data to inform governance efforts and manage risk.
- Prepare humans to capitalize on AI: Develop AI literacy among policymakers and the public so they can recognize AI opportunities, risks, and harms and employ AI applications effectively, responsibly, and lawfully.
- Preserve adaptability and agility: Develop policies that can be updated and adapted as AI evolves, avoiding onerous regulations or regulations that become obsolete with technological progress; ensure that legislation does not allow incumbent AI firms to crowd out new competitors through regulatory capture.
These principles are interlinked and self-reinforcing: continually updating the understanding of the AI landscape will help lawmakers remain agile and responsive to the latest advancements, and inform evolving risk calculations and consensus.