This report analyzes the growing use of generative AI, particularly large language models, in enabling and scaling fraudulent activities, exploring the evolving tactics, risks, and potential countermeasures.
A unified taxonomy and tooling suite that consolidates AI risks across frameworks and links them to datasets, benchmarks, and mitigation strategies to support practical AI governance.
The A11y Jam opening session that introduces the April 2026 DOJ digital accessibility requirements and provides practical, plain-language guidance for government teams preparing for compliance.
A TLDR of the State CDO Archetypes report—covering how state CDO offices operate and the six archetypes that define them. Written for event attendees and government staff: governor's office, IT and budget leadership, legal and data officials, and legislators who oversee CDO funding and establishment.
DSN Spotlights are short-form project profiles that feature exciting work happening across our network of digital government practitioners. Spotlights celebrate our members’ stories, lift up actionable takeaways for other practitioners, and put the examples we host in the Digital Government Hub in context.
With a federal accessibility deadline approaching, the Beeck Center convened government leaders to turn compliance pressure into lasting digital inclusion.
The second event in the Digital Service Network’s summer event series, Let’s Get Digital, focused on the City of Boston’s transformative journey to streamline its procurement processes.
This report explores how, despite unresolved concerns, an audit-centered algorithmic accountability approach is being rapidly mainstreamed into voluntary frameworks and regulations.
Through a field scan, this paper identifies emerging best practices as well as methods and tools that are becoming commonplace, and enumerates common barriers to leveraging algorithmic audits as effective accountability mechanisms.
ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT)
This paper argues that a human rights framework could help orient artificial intelligence research away from machines and the risks of their biases, and toward humans and the risks to their rights, centering the conversation on who is harmed, what harms they face, and how those harms may be mitigated.
A recording of GOVChats, hosted by GTA's Digital Services Georgia, in which speakers dive into the artificial intelligence (AI) programs and initiatives unfolding across the states of Georgia, Maryland, and Vermont.