Project

Responsible Generative AI: Accountable Technical Oversight

Generative AI is at a tipping point of adoption and impact, much as machine learning and AI were years ago. But this time is different: the stakes, the regulatory environment, the potential social, political, and economic effects of these technologies, and the exponential pace of adoption and evolution have all changed. Now is the time to ask: How should regulators and companies enable meaningful transparency from generative AI technologies to ensure accountability to society?

Drawing on the Center and community’s years of work on the governance of AI technologies, the Berkman Klein Center is exploring mechanisms for enabling accountable technical oversight of generative AI. Critical topics to be explored include: new developments in harms and their impacts, the balance between transparency and security in open research, and how to enable meaningful technical oversight within the nascent regulatory landscape. This work will surface and synthesize key themes and questions that regulators and independent technical auditors should understand and be prepared to address.

Why now?

First, these new technologies have enormous power to both magnify existing harms and create new ones in ways we cannot yet imagine. For example, generative AI can help produce effective materials for propaganda campaigns at far greater scale, speed, and polish than was previously possible. Models also often return confidently incorrect answers, even citing “hallucinated” papers or other reference materials that do not exist, without empirical guardrails. There is legitimate concern about the erosion of trust and democratic discourse as models are incorporated into search engines, customer support, and other critical tools used by the public.

Yet, these technologies are not without potential. There’s a promising early focus on improving accessibility technologies by using image-to-text generation. The potential for personalized education could be transformative. And the use of AI in improving design efficiencies could help sustainability efforts. So how can we capture these benefits while mitigating harm? What governance systems are needed? What technical capabilities need to be developed for those governance systems to ensure the technology is accountable to societal needs? How can we protect both the end users of these AI systems and the workers tasked with making these systems safe?

Second, the pace of growth and evolution of these models is rapid. In releasing new models, companies are blurring the line between ‘research’ and ‘product’ while struggling to balance transparency and security. Given that these models are trained on public data, there are calls to operate them both openly and as a public resource. Yet open-source models, while aspirational, may weaken the ability to curb malicious actors. To enable responsible transparency, models of ‘open research’ must contend with concerns about malicious use.

Third, lawmakers are grappling with regulating these systems in a nascent AI regulatory environment. Many emerging and proposed regulations rely on technical auditing and data access as key transparency mechanisms to ensure accountability. Yet, for these mechanisms to be effective accountability tools, we need a clearer understanding of what (if anything) has changed with the leaps forward in generative AI, where the tradeoffs lie between transparency and security, and what technical capabilities are needed outside of industry to implement effective technical oversight.

The Berkman Klein Center is generating new insights, convening experts, and engaging policymakers with evidence-based solutions in the midst of this rapid advancement.


Our Work

News
May 31, 2023

Exploring the Impacts of Generative AI on the Future of Teaching and Learning

In December 2022, researchers from BKC, OpenAI, and Khan Academy, along with other experts, gathered to discuss the impacts of generative AI on teaching and learning.

Event
May 22, 2023 @ 12:45 PM

Enabling Accountable Technical Oversight of Generative AI

Join us for a virtual conversation with Julia Angwin, Brandon Silverman, and Rumman Chowdhury for the third in a series on accountable technical oversight of generative AI

Join Julia Angwin, investigative journalist and New York Times contributing Opinion writer, and Brandon Silverman, policy expert on data sharing and transparency and founder of…

May 15, 2023 @ 1:00 PM

Balancing Transparency and Security in Open Research on Generative AI

Join us for a virtual conversation with Sue Hendrickson and Bruce Schneier for the second in a series on accountable technical oversight of generative AI

Join Bruce Schneier, security researcher and affiliate at the Berkman Klein Center for Internet & Society at Harvard University, for a conversation on balancing security and…

May 8, 2023 @ 1:00 PM

How is Generative AI changing the landscape of AI harms?

Join us for a virtual conversation with Dr. Rumman Chowdhury and Reva Schwartz for the first in a series on accountable technical oversight of generative AI

Join us for the first in a series of virtual conversations on accountable technical oversight of generative AI. On Monday, May 8 from 1-1:30 ET, Dr. Rumman Chowdhury and Reva…


People



Related Projects & Tools

Artificial Intelligence Initiative

Tackling short- and long-term questions related to the social impact, governance, and ethical implications of artificial intelligence.