BKC x AI Student Safety Team Announce Event Series
Check out upcoming events and previous speakers
BKC and the AI Student Safety Team (AISST) are collaborating to present a speaker series featuring a lineup of experts in AI governance. Through one-hour talks with interactive Q&A sessions, you’ll have the opportunity to engage in meaningful dialogue with peers and explore career options in AI governance. Don’t miss out on this student-led initiative and the unique opportunity to connect with AI leaders.
Regular Sessions: November 7-19
US Tort Liability for Large-Scale AI Harms: A Panel with Dr. Ketan Ramakrishnan (Yale Law), Greg Smith (RAND), and Conor Downey (RAND)
November 7th at WCC
Join us for the latest installment in the AI governance speaker series, co-sponsored by the Berkman Klein Center and the AI Student Safety Team! In this panel, Greg Smith, Ketan Ramakrishnan, and Conor Downey will discuss different theories of tort liability for AI developers when their models cause large-scale harms through malfunctions or misuse. You'll have a chance to discuss AI policy with the expert speakers. Check the BKC events page for all events in this series. Thursday, November 7th, from 12:15-1:15 pm, in WCC B010 Singer Classroom - more info and RSVP here. Lunch will be provided.
Technology and Security Policy: How Frontier AI Affects International Tech Competition and Risks (Fireside Chat with Dr. Jeff Alstott)
November 11th at WCC
Advanced AI systems are pushing the boundaries of what’s possible in technology. This fireside chat will explore how AI will change international technological competition, reshape the international security landscape, and potentially introduce new risks on a global scale. We will discuss policy implications for AI governance and how governments can navigate the complex dynamics among AI development, national security, and global competition. Monday, November 11th, from 12:15-1:15 pm, in WCC 1015 Singer Classroom - more info and RSVP here. Lunch will be provided.
The Evolving Discourse of Open-Source AI with Dr. Elizabeth Seger
November 15th at BKC
Dr. Elizabeth Seger explores the rapidly evolving landscape of open-source AI, tracing how the discourse has shifted dramatically in recent years. This talk will highlight emerging areas of consensus, persistent tensions, and critical open questions that continue to shape the debate around AI openness. Join us for a nuanced exploration of the complex trade-offs between innovation, safety, and accessibility in AI development. Friday, November 15th, from 12:15-1:15 pm ET in Lewis International Law Center (5th Floor) - more info and RSVP here. Lunch will be provided.
The Law and Economics of AI Agents with Dr. Noam Kolt
November 19th at WCC
AI developers are increasingly working to build autonomous AI agents that can plan and execute complex tasks with only limited human involvement. While existing legal and economic theories offer insight into the challenges presented by this technology, new approaches are needed. Capturing the benefits of AI agents and mitigating the associated risks will require diverse methodologies for rigorously studying the technology and designing appropriate governance infrastructure. Tuesday, November 19th, from 12:15-1:15 pm, in WCC - more info and RSVP here. Lunch will be provided.
Launch Day: November 1st
On November 1st, over 75 students attended 6 sessions with renowned guests from industry, government, and research.
Enabling Principles for AI Governance with Kendrea Beers
November 1st at BKC
CSET researchers have identified three principles for effective AI governance: policymakers should 1) monitor the landscape of AI risk, 2) promote AI literacy among themselves and the public, and 3) develop policies that can adapt to rapid technological changes. Kendrea will unpack these principles and share insights on their implementation, highlighting topics such as sociotechnical AI safety and privacy-preserving external scrutiny of AI systems. Event is on Friday, November 1st, from 10:00-11:00 am in Lewis International Law Center (5th Floor) - more info and RSVP here. Breakfast will be provided.
Understanding the Problems and Stakes in AI Governance with Julia Bossmann
November 1st at BKC
The decisions we make about AI governance today will have lasting consequences. Through a talk followed by interactive discussion, we will examine what's at stake for our future and what stands in the way of effective governance of increasingly powerful AI systems. With AI capabilities advancing rapidly, there is both urgency and opportunity in addressing these challenges. Event on Friday, November 1st, from 11:00 am-12:00 pm in Lewis International Law Center (5th Floor).
Governing AI Agents with Alan Chan
November 1st at BKC
Many companies are building AI agents: AI systems that can behave autonomously with little human input. Agents could be extremely useful, but could also present novel risks. How should we think about governing AI agents and managing the risks they pose? Can existing frameworks be adapted to AI agents, and what technical tools and policies will we need to ensure that the rise of AI agents goes smoothly? This talk will discuss some of these issues and present the state of research on AI agent governance. Event is on Friday, November 1st, from 12:15-1:15 pm ET in Lewis International Law Center (5th Floor) - more info and RSVP here. Lunch will be provided.
An Overview of Federal Legislative Efforts in AI Policy with Jason Green-Lowe
November 1st at BKC
Jason will provide an overview of the categories of AI-related threats that Congress has been discussing (e.g., deepfakes, supply chain bottlenecks, bias, privacy, intellectual property theft, weapons of mass destruction, and takeover risk). For each category, he will describe how far along the current legislative efforts are (a loose framework, a draft bill, committee markup, etc.), along with some brief speculation about whether, when, and how those bills might move forward. Jason will conclude with brief observations on why Congress needs to address AI takeover risk and why takeover risk is amenable to legislative solutions. In-person event on Friday, November 1st, from 1:30-2:30 pm in Lewis International Law Center (5th Floor) - more info and RSVP here. Snacks will be provided.
One Year of Anthropic's Responsible Scaling Policy with Zac Hatfield-Dodds
November 1st at BKC
In September 2023, Anthropic released our Responsible Scaling Policy (RSP), a public commitment not to train or deploy models capable of causing catastrophic harm unless we have implemented safety and security measures that will keep risks below acceptable levels. This October, we updated it to account for the lessons we’ve learned - reflecting our view that risk governance in this rapidly evolving domain should be proportional, iterative, and exportable. After a short talk introducing and explaining the RSP, we'll turn to a fireside chat and take questions from the audience. Event on Friday, November 1st, from 2:30-3:30 pm in Lewis International Law Center (5th Floor) - more info and RSVP here. Snacks will be provided.
Understanding Frontier AI Systems and the Role of the UK AI Safety Institute with Kwamina Orleans-Pobee
November 1st at BKC
Since the 2023 AI Safety Summit in Bletchley, the UK has helped set the global agenda for frontier AI safety. In this talk, join the UK AI Safety Institute's Head of Engineering, Kwamina Orleans-Pobee, for a discussion of the UK AISI's research, the state of AI risk management in the UK and across the world, and how governments can stay informed of the rapid developments in advanced AI systems. Event on Friday, November 1st, from 3:30-4:30 pm, in Lewis International Law Center (5th Floor) - more info and RSVP here. Snacks will be provided.