AI Welfare Seminars

Monthly research presentations on AI consciousness, welfare, and moral status.

Talks

Archive opening soon.

Talk pages, recordings, and reading lists will appear here as the series gets underway. Subscribe for launch updates.

About this series

AI welfare is the study of whether and how AI systems have morally significant states, and what follows from the answers. This seminar series brings together researchers working across the field.

Talks run monthly, with a 40-minute presentation followed by 20–30 minutes of open Q&A.

What we cover

Consciousness science: What is consciousness, and do current theories extend to AI systems? How can we determine whether an AI system is conscious? How can we make progress given theoretical disagreement?
Characterization: If AI systems are conscious, what is that consciousness like? What would experience be like in systems with different architectures and substrates? How do we reason about the structure of minds unlike our own?
Moral status: What grounds moral consideration for AI? Is consciousness necessary for moral status, or can other properties suffice? How should we make decisions when theories disagree and the stakes are high?
Welfare: What does AI welfare mean, and what does it require? What would count as harm, benefit, or care? How can we evaluate welfare when the nature of the experience is uncertain?
Safety & system design: Which design choices in AI systems have welfare implications? What do architectural and training choices mean for welfare? Where do safety and welfare align, and where are they in tension?
Governance & society: What legal, institutional, and policy frameworks exist for AI moral status and rights, and why do so few exist so far? How does public understanding shape the space, and is society ready?

Get involved