PIBBSS Fellowship
About
The PIBBSS Fellowship is a flagship program of Principles of Intelligence (originally named PIBBSS, short for Principles of Intelligent Behavior in Biological and Social Systems), a nonprofit founded in 2022 by Nora Ammann, TJ, and Anna Gajdova. The fellowship was launched to address a talent and epistemic gap in AI safety: the field lacked sufficient engagement from researchers who study intelligence as it appears in natural and social systems. By recruiting PhD and postdoctoral researchers from disciplines such as computational neuroscience, complex systems, evolutionary biology, ecology, philosophy, linguistics, and political theory, the program attempts to bring new conceptual tools and research perspectives to bear on alignment problems.

Each cohort runs for approximately three months during the summer, typically June through early September. Fellows first take part in an 8-week pre-program reading group on AI risks, held remotely in April and May; the in-person program then comprises a multi-day opening retreat, shared office-based research, regular external speaker events, and a closing symposium. Fellows receive a $4,000 per month stipend plus a $1,000 per month housing allowance, with travel costs covered. The program is centrally aimed at PhD students and postdoctoral researchers, though applicants with comparable research experience are encouraged to apply regardless of formal credentials.

Since the inaugural 2022 cohort, which was based in Oxford and Prague, the fellowship has run iterations in San Francisco and London (including at the LISA facility in 2024). Cohorts have hosted roughly 12-20 fellows each year, and by 2025 the program reported having supported around 50 researchers across cohorts. Alumni have gone on to positions at Anthropic, Epoch AI, the UK AI Security Institute, and academia, and have founded organizations such as Simplex.
Starting in 2025, the fellowship introduced dedicated thematic tracks: a Cooperative AI track (supported by the Cooperative AI Foundation) focused on multi-agent risks, and a Gradual Disempowerment track (supported by ACS Research) examining systemic existential risks and human agency preservation. The fellowship is funded by Open Philanthropy, the Survival and Flourishing Fund, the Long-Term Future Fund, the Cooperative AI Foundation, the Foresight Institute, and the AI Safety Tactical Opportunities Fund. The parent organization, Principles of Intelligence, also runs an Affiliate Program providing 6-12 month research stipends, a Research Residency in London, and a Horizon Scanning initiative.
Theory of Change
The fellowship operates on the premise that AI safety requires foundational scientific advances that current technical AI safety research alone cannot provide. By recruiting researchers from fields that study complex and intelligent behavior in biological and social systems, PIBBSS aims to import conceptual frameworks, methodologies, and empirical findings from those fields into AI alignment. Each fellow works on a project bridging their home discipline and a specific alignment problem, mentored by experienced alignment researchers. Over time, fellows are expected to transition into full-time AI safety careers, expanding the field's talent base and diversifying its intellectual toolkit. The cumulative effect — more researchers, more interdisciplinary approaches, and more foundational scientific work — is intended to increase the likelihood of solving alignment before advanced AI systems pose catastrophic risks.
Details
- Start Date: -
- End Date: -
- Expected Duration: -
- Funding Raised to Date: -
- Last Updated: Apr 3, 2026, 1:19 AM UTC
- Created: Apr 3, 2026, 1:19 AM UTC