The Oxford AI Safety Initiative (OAISI) is an Oxford-based organisation supporting both technical and governance AI safety work. Founded in Summer 2024 as a spin-off from the OxAI Safety and Governance team, OAISI runs recurring programmes including ARBOx (an intensive ML safety bootcamp), the AI Strategy Series, Governance Roundtables, Technical Roundtables, and research labs that pair students with experienced supervisors. OAISI treats AI safety as a sociotechnical issue and targets two audiences: existing Oxford AI safety researchers seeking community and productivity support, and capable Oxford students not yet working in the field.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: Jericho Alignment Infrastructure
Theory of Change
OAISI operates on two core premises: first, that the AI safety community at Oxford would benefit from dedicated organisational support to increase researcher productivity and the quality of technical and governance work happening there; and second, that many highly capable Oxford students and researchers have not yet been introduced to AI safety as a field or career path. By running structured skill-building programmes (technical bootcamps, governance fellowships, reading groups) and community infrastructure (roundtables, office hours, socials), OAISI aims to grow the pipeline of talent entering AI safety roles and improve the output of those already working in the field. The causal chain is: attract and upskill Oxford talent -> more people entering technical and governance AI safety careers -> increased research capacity at top safety organisations -> reduced catastrophic risk from advanced AI.
Grants Received
- Grant from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Apr 2, 2026, 9:53 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC