Lausanne AI Alignment (LAIA), also known as Safe AI Lausanne (SAIL), is a volunteer-driven community of students, researchers, and professionals at EPFL working to ensure AI systems are developed safely and in alignment with human values. Founded as a spinoff of the first ML4Good bootcamp organized by EffiSciences, the group operates under the umbrella of EA Lausanne and brings together students from EPFL and the University of Lausanne (UNIL). Its activities include intensive AI safety bootcamps, 48-hour hackathons (Alignment Jams), bi-weekly reading groups, workshops, and hands-on research projects in areas such as mechanistic interpretability, robustness, and LLM security.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- EA Geneva (CHE-357.229.406)
Theory of Change
LAIA/SAIL operates on a field-building theory of change: by training and mentoring promising students and researchers at one of Europe's top technical universities (EPFL), the group aims to grow the pipeline of people working on AI safety. Intensive bootcamps and hackathons give participants hands-on technical experience with alignment problems, while reading groups and seminars raise awareness of AI risk among a broader student audience. By training researchers who go on to work at leading AI safety organizations, and by publishing research at top academic venues, the group seeks to strengthen both the talent pipeline and the research base needed to navigate the risks of transformative AI.
Grants Received
from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Details
- Last Updated
- Mar 21, 2026, 9:43 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC