AI Alignment Forum
The AI Alignment Forum (alignmentforum.org) is the field's primary online venue for technical AI safety research discussion. Launched in October 2018 by the LessWrong team (now Lightcone Infrastructure, a Berkeley-based 501(c)(3)), it replaced MIRI's earlier Intelligent Agent Foundations Forum and was designed to lower barriers for new researchers while maintaining high-quality discourse among established experts. All forum content cross-posts to LessWrong.com, and the two platforms share a codebase and team.
Funding Details
- Annual Budget: $1,700,000
- Monthly Burn Rate: not recorded
- Current Runway: not recorded
- Funding Goal: $3,000,000
- Funding Raised to Date: not recorded
- Fiscal Sponsor: not recorded
Theory of Change
The AI Alignment Forum aims to reduce existential risk from advanced AI by improving coordination and knowledge-sharing among alignment researchers globally. Its theory of change is that solving alignment requires large-scale coordination across many researchers and organizations pursuing different approaches; by providing a high-signal, curated venue for cutting-edge ideas, the forum accelerates progress on technical alignment research. It does this by lowering coordination costs, enabling researchers to build on each other's work, and onboarding new talent into the field. The underlying premise is that better epistemic infrastructure translates into a faster, higher-quality alignment research ecosystem.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 10:54 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC