Massachusetts Institute of Technology
Founded in 1861, MIT advances knowledge through education, research, and innovation across science, engineering, and the humanities. Its AI safety-relevant work is concentrated in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), which houses the FutureTech Research Group studying AI trends and risks, the Algorithmic Alignment Group working on safe and trustworthy AI, and the MIT AI Risk Repository — a comprehensive living database of AI risks. The MIT Schwarzman College of Computing further addresses the social, ethical, and policy dimensions of AI development.
Funding Details
- Annual Budget: $4,782,700,000
- Monthly Burn Rate: $398,558,333
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
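The listed monthly burn rate appears to be simply the annual budget spread evenly across twelve months. A minimal sketch of that arithmetic (the even-split assumption is ours, not stated on the page):

```python
# Assumption: monthly burn rate = annual budget / 12, truncated to whole dollars.
annual_budget = 4_782_700_000  # USD, from the Funding Details above

monthly_burn = annual_budget // 12  # integer division reproduces the listed figure
print(f"${monthly_burn:,}")  # $398,558,333
```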
Theory of Change
MIT's approach to reducing AI risk operates through multiple channels: producing foundational technical research on AI safety, alignment, and interpretability; training the next generation of AI researchers with safety awareness; building public resources such as the AI Risk Repository to enable coordinated risk identification and management; and informing policy through rigorous academic work that reaches government and industry decision-makers. By embedding safety considerations into mainstream AI research at one of the world's most influential technical institutions, MIT aims to shift norms and practices across the global AI research community.
Grants Received
- Five grants from Open Philanthropy (amounts not listed)
Projects
Max Tegmark's AI safety research group at MIT, focused on mechanistic interpretability, physics-informed machine learning, and frameworks for guaranteed safe AI.
Details
- Last Updated: Apr 2, 2026, 9:53 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC