Founded in 1831, New York University (NYU) is one of the largest private universities in the United States, with more than 60,000 students and campuses around the world. NYU is relevant to the AI safety field primarily through the NYU Alignment Research Group (ARG), which conducts empirical research on language models aimed at longer-term AI safety concerns, and the Center for Responsible AI (R/AI), which focuses on fairness, transparency, and accountability in AI systems. In 2024, NYU also launched the Global AI Frontier Lab in partnership with South Korea's IITP; the lab includes a Trustworthy and Responsible AI research pillar led by Yann LeCun and Kyunghyun Cho.
Funding Details
- Annual Budget
- $4,349,000,000
- Monthly Burn Rate
- $362,416,667
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- -
- Fiscal Sponsor
- -
Theory of Change
NYU contributes to AI safety primarily through academic research and training. The NYU Alignment Research Group operates on the theory that empirical research on large language models — studying their capabilities, failure modes, and potential for misalignment — is necessary groundwork for ensuring advanced AI systems remain beneficial. By producing and publishing alignment-relevant research and training PhD students who go on to work at AI safety organizations (as evidenced by alumni at METR, OpenAI, UK AI Safety Institute, and Anthropic), NYU builds the technical knowledge base and human capital needed to address catastrophic AI risk. The Center for Responsible AI operates on a complementary theory that establishing norms, tools, and public literacy around fair and accountable AI reduces near-term harms and shapes a broader culture of responsible development.
Grants Received
- From Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 10:08 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC