Jennifer Lin
Jennifer Lin is an independent researcher working on technical AI safety and forecasting. She gained recognition in the effective altruism community for her critical review of Ajeya Cotra's biological anchors report on AI timelines, which won $20,000 in the EA Criticism and Red Teaming Contest in 2022. Her work spans AI timeline forecasting, interpretability, and LLM capabilities evaluation. In 2024, she received a $70,000 grant from Open Philanthropy to produce a publicly available report investigating whether large language models are capable of model-based planning.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: —
- Funding Raised to Date: $90,000
- Fiscal Sponsor: —
Theory of Change
Lin's work focuses on improving the quality of reasoning about AI risk by critically evaluating existing forecasting methodologies and empirically assessing LLM capabilities. By producing rigorous, publicly available analyses of AI timelines and capabilities (such as whether LLMs can do model-based planning), she aims to give researchers and funders better epistemic foundations for decision-making about AI development. Her work on transparency argues that interpretability research is a key lever for AI safety, as it could enable verification of AI system behavior before deployment and help detect misalignment.
Grants Received
- $70,000 from Open Philanthropy (2024)
Details
- Last Updated: Apr 2, 2026, 9:51 PM UTC
- Created: Mar 20, 2026, 2:34 AM UTC