Daniel Dewey is an independent AI safety researcher. From approximately 2017 to 2022 he was Open Philanthropy's program officer for potential risks from advanced AI, leading grantmaking that distributed tens of millions of dollars to AI safety research organizations. Before that, he was an Alexander Tamas Research Fellow on Machine Superintelligence at the Future of Humanity Institute at Oxford. He now pursues independent research focused on documenting global risks from deep learning and on writing that helps researchers understand how to contribute to the AI alignment field.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $350,000
- Fiscal Sponsor: -
Theory of Change
Dewey believes that the most important lever for reducing existential risk from advanced AI is building a professional field of AI alignment researchers, strategists, and governance experts before transformative AI arrives. During his time at Open Philanthropy, this meant funding research organizations and fellowships to grow the field. In his independent work, this means producing accessible writing and analysis that helps new researchers understand how to contribute to AI alignment, with an emphasis on empirical safety methodologies over speculative theoretical approaches. He also sees international coordination and hardware verification systems as important long-term strategies for governing advanced AI development.
Grants Received
- from Open Philanthropy
- from Open Philanthropy
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 9:52 PM UTC
- Created
- Mar 20, 2026, 2:34 AM UTC