Grantmaking.ai
Summary Database

Organizations

5050

5050 is a free 12-14 week company-builder program run by Fifty Years that helps scientists, researchers, and engineers become deep-tech startup founders, with a dedicated AI safety track.

https://www.fiftyyears.com/5050/ai
San Francisco, CA · established

80,000 Hours

80,000 Hours is a nonprofit that provides free research, career advice, and a job board to help people find careers that effectively tackle the world's most pressing problems, with a current focus on AI safety.

https://80000hours.org/
London, United Kingdom · mature · Team: 50

AAAI/ACM Conference on Artificial Intelligence, Ethics and Society

AIES is a peer-reviewed academic conference series jointly organized by AAAI and ACM that brings together a multidisciplinary community to examine the ethical, social, and policy dimensions of artificial intelligence.

https://www.aies-conference.com/
mature

ACM FAccT

The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) is a premier peer-reviewed academic conference that brings together researchers and practitioners to investigate fairness, accountability, and transparency in socio-technical systems.

https://facctconference.org/
established

ACX Atlanta

ACX Atlanta (The Atlanta Moloch Slayers) is a monthly in-person meetup group for rationalists and readers of the Slate Star Codex and Astral Codex Ten blogs in Atlanta, Georgia.

https://acxatlanta.com/
Atlanta, Georgia, USA · seed · Team: 1

Adam Jermyn

Adam Jermyn is a physicist and AI safety researcher at Anthropic, working on neural network interpretability and inner alignment. He previously conducted independent AI alignment research after transitioning from a career in computational astrophysics.

https://adamjermyn.com/
Boston, MA, USA · established · Team: 1

Advanced Research + Invention Agency (ARIA)

ARIA is a UK government research funding agency that backs high-risk, high-reward R&D in underexplored areas, including a major £59 million programme on formal mathematical safety guarantees for AI systems.

https://aria.org.uk/
London, United Kingdom · mature · Team: 53

AE Studio

AE Studio is a bootstrapped technology studio and AI alignment research organization that funds neglected safety research from its software consulting profits. Its work spans brain-computer interfaces, self-other overlap fine-tuning to reduce LLM deception, and consciousness research.

https://www.ae.studio/
Venice, Los Angeles, CA · established · Team: 150

Aether

Aether is an independent research lab focused on LLM agent safety, conducting technical research on the alignment, control, and evaluation of large language model agents.

https://aether-ai-research.org/
Toronto, Canada · seed · Team: 3

Agent Foundations Field Network

AFFINE (Agent Foundations FIeld NEtwork) runs intensive superintelligence alignment seminars and fellowships to upskill promising newcomers in agent foundations and AI alignment research.

https://www.affi.ne/
Hostacov, Czechia · early · Actively fundraising · Team: 3

AGI Inherent Non-Safety

A research project developing non-maximizing, aspiration-based designs for AI agents that avoid objective function maximization, arguing that such optimization is inherently unsafe in sufficiently capable AGI systems.

https://pik-gane.github.io/satisfia/
Potsdam, Germany · early · Team: 15

AI & Democracy Foundation

The AI & Democracy Foundation accelerates innovation, evaluation, and adoption of deliberative, democratic, human-centered governance and alignment systems for and with AI, serving as both a nonprofit funder and advisor to philanthropic organizations, AI companies, civil society, and governments.

https://aidemocracyfoundation.org/
San Francisco, CA, USA · early · Team: 11

AI Alignment Awards

AI Alignment Awards is a prize contest program that awards up to $100,000 for novel research progress on core AI alignment problems. It is a project of the Players Philanthropy Fund, funded by Open Philanthropy.

https://www.alignmentawards.com/
California, USA · winding-down

AI Alignment Forum

A curated online hub for researchers to discuss technical AI alignment research, operated by Lightcone Infrastructure. It serves as the primary venue for sharing and coordinating cutting-edge alignment ideas across organizations including MIRI, OpenAI, DeepMind, CHAI, and others.

https://www.alignmentforum.org/
Berkeley, California · established · Actively fundraising · Team: 8

AI Alignment Foundation (AIAF)

A 501(c)(3) nonprofit that funds, accelerates, and advocates for AI alignment research by providing engineering teams, compute, and infrastructure to researchers pursuing neglected approaches.

https://www.aialignmentfoundation.org/
Marina Del Rey, CA · seed

AI Alignment Slack

A large community Slack workspace for AI safety researchers, practitioners, and enthusiasts to connect, collaborate, and discuss alignment-related topics in real time.

https://ai-alignment.slack.com/

AI Digest

AI Digest creates interactive explainers and demos to help policymakers and the public understand AI capabilities and their effects, operated as a project of Sage Future, a US 501(c)(3) charity.

https://theaidigest.org/
United States (remote) · early · Team: 4

AI Explained

AI Explained is a London-based YouTube channel by a creator known as Philip that provides hype-free coverage of AI developments, capabilities, and safety topics for a general audience.

https://www.youtube.com/@AIExplainedYT
London, United Kingdom · established · Team: 1

AI Forensics

A European non-profit that investigates influential and opaque algorithms, holding major tech platforms accountable through independent technical audits and free software auditing tools.

https://aiforensics.org/
Brighton, United Kingdom · established · Team: 16

AI Futures Project

A nonprofit research organization that develops detailed scenario forecasts of advanced AI trajectories to inform policymakers, researchers, and the public.

https://www.aifutures.org/
San Francisco Bay Area, USA · early · Team: 5

AI Governance & Safety Canada (AIGS Canada)

AIGS Canada is a nonpartisan Canadian not-for-profit working to ensure that advanced AI is safe and beneficial for all, by catalysing Canadian leadership in AI governance and safety.

https://aigs.ca/
Ottawa, Ontario, Canada · early · Team: 8

AI Governance and Safety Institute (AIGSI)

A small nonprofit conducting outreach, education, and advocacy to improve institutional responses to existential risk from advanced AI. Led by Mikhail Samin and based in London.

https://aigsi.org/
London, United Kingdom · seed · Team: 1

AI Impacts

A research project that investigates decision-relevant questions about the future of artificial intelligence, including AI timelines, expert forecasts, and the potential societal impacts of advanced AI systems.

https://aiimpacts.org/
Berkeley, California · established

AI Lab Watch

A project that tracks and evaluates frontier AI companies on their safety practices through a weighted scorecard, focusing on actions labs should take to avert extreme risks from advanced AI.

https://ailabwatch.org/
winding-down · Team: 1

AI Objectives Institute

A nonprofit R&D lab working to ensure that AI and future economic systems are built and deployed with genuine human objectives at their core, through research, open-source tools, and broad public input.

https://ai.objectives.institute/
San Francisco, CA · established · Team: 15

AI Policy Bulletin

AI Policy Bulletin is a peer-reviewed digital magazine publishing policy-relevant perspectives on frontier AI governance, aimed at informing policymakers and the broader AI policy community.

https://www.aipolicybulletin.org/
early

AI Policy Institute

A research and advocacy nonprofit that conducts public opinion polling on AI risks and advocates for government policies to mitigate catastrophic risks from frontier AI technology.

https://theaipi.org/
New York, NY · early

AI Prospects

AI Prospects is a Substack publication by K. Eric Drexler exploring how advanced AI will transform society and what strategic options humanity has for navigating this transition safely.

https://aiprospects.substack.com/
Oxford, United Kingdom · seed · Team: 1

AI Risk Explorer (AIRE)

AI Risk Explorer (AIRE) is an online platform that monitors large-scale AI risks across cyber offense, biological risk, loss of control, and manipulation, providing curated evidence and actionable insights for policymakers and researchers.

https://www.airiskexplorer.com/
Madrid, Spain · seed · Team: 7

AI Risk Mitigation Fund

A nonprofit grantmaking fund that supports technical AI safety research, AI governance policy, and training programs for new AI safety researchers to reduce catastrophic risks from advanced AI.

https://www.airiskfund.com/
early · Team: 8

AI Risk: Why Care?

An interactive public education tool that explains AI existential risk to general audiences using a personalized AI chatbot, operated by the AI Governance and Safety Institute (AIGSI) and AI Safety and Governance Fund (AISGF).

https://whycare.aisgf.us/
London, United Kingdom · seed

AI Safety Argentina

AI Safety Argentina (AISAR) is a 6-month research scholarship program based at the University of Buenos Aires that connects Argentine students with mentors to conduct AI safety research.

https://scholarship.aisafety.ar/en/
Buenos Aires, Argentina · seed · Team: 2

AI Safety Asia (AISA)

A global non-profit building AI safety governance capacity across Asia through policy research, training, and multi-stakeholder dialogue, starting in Southeast Asia.

https://www.aisafety.asia/
Manila, Philippines · early

AI Safety at the Frontier

A monthly newsletter curating and summarizing the most important AI safety research papers focused on frontier models, written by Johannes Gasteiger of Anthropic's Alignment Science team.

https://aisafetyfrontier.substack.com/
Team: 1

AI Safety Australia and New Zealand

AI Safety ANZ builds and supports a community of AI safety researchers and advocates across Australia and New Zealand, empowering careers and local field-building to mitigate catastrophic AI risks.

https://www.aisafetyanz.com.au/
Melbourne, Australia · early · Team: 4

AI Safety Awareness Project

A 501(c)(3) nonprofit that educates the American public and traditional societal institutions about AI safety through free in-person workshops nationwide.

https://aisafetyawarenessproject.org/
Seattle, WA · early · Team: 5

AI Safety Camp

A non-profit initiative that runs an online, part-time research program connecting early-career researchers with experienced AI safety mentors to collaborate on concrete projects aimed at reducing existential risk from AI.

https://www.aisafety.camp/
established · Actively fundraising · Team: 4

AI Safety Communications Centre

The AI Safety Communications Centre (AISCC) connects journalists to AI safety experts and resources, helping improve media coverage of AI risks and safety issues.

https://aiscc.org/
United Kingdom · early

AI Safety Events & Training

Weekly newsletter listing newly announced AI safety events and training programs, both online and in-person.

https://aisafetyeventsandtraining.substack.com/
early

AI Safety for Fleshy Humans

An interactive educational web series by Nicky Case explaining AI safety concepts to general audiences through accessible comics and interactive explainers.

https://aisafety.dance/
early · Team: 1

AI Safety Foundation

A Canadian registered charity that increases public and scientific awareness of AI's catastrophic risks through education and research.

https://www.aisfoundation.ai/
Toronto, Ontario, Canada · early · Actively fundraising

AI Safety Funding

A newsletter listing newly announced funding opportunities for individuals and organizations working to reduce existential risk from AI.

https://aisafetyfunding.substack.com/
early

AI Safety Hub

AI Safety Hub was a UK-based field-building organization that ran the Safety Labs research programme, matching early-career researchers with experienced AI safety mentors to produce publishable research.

https://www.aisafetyhub.org/
Oxford, United Kingdom · winding-down · Team: 2

AI Safety Hungary

AI Safety Hungary is a Budapest-based nonprofit that runs educational programs and career support to help Hungarian students and professionals enter the AI safety field.

https://www.aishungary.com/
Budapest, Hungary · early · Team: 4

AI Safety in China

A bi-weekly newsletter by Concordia AI covering technical AI safety research, governance, and policy developments in China, aimed at bridging the knowledge gap between China's AI safety ecosystem and the global community.

https://aisafetychina.substack.com/
Beijing, China · established · Team: 12

AI Safety Initiative at Georgia Tech (AISI)

AISI is a student-led community at Georgia Tech working to ensure AI is developed safely, running fellowships, research projects, and policy programs across technical and governance tracks.

https://www.aisi.dev/
Atlanta, Georgia, US · seed · Team: 24

AI Safety Map Anki Deck

An Anki flashcard deck of 167 cards covering the main organizations, projects, and programs in the AI safety ecosystem, designed for learning via spaced repetition.

https://ankiweb.net/shared/info/1103716634
seed · Team: 1

AI Safety Nudge Competition

A one-time behavioral nudge initiative run in October 2022 that used a prize draw to encourage people to complete self-defined AI safety goals and overcome procrastination.

Australia · winding-down · Team: 2

AI Safety Quest

AI Safety Quest is a fully volunteer-based organization that helps people navigate the AI safety ecosystem through personalized advising calls, cohort learning, and mentorship matching.

https://aisafety.quest/
seed

AI Safety Support

AI Safety Support was a community-building project that reduced existential risk from AI by providing career resources, networking, mentorship, and operational support to early-career, independent, and transitioning AI safety researchers.

https://www.aisafetysupport.org/
Sydney, Australia · winding-down · Team: 2

AI Safety Tactical Opportunities Fund (AISTOF)

A pooled multi-donor charitable fund that rapidly deploys grants to reduce catastrophic risks from advanced AI, covering technical alignment, governance, and evaluations.

https://manifoldmarkets.notion.site/AI-Safety-Tactical-Opportunities-Fund-AISTOF-1bf54492ea7a80fcb088fd431b6b10b4
established · Team: 1

AI Safety Takes

A personal Substack newsletter by AI safety researcher Daniel Paleka covering recent AI safety research papers and technical developments.

https://newsletter.danielpaleka.com/
Zurich, Switzerland · pre-seed · Team: 1

AI Safety Videos

A curated resource page listing where to find AI safety video content, maintained by the AISafety.info project (Stampy's AI Safety Info), founded by Rob Miles.

https://aisafety.info/questions/2222
early

AI Scholarships

A scholarship program through which Open Philanthropy provided direct funding support to individual AI safety researchers for tuition, living expenses, and related costs during their degree programs.


AI Standards Lab

An independent nonprofit and affiliated research company dedicated to accelerating the development of AI safety standards and risk management frameworks, with a focus on EU AI Act standards and global AI safety engineering.

https://aistandardslab.org/
Virtual (global team); Holtman Systems Research based in Eindhoven, Netherlands · early · Team: 9

AI Timeline

An open-source interactive timeline of major AI events from the 2020s, documenting the road to AGI. No longer actively maintained.

https://ai-timeline.org/
winding-down · Team: 1

AI Watch

A database and website maintained by Issa Rice that tracks people, organizations, and products in the AI safety and alignment field.

https://aiwatch.issarice.com/
Bothell, Washington, USA · established · Team: 1

AI X-risk Research Podcast (AXRP)

AXRP is a podcast hosted by Daniel Filan featuring in-depth interviews with AI safety researchers about their published work and how it might reduce the risk of AI causing an existential catastrophe.

https://axrp.net/
Berkeley, California, USA · established · Team: 4

AI-Plans

AI-Plans is a platform for discovering, critiquing, and advancing AI alignment strategies, hosting a contributable compendium of alignment plans and running community research events.

https://ai-plans.com/
Chelmsford, UK · pre-seed · Team: 5

AI: Futures and Responsibility Programme

A collaborative research programme between the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk at the University of Cambridge, focused on the global risks, governance, and long-term safety of advanced AI.

https://www.ai-far.org/
Cambridge, United Kingdom · established · Team: 14

AI2050

AI2050 is a philanthropic initiative of Schmidt Sciences that funds exceptional researchers worldwide working on the hard problems required for AI to be hugely beneficial to society by 2050.

https://ai2050.schmidtsciences.org/
New York, NY · mature

AISafety.com

AISafety.com is a curated resource hub for the AI safety ecosystem, providing newcomers and practitioners with organized directories of courses, communities, events, jobs, funders, and more. It is the flagship platform of Alignment Ecosystem Development (AED), led by Søren Elverlin.

https://www.aisafety.com/
Copenhagen, Denmark · early

AISafety.info

A comprehensive, community-written interactive FAQ about AI existential safety, founded by Rob Miles and hosted at aisafety.info.

https://aisafety.info/
Remote (global team) · early

AISafety.info (Robert Miles)

AI safety education through YouTube videos and an interactive FAQ website (aisafety.info), making alignment concepts accessible to broad audiences.

https://aisafety.info/
San Francisco, CA · early

Algorithmic Research Group

An AI safety research lab studying how software and industrial systems recursively improve themselves, building benchmarks and evaluation frameworks to understand the behavior and limits of self-improving AI systems.

https://www.algorithmicresearchgroup.com/
Durham, NC, United States · pre-seed · Team: 2

Ali Merali

Ali Merali is an Economics PhD candidate at Yale University researching how AI model scaling affects real-world economic productivity. He received Open Philanthropy funding to run randomized controlled trials estimating the economic impact of LLM scale.

https://economics.yale.edu/people/ali-merali
London, United Kingdom · seed · Team: 1

Aligned AI

Oxford-based AI safety company developing concept extrapolation technology to enable AI systems to generalize human values and intent beyond their training data.

https://buildaligned.ai/
Thame, Oxfordshire, England, UK · seed · Team: 5

Alignment Ecosystem Development

An AI safety field-building nonprofit that builds and maintains digital infrastructure to grow and improve the AI safety ecosystem, including AISafety.com, AISafety.info, and approximately 15 other projects.

https://alignment.dev/
early · Team: 3

Alignment of Complex Systems Research Group

An interdisciplinary research group based at Charles University in Prague studying multi-agent systems composed of humans and advanced AI, focused on understanding and mitigating systemic risks from AI integration into human institutions.

https://acsresearch.org/
Prague, Czech Republic · early · Team: 5

Alignment Research Center

A nonprofit research organization focused on theoretical AI alignment research, developing formal mechanistic explanations of neural network behavior to ensure future ML systems are aligned with human interests.

https://www.alignment.org/
Berkeley, CA · established · Team: 9

Alignment Research Engineer Accelerator

ARENA is a 4-5 week intensive ML engineering bootcamp in London that trains technically skilled individuals to contribute to AI safety research. It covers deep learning fundamentals, mechanistic interpretability, reinforcement learning, and model evaluations.

https://www.arena.education/
London, UK · established · Team: 9

All-Party Parliamentary Group for Future Generations

A cross-party group in the UK Parliament that works to make the welfare of future generations salient to policymakers, combating political short-termism on issues like catastrophic risks, climate change, and emerging technology.

https://www.appgfuturegenerations.com/
London, United Kingdom · early · Team: 2

Alliance to Feed the Earth in Disasters (ALLFED)

ALLFED is a nonprofit that researches and develops resilient food solutions to ensure humanity can be fed during global catastrophes such as nuclear winter, supervolcano eruptions, or events that disable critical infrastructure.

https://allfed.info/
Lafayette, CO, USA · established · Team: 26

Americans for Responsible Innovation

Americans for Responsible Innovation (ARI) is a bipartisan 501(c)(4) nonprofit that advocates for thoughtful AI governance frameworks in the United States. It works to help policymakers develop policies that protect the public from AI-related harms while maintaining American technological leadership.

https://ari.us/
Washington, DC · early · Team: 37

Amodo Design

A Sheffield-based hardware engineering consultancy focused on differential technology development across AI safety, biosecurity, humane tech, and accelerating science.

https://amododesign.com/
Sheffield, England, UK · early · Team: 31

Amplifying AI Safety

An AI safety project fiscally sponsored by Epistea, z.s., a Czech umbrella organization for existential security and epistemics projects based in Prague.

Prague, Czech Republic · early

An Overview of the AI Safety Funding Situation

A research article by Stephen McAleese providing a comprehensive overview of the AI safety funding landscape, published on the EA Forum and LessWrong.

https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation

Andrew Lohn

Andrew Lohn is a Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he leads the CyberAI Project examining the intersection of artificial intelligence and cybersecurity. His research focuses on how AI shifts the cyber offense-defense balance and the security vulnerabilities inherent in AI systems.

https://cset.georgetown.edu/staff/andrew-lohn/
Washington, DC · established · Team: 1

Angela Aristizábal

Angela Aristizábal is a Colombian researcher and Program Director of the ITAM AI Futures Fellowship, focused on building research capacity in Latin America around catastrophic and existential risks from advanced AI.

https://aifuturesfellowship.org/
Mexico City, Mexico · seed · Team: 3

Anthropic

Anthropic is an AI safety company and public benefit corporation building reliable, interpretable, and steerable AI systems, best known for developing the Claude family of large language models.

https://www.anthropic.com/
San Francisco, CA · mature · Team: 3000

Apart Research

An independent AI safety research organization that accelerates AI safety talent development and produces impactful research through hackathons, structured fellowships, and collaborative research programs.

https://apartresearch.com/
Copenhagen, Denmark · early · Team: 8

Apollo Fellowship

An EA-aligned summer debate camp at Oxford University for high school and first-year college students, combining competitive debate training with Effective Altruism concepts including AI safety and global catastrophic risk.

https://www.apollofellowship.com/
Oxford, United Kingdom · winding-down

Apollo Research

Apollo Research is an AI safety organization that develops evaluations and tools to detect and mitigate deceptive alignment (scheming) in frontier AI systems.

https://www.apolloresearch.ai/
London, United Kingdom · seed · Team: 24

Applied Research Laboratory for Intelligence and Security

ARLIS is the University of Maryland's Department of Defense University Affiliated Research Center (UARC) dedicated to intelligence and national security, combining AI, behavioral science, and systems engineering to address complex security challenges.

https://www.arlis.umd.edu/
College Park, Maryland · mature · Team: 240

Arb Research

Arb Research is a small research consultancy producing rigorous, independent analysis on AI safety, forecasting, and related topics for funders and organizations in the effective altruism ecosystem.

https://arbresearch.com/
established · Team: 9

Arbital

Arbital was a hybrid blogging and wiki platform designed to make complex explanations of AI alignment and mathematics more accessible, founded by Alexei Andreev and Eliezer Yudkowsky. The project was shut down in 2017 and its content was later migrated to LessWrong.

https://arbital.greaterwrong.com/explore/ai_alignment/
winding-down · Team: 4

Arcadia Impact

Arcadia Impact is a London-based nonprofit that empowers individuals to pursue high-impact careers tackling global challenges, with a focus on AI safety research, governance, and talent development.

https://www.arcadiaimpact.org/
London, United Kingdom · established · Team: 7

Arizona State University

Arizona State University is a major public research university and one of the largest in the United States, with significant programs in AI governance, responsible innovation, and governance of emerging technologies.

https://www.asu.edu/
Tempe, Arizona, USA · mature

Arkose

Arkose was an AI safety field-building nonprofit that supported experienced machine learning professionals to become involved in technical AI safety research through personalized advisory calls, curated resources, and expert introductions. The organization closed in June 2025 due to lack of funding.

https://arkose.org/
Berkeley, California, US · winding-down · Team: 3

Ashgro

Ashgro is a 501(c)(3) public charity that provides Model A fiscal sponsorship to AI safety projects, handling their accounting, HR, legal compliance, and grant management so project leads can focus on their mission.

https://www.ashgro.org/
Wilmington, DE · established · Team: 3

Association for Long Term Existence and Resilience (ALTER)

An Israeli academic research and advocacy nonprofit focused on reducing catastrophic and existential risks through AI safety research, biosecurity policy, and standards development.

https://alter.org.il/
Israel · early · Team: 4

Astera Neuro & AGI

Astera's Neuro & AGI program is an in-house research effort that draws on neuroscience to develop safe and aligned artificial general intelligence, operating under the Astera Institute founded by Jed McCaleb.

https://astera.org/neuro-agi/
Berkeley, CA · established

Astral Codex Ten (ACX)

Astral Codex Ten is Scott Alexander's Substack blog covering reasoning, science, AI, medicine, ethics, and effective altruism, and the home of the ACX Grants program that funds high-impact projects.

https://www.astralcodexten.com/
San Francisco Bay Area, CA · established · Team: 1

Astralis Foundation

A European multi-donor foundation that seeds and scales high-impact initiatives for the secure and beneficial development of AI. Astralis unites funders, experts, and entrepreneurs to steer AI toward beneficial outcomes through grantmaking, strategic guidance, and network-building.

https://astralisfoundation.org/
Stockholm, Sweden / London, UK · early · Team: 6

Athena Mentorship Program for Women

Athena is a hybrid mentorship program for women in technical AI alignment research, combining remote mentorship with an in-person retreat to build skills, networks, and representation in the field.

https://researchathena.org/
Remote (retreats held in Oxford, UK) · early

Atlas Computing

Atlas Computing is a 501(c)(3) nonprofit that maps neglected AI safety risks, sources expert founders, and prototypes solutions to scale human control over advanced AI capabilities.

https://atlascomputing.org/
San Francisco, CA · early · Team: 3

Augur

An AI research consultancy providing foresight and strategy across the frontier AI supply chain, focusing on hardware and software supply chains, strategic AI use cases, and control and ownership of AI systems.

https://augurai.net/
Washington, DC, United States · seed · Team: 1

Balsa Policy Institute Inc

A nonpartisan 501(c)(3) nonprofit think tank that funds academic research, drafts legislation, and builds the evidence base for neglected federal policy reforms, with a primary focus on repealing the Jones Act.

https://www.balsaresearch.com/
New York, NY · early · Actively fundraising · Team: 3

Basis Research Institute

A nonprofit applied research organization building universal reasoning engines grounded in probabilistic programming and causal inference to advance society's ability to solve intractable scientific and societal problems.

https://www.basis.ai/
New York, NY · established · Team: 28

Beijing Institute of AI Safety and Governance (Beijing-AISI)

Beijing-AISI is a Beijing municipal government-backed research institute dedicated to AI safety evaluations, governance frameworks, and safety standards for large language models and AI systems.

https://beijing.ai-safety-and-governance.institute/
Beijing, China · established

Beneficial AI Foundation (BAIF)

A US nonprofit founded by Max Tegmark and Meia Chita-Tegmark to place AI safety on a solid quantitative foundation. BAIF funds research, fellowships, and university partnerships aimed at ensuring advanced AI systems remain safe and beneficial.

https://www.beneficialaifoundation.org/
Wilmington, DE (registered); Cambridge, MA (operations) · early · Team: 14

Berkeley Center for Responsible, Decentralized Intelligence

UC Berkeley's multidisciplinary research center advancing AI safety, agentic AI, and decentralization technology to empower a responsible digital economy.

https://rdi.berkeley.edu/
Berkeley, CA · established · Team: 23

Berkeley Existential Risk Initiative

A US-based public charity that collaborates with university research groups working to reduce existential risk by providing them with free operational services and support.

https://www.existence.org/
Covina, CA · established · Team: 4

Berryville Institute of Machine Learning

BIML is an independent nonprofit research institute focused on machine learning security, specifically the work of building security into ML systems at the design level.

https://berryvilleiml.com/
Berryville, Virginia, USA · early · Team: 4

BlueDot Impact

BlueDot Impact is a nonprofit talent accelerator that runs free cohort-based courses to train professionals in AI safety, AI governance, and biosecurity. It is the leading pipeline for building the workforce needed to safely navigate transformative AI.

https://bluedot.org/
London, UK · established · Team: 7

Boston Astral Codex Ten

A local rationalist community meetup group in the Boston area organized around Scott Alexander's Astral Codex Ten blog. The group hosts informal social gatherings and occasional structured discussions in Cambridge and Somerville.

https://linktr.ee/bostonacx
Cambridge, MA, US · pre-seed

Boston University

Boston University is a large private research university in Boston, Massachusetts with over 37,000 students, 17 schools and colleges, and more than $554 million in annual research expenditures. It hosts AI safety and alignment student programs and has received Open Philanthropy funding for AI safety-relevant research.

https://www.bu.edu/
Boston, Massachusetts · mature · Team: 10674

Bounded Regret

Bounded Regret is the personal research blog of Jacob Steinhardt, Associate Professor at UC Berkeley, covering AI safety, machine learning, forecasting, and philosophy.

https://bounded-regret.ghost.io/
Berkeley, California, USA · Team: 1

Brian Christian

Brian Christian is an American author and researcher whose books — including The Alignment Problem (2020) — have helped communicate AI safety and alignment challenges to broad audiences. He is also pursuing a DPhil in psychology at Oxford, researching human preferences to inform AI alignment.

https://brianchristian.org/
San Francisco, CA / UK · established

Brown University AI Governance Lab

A research center at Brown University focused on AI governance, policy, and socially responsible computing, housed within the Center for Technological Responsibility, Reimagination and Redesign (CNTR) at the Data Science Institute.

https://cntr.brown.edu/
Providence, Rhode Island, United States · early · Team: 15

Cadenza Labs

A SERI MATS research team that received joint LTFF funding in 2023 to investigate dishonesty detection in advanced AI systems, building on the Discovering Latent Knowledge paper. The team went on to co-found Cadenza Labs, an AI safety research group focused on interpretability and LLM lie detection.

https://cadenzalabs.org/
Europe (Prague / London / Germany) · seed · Team: 4

Cambridge AI Safety Hub

A Cambridge-based hub bringing together students and professionals to reduce existential risks from advanced AI systems through education, research mentorship, and community-building.

https://caish.org/
Cambridge, England, UK · early

Cambridge Boston Alignment Initiative

CBAI is a Cambridge, MA-based 501(c)(3) nonprofit that runs research fellowships and technical bootcamps to grow the pipeline of AI safety researchers, and fiscally sponsors student AI safety groups at Harvard and MIT.

https://www.cbai.ai/
Cambridge, Massachusetts · early · Team: 7

Cambridge Effective Altruism

Cambridge Effective Altruism is a community group at the University of Cambridge that helps students and local residents explore how to have the most positive impact through their careers and charitable giving. It runs fellowships, discussion groups, and career support programs, and was the seedbed for BlueDot Impact.

https://www.eacambridge.org/
Cambridge, UK · established · Team: 2

Campaign for AI Safety (CAS)

An Australian grassroots advocacy organization founded in 2023 to increase public understanding of AI existential risk and push for strong laws to halt dangerous AI development. It merged with the Existential Risk Observatory in 2024.

https://campaignforaisafety.org/
Australia · winding-down

Can We Secure AI With Formal Methods?

A newsletter by Quinn Dougherty that bridges formal methods researchers and AI security practitioners, covering developments in formal verification applied to AI safety.

https://newsletter.for-all.dev/
Berkeley, CA · seed · Team: 1

Carnegie Endowment for International Peace

A major Washington, DC-based think tank founded in 1910 that produces independent policy research on international security, democracy, and governance, with a growing program on AI safety and technology governance.

https://carnegieendowment.org/
Washington, DC · mature · Team: 300

Carnegie Mellon University

Carnegie Mellon University is a leading private research university in Pittsburgh, Pennsylvania, widely regarded as one of the world's top institutions for AI and computer science research. It hosts multiple AI safety and governance programs spanning technical research, policy, and applied AI security.

https://www.cmu.edu/
Pittsburgh, Pennsylvania, USA · mature · Team: 8000

Catalyze Impact

A global nonprofit incubator that helps founders launch and scale AI safety, security, and resilience organizations by providing mentorship, co-founder matching, and access to seed funding networks.

https://catalyze-impact.org/
Colorado, United States · early · Team: 4

Catherine Brewer

AI governance researcher and grantmaker working on AI safety capacity-building, previously funded by Open Philanthropy to support Oxford's AI safety community.

https://catherinebrewer.github.io/
United Kingdom · seed · Team: 1

Cavendish Labs

A 501(c)(3) nonprofit research organization in Cavendish, Vermont focused on AI safety and pandemic prevention, operating as a residential research community where researchers live and work together.

https://cavendishlabs.org/
Cavendish, Vermont, USA · early

Center for a New American Security

CNAS is a Washington, DC-based bipartisan think tank that develops national security and defense policy, with a dedicated Technology & National Security program focused on AI, compute governance, and great power competition.

https://www.cnas.org/
Washington, DC · mature · Team: 87

Center for AI Policy

A nonpartisan advocacy organization that worked with the US Congress to develop and promote legislation addressing catastrophic risks from advanced AI systems.

https://www.centeraipolicy.org/
Washington, DC · winding-down · Team: 10

Center for AI Risk Management & Alignment (CARMA)

CARMA is a research and policy think tank working to lower the risks to humanity and the biosphere from transformative AI through integrated risk management, policy research, and technical safety work.

https://carma.org/
Berkeley, CA, US (virtual-first) · early · Team: 5

Center for AI Safety

A nonprofit research organization that works to reduce societal-scale risks from artificial intelligence through safety research, field-building, and advocacy.

https://safe.ai/
San Francisco, California · established · Team: 25

Center for AI Safety Action Fund

The 501(c)(4) advocacy arm of the Center for AI Safety, dedicated to advancing bipartisan public policies that maintain U.S. leadership in AI and protect against AI-related national security threats.

https://action.safe.ai/
San Francisco, California · early · Team: 2

Center for AI Standards and Innovation (CAISI)

CAISI is the U.S. government's primary point of contact for AI testing and research within NIST, focused on developing voluntary AI standards and conducting evaluations of frontier AI systems. It was renamed from the U.S. AI Safety Institute in June 2025.

https://www.nist.gov/caisi
Gaithersburg, MD, USA · mature

Center for Applied Rationality

A nonprofit that runs immersive workshops teaching rationality techniques drawn from cognitive science, behavioral economics, and decision theory, with a focus on improving thinking for people working on high-impact problems including AI safety.

https://www.rationality.org/
Berkeley, California · established · Actively fundraising · Team: 8

Center for Applied Utilitarianism

A London-based AI strategy think tank led by Dr. Hauke Hillebrandt, conducting independent research on AI policy, AI governance, and global catastrophic risks.

London, United Kingdom · early · Team: 1

Center for Human-Compatible AI

A research center at UC Berkeley dedicated to developing the foundations of provably beneficial AI systems, ensuring that advanced AI remains aligned with human values and preferences.

https://humancompatible.ai/
Berkeley, California, USA · established · Team: 27

Center for Humane Technology

A nonprofit dedicated to ensuring that today's most consequential technologies, including AI and social media, actually serve humanity by exposing misaligned incentives and advocating for systemic change through policy, litigation, and public awareness.

https://www.humanetech.com/
San Francisco, California, United States · established · Team: 20

Center for International Security and Cooperation

Stanford University's interdisciplinary research center tackling critical security challenges, including AI governance, nuclear risk, biosecurity, and emerging technology policy.

https://cisac.fsi.stanford.edu/
Stanford, CA · mature · Team: 62

Center for Law and AI Risk

CLAIR is building the field of Law and AI Safety, producing and promoting legal scholarship on reducing catastrophic and existential risks from advanced artificial intelligence.

https://clair-ai.org/
early · Team: 3

Center for Long-Term Cybersecurity

UC Berkeley's Center for Long-Term Cybersecurity (CLTC) is a research and collaboration hub advancing future-oriented cybersecurity research, policy, and education, with a growing focus on AI safety governance and risk management for frontier AI systems.

https://cltc.berkeley.edu/
Berkeley, CA · mature

Center for Responsible Innovation

A Washington, DC-based 501(c)(3) nonprofit that conducts AI policy research, develops actionable legislative proposals, and educates U.S. policymakers on responsible innovation. It is the research and education arm of the Americans for Responsible Innovation family of organizations.

https://www.centerforresponsibleinnovation.us/
Washington, DC · early

Center for Security and Emerging Technology

Georgetown University think tank providing decision-makers with data-driven analysis on the security implications of emerging technologies.

https://cset.georgetown.edu/
Washington, DC · mature · Team: 62

Center for Strategic and International Studies

CSIS is a major Washington, DC-based bipartisan think tank that conducts policy research on national security, international affairs, and emerging technologies including AI. Its Wadhwani AI Center focuses specifically on the governance, geopolitics, and national security implications of artificial intelligence.

https://www.csis.org/
Washington, DC · mature · Team: 206

Center on Long-Term Risk

A research organization focused on reducing risks of astronomical suffering (s-risks) from advanced AI, with emphasis on conflict prevention and cooperation between transformative AI systems.

https://longtermrisk.org/
London, United Kingdom · established · Actively fundraising · Team: 9

Centre for AI Security and Access

CASA is a research organization working to ensure the benefits of AI can be widely and equitably distributed globally without compromising essential security, with a focus on Global Majority countries.

https://casa-ai.org/
Brussels, Belgium · seed · Actively fundraising · Team: 2

Centre for Effective Altruism (CEA)

CEA builds and supports the global effective altruism community through conferences, online platforms, local group support, grantmaking, and community health programs, helping people use evidence and reason to address the world's most pressing problems.

https://www.centreforeffectivealtruism.org/
Oxford, UK · mature · Team: 66

Centre for Enabling EA Learning & Research

CEEALAR (formerly the EA Hotel) is a residential fellowship in Blackpool, UK that provides free or subsidized accommodation, meals, and stipends to individuals working on effective altruism projects, with a focus on AI safety research.

https://www.ceealar.org/
Blackpool, United Kingdom · established · Actively fundraising · Team: 2

Centre for Future Generations (CFG)

CFG is an independent think-and-do tank based in Brussels that helps policymakers anticipate and govern powerful emerging technologies including advanced AI, biotechnology, climate interventions, and neurotechnology.

https://cfg.eu/
Brussels, Belgium · established

Centre for International Governance Innovation

CIGI is an independent, non-partisan Canadian think tank that produces research and policy recommendations on international governance challenges, with a dedicated program focused on managing global-scale risks from advanced AI systems.

https://www.cigionline.org/
Waterloo, Ontario, Canada · mature · Team: 168

Centre for Long-Term Resilience (CLTR)

A UK-based independent think tank working to transform global resilience to extreme risks, particularly in AI safety and biosecurity.

https://www.longtermresilience.org/
London, United Kingdom · established · Team: 19

Centre for the Governance of AI

GovAI is an independent nonprofit research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI, by producing rigorous research on AI governance and fostering talent in the field.

https://www.governance.ai/
London, United Kingdom · established · Team: 54

Centre for the Study of Existential Risk

An interdisciplinary research centre at the University of Cambridge dedicated to studying and mitigating existential and global catastrophic risks, with major focus areas in AI safety, biological risks, and environmental risks.

https://www.cser.ac.uk/
Cambridge, United Kingdom · established

CeSIA

The French Center for AI Safety (Centre pour la Sécurité de l'IA) is a Paris-based nonprofit think tank and research center working to reduce risks from artificial intelligence through education, technical research, and policy advocacy in France and Europe.

https://www.cesia.org/en
Paris, France · early · Team: 11

China AI Safety & Development Association (CnAISDA)

China's self-described counterpart to the AI Safety Institutes of other countries, launched in February 2025 to represent China in international AI safety governance conversations. It operates as a networked coalition of eight leading Chinese research institutions rather than a standalone organization.

https://cnaisi.cn/
Beijing, China · early

ChinaTalk

ChinaTalk is a podcast and newsletter covering China, technology, and US policy, founded by Jordan Schneider. It serves as a hybrid think tank and media outlet providing non-partisan analysis on US-China relations and emerging technology.

https://www.chinatalk.media/
Washington, D.C. / San Francisco, CA · early · Team: 6

Civic AI Security Program (CivAI)

A nonprofit that educates policymakers, civil society, and the public about AI capabilities and dangers through interactive live software demonstrations.

https://civai.org/
Berkeley, CA · early · Team: 2

Coefficient Giving

Coefficient Giving (formerly Open Philanthropy) is a major philanthropic grantmaker that directs funding toward high-impact causes including AI safety, global health, biosecurity, and farm animal welfare. It is the primary grantmaking vehicle for Dustin Moskovitz and Cari Tuna's philanthropy through Good Ventures.

https://coefficientgiving.org/
San Francisco, CA · mature · Team: 150

Cold Takes

Cold Takes is Holden Karnofsky's personal blog covering AI safety, longtermism, and existential risk, most notably the 'Most Important Century' thesis arguing that transformative AI makes the 21st century uniquely pivotal for humanity's long-run trajectory.

https://www.cold-takes.com/
San Francisco, CA · mature · Team: 1

Collective Action for Existential Safety (CAES)

Collective Action for Existential Safety (CAES) catalyzes coordinated action to reduce existential risks from AI, nuclear weapons, and engineered pandemics. It is an initiative of the Center for Existential Safety, a newly-formed U.S. nonprofit.

https://existentialsafety.org/
San Francisco, CA · pre-seed · Team: 2

Collective Intelligence Project

A nonprofit R&D lab that develops collective intelligence tools and governance models to steer transformative AI development toward better outcomes through democratic public input.

https://www.cip.org/
San Francisco, CA, USA · established · Team: 11

Collider

Collider is a coworking and community space in New York City for AI safety and other high-impact professionals to work, collaborate, and convene.

https://collider.nyc/
New York, NY · early · Team: 3

Columbia University

Columbia University is an Ivy League research university in New York City with significant AI safety, governance, and policy research activity across multiple schools and centers.

https://www.columbia.edu/
New York City, New York, USA · mature

Compassion in Machine Learning

CaML researches how synthetic pretraining data can shift AI systems towards greater compassion and moral open-mindedness regarding all sentient beings, including animals and potential digital minds.

https://www.compassionml.com/
seed · Actively fundraising · Team: 6

Computational and Biological Learning Lab (CBL)

A research group at the University of Cambridge's Department of Engineering that uses engineering approaches to understand the brain and develop artificial learning systems, with strengths in Bayesian and probabilistic machine learning.

https://cbl.eng.cam.ac.uk/
Cambridge, United Kingdom · established · Team: 65

Computational Rational Agents Laboratory (CORAL)

A research group developing mathematical theory for computationally bounded agents to provide rigorous, scalable solutions to the AI alignment problem.

https://coral-research.org/
Israel · early · Team: 2

Conjecture

London-based for-profit AI safety company working on Cognitive Emulation, an approach to building controllable, bounded AI systems that reason transparently.

https://www.conjecture.dev/
London, UK · established · Team: 13

Consequence Foundries

An early-stage project in the existential risk reduction space that received a $168,000 general support grant from Jaan Tallinn via the Survival and Flourishing Fund in 2022, with Convergence Analysis serving as fiscal sponsor.

seed

Constellation

Constellation is a nonprofit research center in Berkeley that supports AI safety work through fellowships, an incubator, and a collaborative coworking space hosting researchers and organizations across the field.

https://www.constellation.org/
Berkeley, CA · established · Team: 23

Contramont Research

Contramont Research is a nonprofit AI safety lab that studies where safety and security evaluation methods break down, using cryptographic model organisms to expose fundamental limitations of existing techniques.

https://contramont.org/
Lexington, MA · seed

ControlAI

ControlAI is a nonprofit advocacy organization working to keep humanity in control of advanced AI by pushing governments to prohibit the development of artificial superintelligence.

https://controlai.com/
London, UK · early

Convergence Analysis

An international AI x-risk strategy think tank that conducts scenario research and governance analysis to mitigate risks from transformative AI technologies.

https://www.convergenceanalysis.org/
Sacramento, CA, USA · early · Team: 9

Cooperative AI Foundation

The Cooperative AI Foundation (CAIF) is a UK-registered charity that funds and supports research to improve the cooperative intelligence of advanced AI systems for the benefit of humanity.

https://www.cooperativeai.com/
Exeter, England, UK · established · Team: 6

Coordinal Research

Coordinal Research builds automation tools to accelerate AI safety and alignment research. The organization develops AI-powered scaffolds and workflows that help researchers conduct alignment experiments faster and at greater scale.

https://coordinal.org/
seed · Team: 2

Coordination Project

A small project fiscally sponsored by the Center for Applied Rationality (CFAR), funded by SFF for general support in the 2023-H2 grant round.

early

Cornell University

A private Ivy League research university in Ithaca, New York, with multiple faculty and labs engaged in AI safety, alignment, and responsible AI research, serving as the institutional home and fiscal recipient for SFF-funded work.

https://www.cornell.edu/
Ithaca, New York, United States · mature

Cyborgism

Cyborgism is an AI safety research agenda and community proposing that human-AI collaboration systems — where humans are cognitively augmented by LLMs rather than replaced by autonomous AI agents — can accelerate alignment research while preserving human control.

https://cyborgism.wiki/
seed

Czech Association for Effective Altruism (CZEA)

Czech national organization promoting effective altruism through community building, events, and project incubation, with a particular focus on AI safety and high-impact careers.

https://efektivni-altruismus.cz/
Prague, Czech Republic · winding-down · Team: 3

Daniel Dewey

Independent AI safety researcher and former Open Philanthropy program officer, focused on existential risks from advanced AI and deep learning.

https://www.danieldewey.net/
early · Team: 1

Daniel Kang

Assistant Professor at UIUC researching dangerous capabilities of AI agents, with a focus on cybersecurity benchmarks and AI safety evaluations used by frontier labs and governments.

https://ddkang.github.io/
Urbana-Champaign, IL, USA · established

Decode Research

An AI safety research infrastructure nonprofit that builds open-source tools and platforms to accelerate mechanistic interpretability research, including Neuronpedia and SAELens.

https://www.decoderesearch.org/
San Francisco, CA · early · Team: 4

DeepSeek

DeepSeek is a Chinese AI research laboratory founded in 2023 that develops frontier large language models, including the DeepSeek-V3 and DeepSeek-R1 series, notable for achieving competitive performance at dramatically lower reported compute costs.

https://www.deepseek.com/
Hangzhou, China · mature · Team: 160

Dioptra

Dioptra is a volunteer AI safety research community founded by Joshua Clymer that builds evaluations for advanced AI systems.

seed · Team: 21

Distill Prize for Clarity in Machine Learning

An annual award of $10,000 recognizing outstanding work communicating and clarifying ideas in machine learning. Logistics are administered by the Open Philanthropy Project.

https://distill.pub/prize/
winding-down

Don't Worry about the Vase

Don't Worry About the Vase is Zvi Mowshowitz's influential blog and Substack newsletter covering AI safety, AI developments, rationality, and policy, with over 32,000 subscribers.

https://thezvi.substack.com/
New York, NY · established · Team: 1

Donations List Website

A public database tracking philanthropic donations by individuals and foundations in the effective altruism and rationality communities. It is a personal project by Vipul Naik, hosted at donations.vipulnaik.com.

https://donations.vipulnaik.com/
Berkeley, California · early · Team: 1

Doom Debates

Doom Debates is a podcast and debate show hosted by Liron Shapira focused on high-stakes debates about AI existential risk. Its mission is to raise mainstream awareness of potential extinction from AGI and build social infrastructure for high-quality public discourse on the topic.

https://lironshapira.substack.com/
early · Actively fundraising · Team: 2

Dovetail

A small agent foundations research group using foundational mathematics to develop rigorous understanding of AI agents and their safety properties.

https://dovetailresearch.org/
Remote (US / UK) · early · Team: 6

Dr Waku

Dr Waku is a pseudonymous AI safety educator who creates YouTube videos, a Substack newsletter, and other content explaining AI alignment risks and AI security to general audiences.

https://drwaku.substack.com/
London, UK · pre-seed · Team: 1

Dwarkesh Podcast

A long-form interview podcast by Dwarkesh Patel featuring deeply researched conversations with leading AI researchers, scientists, historians, and economists on topics including AI safety, AGI timelines, and the future of technology.

https://www.dwarkesh.com/
San Francisco, CA · established · Team: 2

EA Infrastructure Fund

An expert-managed grantmaking fund that supports projects building the effective altruism community's capacity, including community building, prioritization research, epistemic infrastructure, events, and fundraising for effective charities.

https://funds.effectivealtruism.org/funds/ea-community
Oxford, United Kingdom · established · Actively fundraising

EA Netherlands

EA Netherlands (Effectief Altruïsme Nederland) is the national effective altruism community-building organization for the Netherlands, running introductory programs, supporting local groups, and hosting major EA events.

https://effectiefaltruisme.nl/en
Amsterdam, Netherlands · established · Team: 2

Earendil

Earendil is a hardware security startup that builds tamper response systems for AI compute infrastructure, including GPU clusters, to support hardware-enabled governance and compliance verification for AI development.

https://earendil.ai/
Tustin, California, United States · seed

Economics of Transformative AI

A research initiative at the University of Virginia, led by Professor Anton Korinek, that produces and disseminates cutting-edge economic research to help society navigate the transition to transformative AI and guide it toward shared prosperity.

https://www.econtai.org/
Charlottesville, Virginia, United States · early · Team: 3

Effective Altruism Domains

EA Domains (ea.domains) is a project that acquires and holds internet domain names relevant to effective altruism, AI safety, and existential risk, then offers them free to legitimate EA-aligned projects to prevent domain squatting.

https://ea.domains/
pre-seed

Effective Altruism Geneva

Effective Altruism Geneva is a Swiss nonprofit community group based in Geneva that builds a local network of effective altruists and fosters high-impact careers in AI safety, policy, and global health.

https://eageneva.org/
Geneva, Switzerland · established

Effective Altruism Israel

Effective Altruism Israel is a Tel Aviv-based nonprofit that builds and supports the Israeli effective altruism community, helping people maximize their social impact through career guidance, education, and effective giving programs.

https://www.effective-altruism.org.il/
Tel Aviv, Israel · established · Team: 7

Effective Institutions Project

A global working group that seeks out and incubates high-impact strategies to improve institutional decision-making, with a primary focus on AI governance and existential risk reduction.

https://effectiveinstitutionsproject.org/
Tarrytown, NY · early · Actively fundraising · Team: 7

Effective Thesis

A nonprofit that helps university students choose high-impact thesis topics and launch research careers focused on the world's most pressing problems, including AI safety, biosecurity, animal welfare, and global health.

https://www.effectivethesis.org/
Brno, Czech Republic · established · Team: 5

Effective Ventures Foundation

Effective Ventures Foundation (UK) is the umbrella charity that provided fiscal sponsorship and operational infrastructure for major effective altruism organizations including 80,000 Hours, Giving What We Can, and the Centre for Effective Altruism. It is currently winding down as its sponsored projects spin out to become independent entities.

https://ev.org/
Oxford, England, UK · winding-down · Team: 109

Effektiv Altruism Sverige (EA Sweden)

Effective Altruism Sweden is a Stockholm-based nonprofit that builds the Swedish effective altruism community through career coaching, fellowship programs, and project incubation. Founded in 2016, it is one of the most established national EA organizations globally.

https://www.effektivaltruism.org/
Stockholm, Sweden · established · Team: 4

Egg Syntax (Jesse Davis)

Independent AI safety and alignment researcher focused on technical research to reduce existential risk from advanced AI, particularly around LLM interpretability and the nature of LLM internal representations.

https://www.novonon.com/
Asheville, NC, USA · seed · Team: 1

Egor Krasheninnikov

AI safety researcher who worked at the Krueger AI Safety Lab at the University of Cambridge, focusing on training helpful AI systems and understanding out-of-context reasoning in large language models.

London, United Kingdom · seed · Team: 1

Eisenstat Research Directions

Sam Eisenstat's independent AI alignment research program, focused on mathematical foundations of agency, logical uncertainty, concept formation (condensation theory), and causal modeling at different levels of abstraction.

https://www.sameisenstat.net/
Berkeley, CA, USA · early · Team: 1

Electronic Frontier Foundation

EFF is the leading nonprofit defending civil liberties in the digital world, championing user privacy, free expression, and innovation through litigation, policy work, and technology development.

https://www.eff.org/
San Francisco, CA · mature · Actively fundraising · Team: 125

EleutherAI

EleutherAI is a nonprofit AI research institute focused on interpretability, alignment, and open-source foundation model research. It is best known for creating GPT-NeoX, the Pythia model suite, and The Pile dataset.

https://www.eleuther.ai/
Alexandria, VA, USA · established · Team: 15

ELLIS Institute Tübingen

Europe's first ELLIS Institute, based in Tübingen, Germany, conducting pioneering fundamental AI research with dedicated groups in AI safety, alignment, and robust machine learning.

https://institute-tue.ellis.eu/
Tübingen, Germany · established · Team: 130

Encode

Youth-led AI policy nonprofit that advances AI safety, governance, and accountability through nonpartisan legislative advocacy and public education, headquartered in Washington, DC.

https://encodeai.org/
Washington, DC · early · Team: 9

Epistea

A Prague-based nonprofit umbrella organization that creates, runs, and supports projects in existential security, epistemics, rationality, and effective altruism, providing fiscal sponsorship, operations infrastructure, and community spaces.

https://epistea.org/

Epistemic Garden

An R&D lab building tools to map how ideas spread online, helping communities understand their information landscape and defend against coordinated manipulation.

https://www.epistemic.garden/
Lisbon, Portugal · seed · Team: 3

Epoch AI

Epoch AI is a nonprofit research institute that tracks and forecasts the trajectory of artificial intelligence by analyzing trends in compute, data, algorithmic efficiency, and capabilities. It produces leading databases and quantitative models to help policymakers, researchers, and funders understand the pace and impact of AI progress.

https://epoch.ai/
San Francisco, CA · established · Actively fundraising · Team: 21

Equilibria Network

Equilibria Network is a collective intelligence research organization studying how coordination mechanisms affect group outcomes, with a focus on multi-agent AI safety and democratic resilience.

https://eq-network.org/
Uppsala, Sweden · early · Team: 3

EquiStamp

EquiStamp is a Public Benefit Corporation that provides evaluation implementation, data annotation, red/blue teaming, and operational support so AI safety researchers can focus on research rather than logistics.

https://www.equistamp.com/
Lewes, DE, United States · early · Team: 20

ERA

ERA (Existential Risk Alliance) is a Cambridge-based nonprofit running a fully funded annual fellowship to train researchers and entrepreneurs working on AI safety and governance.

https://erafellowship.org/
Cambridge, United KingdomestablishedTeam: 14

Ergo Impact

Ergo Impact finds, funds, and scales promising people and solutions to the world's most pressing problems by providing ambitious philanthropists with a rigorous, high-leverage approach to deploying capital at scale.

https://ergoimpact.org/
San Francisco, CAearlyTeam: 4

ETH Zürich

ETH Zürich (Swiss Federal Institute of Technology) is one of the world's leading technical universities, hosting several prominent AI safety and security research groups including the SPY Lab and SRI Lab.

https://ethz.ch/
Zurich, SwitzerlandmatureTeam: 13600

ETH Zurich Foundation (USA)

The US fundraising arm of the ETH Zurich Foundation, enabling American donors to make tax-deductible gifts that support research, teaching, and talent at ETH Zurich in Switzerland.

https://ethz-foundation-usa.org/
New York, NYestablished

EthicsNet Creed.Space

A nonprofit creating crowdsourced datasets of prosocial behaviors to train ethical AI systems, and building the Creed.Space platform for personalized constitutional AI alignment.

https://creed.space/
Redditch, Worcestershire, United KingdomearlyTeam: 10

European AI Office

The EU's official AI regulatory body within the European Commission, responsible for implementing and enforcing the EU AI Act, particularly for general-purpose AI models.

https://digital-strategy.ec.europa.eu/en/policies/ai-office
Brussels, BelgiummatureTeam: 125

European Network for AI Safety (ENAIS)

ENAIS connects AI safety researchers, field-builders, and policymakers across Europe to improve coordination and reduce the fragmentation of the continent's AI safety ecosystem.

https://www.enais.co/
Europe (distributed, 13+ countries)earlyTeam: 9

Evitable

Evitable is a nonprofit that informs and organizes the public to confront societal-scale risks from AI and put an end to the reckless race to develop superintelligence.

https://evitable.com/
San Francisco Bay Area, CApre-seedActively fundraisingTeam: 3

Existential Risk Observatory

A Dutch foundation that works to reduce existential risk by informing the public debate through media engagement, policy advocacy, research, and public events.

https://www.existentialriskobservatory.org/
Amsterdam, NetherlandsearlyTeam: 10

Explainable

Explainable backs content creators shaping how the world understands AI, running fellowships and campaigns to communicate AI safety research to broader audiences.

https://explainable.media/
San Francisco, CAearly

FABRIC

A nonprofit educational organization that runs immersive rationality and AI-focused camps for mathematically talented young people, including ESPR, PAIR, and ASPR.

https://www.fabric.camp/
Prague, Czech Republicestablished

Faculty AI

Faculty AI is a London-based applied AI company that builds decision intelligence products and services for public and private sector clients, with a strong focus on responsible and safe AI deployment.

https://faculty.ai/
London, UKmatureTeam: 400

FAR AI

FAR.AI is an AI safety research nonprofit that conducts technical research on robustness, alignment, and model evaluation, while building the AI safety field through workshops, fellowships, and grantmaking.

https://www.far.ai/
Berkeley, CAestablishedTeam: 41

Flourishing Future Foundation

A 501(c)(3) nonprofit that accelerates neglected approaches to AI alignment by providing researchers with engineering teams, compute resources, and operational infrastructure.

https://www.flourishingfuturefoundation.org/
Marina del Rey, CAearlyActively fundraisingTeam: 4

Forecasting Research Institute

FRI advances the science of forecasting to improve decision-making on high-stakes issues including AI risk, nuclear risk, and biosecurity. It was co-founded by superforecasting pioneer Philip Tetlock.

https://forecastingresearch.org/
Claymont, Delaware, USAestablishedTeam: 18

Foresight Institute

A nonprofit research organization founded in 1986 that advances frontier science and technology for the benefit of life, with focus areas spanning secure AI, nanotechnology, longevity biotechnology, neurotechnology, and existential hope.

https://foresight.org/
San Francisco, California, USAestablishedActively fundraisingTeam: 15

Forethought

A research nonprofit based in Oxford, UK, focused on how to navigate the transition to a world with superintelligent AI systems, tackling neglected questions in AI macrostrategy.

https://www.forethought.org/
Oxford, United KingdomearlyActively fundraisingTeam: 14

Formation Research

Formation Research is a UK-based not-for-profit that researches lock-in risk — the danger that negative features of the world, such as authoritarian power structures or AI-enabled totalitarianism, become permanently entrenched — and develops interventions to minimize it.

https://www.formationresearch.com/
Penryn, Cornwall, UKpre-seedTeam: 2

Foundation for American Innovation

A center-right tech policy think tank, formerly the Lincoln Network, that bridges Silicon Valley and Washington to advance AI safety policy, technology governance, and pro-innovation reform.

https://www.thefai.org/
Washington, DCestablishedTeam: 30

Foxglove

A UK-based nonprofit that uses strategic litigation, investigation, and campaigning to hold governments and Big Tech companies accountable for technology-related harms, including discriminatory algorithms, worker exploitation, and data privacy abuses.

https://www.foxglove.org.uk/
London, United KingdomestablishedTeam: 7

Friedrich Schiller University Jena

Friedrich Schiller University Jena is a major German research university that hosts the LAMALab, a research group led by Dr. Kevin Jablonka focused on AI-accelerated materials discovery and LLM benchmarking in chemistry.

https://www.jcsm.uni-jena.de/en/800/jablonka-kevin
Jena, GermanyestablishedTeam: 11

From AI to ZI

A Substack blog by PhD mathematician Robert Huben documenting his Open Philanthropy-funded year of AI safety research and writing, covering mechanistic interpretability, AI risk, and related topics.

https://aizi.substack.com/
winding-downTeam: 1

Frontier AI Safety Research (FAIR)

Argentine nonprofit conducting interdisciplinary research to advance frontier AI safety, embedded within the Laboratory of Innovation and Artificial Intelligence at the University of Buenos Aires.

https://fair-uba.com/
Buenos Aires, ArgentinaearlyTeam: 12

Funding for AI Alignment Projects Working With Deep Learning Systems

A grant program run by Open Philanthropy (now Coefficient Giving) that awarded $16.6 million to AI alignment research projects working with deep learning systems, sourced through a 2021 public RFP.

https://coefficientgiving.org/funds/navigating-transformative-ai/
San Francisco, CAmature

Future Impact Group (FIG) Fellowship

FIG runs a part-time, remote-first 12-week research fellowship connecting early-to-mid-career researchers with experienced project leads working on AI safety, AI governance, and AI sentience.

https://futureimpact.group/
Oxford, United KingdomearlyTeam: 4

Future Matters

Future Matters is a nonprofit strategy consultancy and think tank based in Berlin that helps organizations working on climate protection, AI governance, and biosecurity create effective policy and social change.

https://future-matters.org/
Berlin, GermanyestablishedTeam: 13

Future of Humanity Foundation

A UK-registered charity established in 2020 to support the work of the Future of Humanity Institute at the University of Oxford by hiring researchers and support staff, providing operational support, and disbursing grants. Dissolved in May 2024 following FHI's closure.

London, United Kingdomwinding-downTeam: 1

Future of Humanity Institute (FHI)

FHI was a pioneering multidisciplinary research institute at the University of Oxford, founded by Nick Bostrom in 2005 to study existential risks and big-picture questions about humanity's long-term future. It closed in April 2024 after 19 years.

https://www.futureofhumanityinstitute.org/
Oxford, UKwinding-downTeam: 40

Future of Life Foundation (FLF)

An organizational incubator that launches new nonprofits and projects working to steer transformative technology away from extreme large-scale risks. FLF identifies gaps in the AI safety ecosystem, recruits founders, and provides seed funding and operational support to new ventures.

https://www.flf.org/
Campbell, CA, USAestablishedTeam: 5

Future of Life Institute

A nonprofit organization working to steer transformative technologies (particularly AI, biotechnology, and nuclear weapons) away from extreme large-scale risks and towards benefiting life.

https://futureoflife.org/
Boston, MA, USAmatureTeam: 30

FutureSearch

FutureSearch is an AI forecasting startup that deploys teams of LLM agents to research, analyze, and forecast across structured data, emphasizing legible reasoning behind predictions.

https://futuresearch.ai/
San Francisco, CAseedTeam: 10

General Purpose AI Policy Lab

A French nonprofit research organization working alongside government institutions to address the security and international coordination challenges posed by general-purpose AI development.

https://gpaipolicylab.org/
Paris, FranceearlyTeam: 9

generative.ink

generative.ink is the personal research and creative platform of Janus (also known as "moire" and "@repligate"), a pseudonymous AI safety researcher known for the Simulators framework and the Loom human-AI collaboration tool.

https://generative.ink/
San Francisco, CApre-seedTeam: 1

Geneva Centre for Security Policy

The Geneva Centre for Security Policy (GCSP) is an international foundation that advances peace, security, and international cooperation through education, diplomatic dialogue, and policy research. It hosts over 1,100 course participants annually and conducts research on emerging security challenges including AI governance and autonomous weapons.

https://www.gcsp.ch/
Geneva, Switzerlandmature

Geodesic Research

Geodesic Research is a technical AI safety organization based in Cambridge, UK, focused on implementing and measuring pre- and post-training methods to improve model safety and alignment.

https://www.geodesicresearch.org/
Cambridge, UKearlyTeam: 7

George Mason University

George Mason University is a large public research university in Fairfax, Virginia, notable in the AI safety and governance space for housing the Mercatus Center and for faculty research on AI scenarios and policy.

https://www.gmu.edu/
Fairfax, Virginia, USAmatureTeam: 8900

Georgetown University

Georgetown University is a major private Jesuit research university in Washington, D.C. that hosts several programs relevant to AI safety and governance, including the Center for Security and Emerging Technology (CSET), the McCourt School's Tech & Public Policy program, and the Law School's Institute for Technology Law & Policy.

https://www.georgetown.edu/
Washington, D.C., USAmatureTeam: 7019

GiveWiki

GiveWiki is a crowdsourced charity evaluator and donation recommendation platform that aggregates expert donor track records to surface high-impact philanthropic projects, with a primary focus on AI safety.

https://givewiki.org/
Bassersdorf, Switzerland (founder); incorporated in Delaware, USAearlyTeam: 2

Giving What We Can

Giving What We Can (GWWC) is a community of effective givers that promotes the 10% Pledge, encouraging people to commit at least 10% of their income to the most impactful charities. Founded in 2009, it has grown to over 12,000 members who have collectively donated more than $500 million.

https://www.givingwhatwecan.org/
London, United KingdomestablishedActively fundraisingTeam: 16

Global AI Moratorium (GAIM)

Calling on policymakers to implement a global moratorium on large AI training runs until alignment is solved.

https://moratorium.ai/
early

Global Catastrophic Risk Institute

A nonprofit, nonpartisan think tank founded in 2011 that conducts research and policy work on risks that could significantly harm or destroy human civilization, including AI, nuclear war, climate change, and asteroid impacts.

https://gcri.org/
Remote (United States)establishedTeam: 2

Global Challenges Project (GCP)

GCP runs intensive three-day residential workshops for university students to explore foundational arguments around risks from advanced AI and biotechnology, helping them identify careers in catastrophic risk reduction.

https://www.globalchallengesproject.org/
Berkeley, CA, USAestablishedTeam: 3

Global Partnership on AI (GPAI)

GPAI is an international intergovernmental initiative of 44 member countries that promotes the responsible development and use of artificial intelligence, grounded in human rights, inclusion, and democratic values. In July 2024, GPAI merged with the OECD's AI work under a unified GPAI brand hosted at the OECD in Paris.

https://oecd.ai/en/gpai
Paris, Francemature

Global Priorities Institute (GPI)

GPI was an interdisciplinary research center at the University of Oxford (2018-2025) that conducted foundational academic research on how to do the most good. It used philosophy, economics, and psychology to investigate global priorities and existential risk.

https://www.globalprioritiesinstitute.org/
Oxford, United Kingdomwinding-downTeam: 7

Global Shield

An international advocacy organization devoted to reducing global catastrophic risk from all threats and hazards, working with governments worldwide to enact policies that address existential and catastrophic risks.

https://www.globalshieldpolicy.org/
Washington, DC, United StatesearlyTeam: 7

GoalsRL

GoalsRL was a one-day academic workshop on goal specifications for reinforcement learning, held in 2018 jointly at ICML, IJCAI, and AAMAS. It brought together researchers to address challenges in reward engineering and explore alternatives to hand-designed scalar rewards.

https://sites.google.com/view/goalsrl
Stockholm, Swedenwinding-downTeam: 5

Good Ancestors Policy

An Australian charity that conducts policy research and advocates for government action to reduce catastrophic and existential risks, with a focus on AI safety, pandemic prevention, and disaster preparedness.

https://www.goodancestors.org.au/
Canberra, AustraliaearlyTeam: 4

Good Impressions

Good Impressions is a grant-funded digital marketing agency that applies for-profit growth techniques to help effective nonprofits, think tanks, and foundations maximize engagement with their work.

https://www.goodimpressionsmedia.com/
Toronto, Canada (remote)earlyTeam: 8

Goodfire

Goodfire is an AI interpretability research lab that builds tools to understand and design the internal mechanisms of neural networks. Their flagship product, Ember, gives engineers direct, programmable access to AI model internals.

https://www.goodfire.ai/
San Francisco, CAearlyTeam: 51

Google DeepMind

Google DeepMind is Alphabet's primary AI research lab, formed in 2023 by merging DeepMind and Google Brain, working toward artificial general intelligence that benefits humanity.

https://deepmind.google/
London, United KingdommatureTeam: 5600

Gradient Institute

Gradient Institute is an independent Australian nonprofit research organisation advancing safe and responsible AI through rigorous science-based research, practical guidance, and policy engagement.

https://www.gradientinstitute.org/
Sydney, AustraliaestablishedTeam: 9

Gray Swan AI

Gray Swan AI is an AI safety and security company that builds tools to assess vulnerabilities in AI deployments and develop more robust, attack-resistant AI models. It was founded in 2024 by Carnegie Mellon University researchers who pioneered automated jailbreaking research.

https://www.grayswan.ai/
Pittsburgh, PAseed

Guide Labs

Guide Labs builds interpretable AI systems and foundation models that humans can reliably understand, audit, and steer. Their flagship model, Steerling-8B, is the first inherently interpretable large language model at scale.

https://www.guidelabs.ai/
San Francisco, CAseedTeam: 9

Halcyon Futures

Halcyon Futures is a nonprofit incubator and grant fund that identifies exceptional leaders and helps them launch ambitious new organizations focused on AI safety and global resilience.

https://halcyonfutures.org/
Santa Monica, CAestablishedTeam: 3

Harmony Intelligence

Harmony Intelligence is an AI safety research and engineering company that reduces catastrophic AI risk through frontier model evaluations, red teaming, and AI-powered defensive cybersecurity products.

https://www.harmonyintelligence.com/
Sydney, AustraliaseedTeam: 10

Harvard University

Harvard University is a leading private research university with several prominent programs advancing AI safety, AI governance, and AI interpretability research, including the Kempner Institute, Berkman Klein Center, and Harvard AI Safety Team.

https://www.harvard.edu/
Cambridge, Massachusetts, USAmature

Hebrew University of Jerusalem

A leading Israeli research university home to the Governance of AI Lab (GOAL), which conducts cross-disciplinary research on AI governance, legal alignment, and the safe development of advanced AI systems.

https://en.huji.ac.il/
Jerusalem, IsraelmatureTeam: 2700

Heron

Working to close the gap between the level of cybersecurity frontier AI models need and the level they currently have by connecting professionals to high-leverage opportunities in AI security.

https://www.heronsec.ai/
Tel Aviv, Israelseed

High Impact Professionals (HIP)

High Impact Professionals (HIP) helps experienced mid-career and senior professionals transition into high-impact roles and commit to effective giving across global health, animal welfare, and global catastrophic risk reduction. Through its Impact Accelerator Program, Talent Directory, and HIP Pledge Club, HIP channels professional talent and financial resources toward the most pressing global problems.

https://www.highimpactprofessionals.org/
London, United KingdomestablishedTeam: 4

HitRecord

Joseph Gordon-Levitt's collaborative media platform, which established a dedicated AI safety arm (HitRecord AI Safety Project LLC and AI Safety Digital Media Fund) to use storytelling and public engagement to address AI risks.

https://hitrecord.org/
Los Angeles, CA, USAestablished

Hofvarpnir Studios

Hofvarpnir Studios is a nonprofit that builds and maintains GPU compute clusters to support academic AI safety research. It provides high-performance computing infrastructure to researchers who would otherwise lack access to the resources needed to study and advance AI safety.

https://hofvarpnir.ai/
Bridgeport, CAestablishedTeam: 3

Holtman Systems Research

A solo-researcher company founded by Koen Holtman that conducts AI safety research and participates in the creation of European AI safety standards in support of the EU AI Act.

https://holtmansystemsresearch.nl/
Eindhoven, NetherlandsearlyTeam: 1

Horizon Events

Horizon Events is a Canadian non-profit that advances AI safety R&D by organizing high-impact events, including the AI Safety Unconference series and monthly Guaranteed Safe AI Seminars.

https://horizonomega.org/
Montreal, CanadaseedTeam: 1

How to pursue a career in technical AI alignment

A career guide written by Charlie Rogers-Smith for people familiar with AI alignment arguments who are considering direct work in the field. Published on the EA Forum and LessWrong in June 2022.

https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment

Human-aligned AI Summer School

An annual 4-day academic summer school held in Prague focused on teaching AI alignment research frameworks to PhD students, ML researchers, and advanced students.

https://humanaligned.ai/
Prague, Czech RepublicseedTeam: 6

Humans in Control

Humans in Control is a nonpartisan grassroots movement working to protect people and future generations from the risks of unchecked AI through advocacy, coalition-building, and state-level policy campaigns.

https://humansincontrol.org/
Berkeley, California, USApre-seedTeam: 1

Iliad

An umbrella organization for applied mathematics research in AI alignment. Iliad organizes the ILIAD conference series, runs fellowship and intensive programs, incubates research organizations, and manages scientific publishing.

https://www.iliad.ac/
London, United KingdomearlyTeam: 14

ILINA Program

An African-led research program dedicated to building talent, generating impactful research, and shaping policy to advance AI safety, based in Nairobi, Kenya.

https://www.ilinaprogram.org/
Nairobi, KenyaearlyTeam: 5

Impact Academy Limited

A nonprofit that runs fellowships and educational programs to develop expert, mission-aligned talent for AI safety research and governance.

https://www.impactacademy.org/
London, United Kingdomearly

Impact Ops

Impact Ops is an operations consultancy that delivers specialist finance, recruitment, entity setup, and systems support to high-impact nonprofits, helping them scale and flourish.

https://impact-ops.org/
London, UKestablishedTeam: 11

Imperial College London

Imperial College London is a world-leading research university specialising in science, technology, engineering, medicine, and business, with significant programs in AI safety, trustworthy AI, and long-term AI risk research.

https://www.imperial.ac.uk/
London, United KingdommatureTeam: 8783

Import AI

Import AI is a weekly newsletter by Jack Clark (co-founder of Anthropic) covering cutting-edge AI research and its societal implications, read by over 116,000 subscribers.

https://jack-clark.net/
San Francisco, CAmatureTeam: 1

Institute for Advanced Consciousness Studies

A 501(c)(3) research laboratory in Santa Monica, CA that uses neuroimaging, neuromodulation, VR/AR, and altered states to study consciousness, with an AI safety research program on preventing antisocial AI through artificial empathy.

https://advancedconsciousness.org/
Santa Monica, CAestablishedTeam: 15

Institute for AI Policy and Strategy

A nonpartisan think tank that produces policy research on the implications of advanced AI systems, covering frontier security, compute governance, and international AI strategy to equip policymakers for high-magnitude AI risks.

https://www.iaps.ai/
Washington, DC, United StatesestablishedTeam: 28

Institute for Law & AI (LawAI)

An independent legal research think tank, formerly the Legal Priorities Project, that conducts foundational research and advises governments on the legal and governance challenges posed by artificial intelligence.

https://law-ai.org/
Boston, MA, USAestablishedTeam: 30

Institute for Security and Technology

A 501(c)(3) nonpartisan think tank that bridges technology and national security policy, with major programs addressing ransomware, frontier AI security, and the catastrophic risks posed by emerging technologies to nuclear stability.

https://securityandtechnology.org/
Oakland, CA, USAestablishedTeam: 33

Intelligence Rising

Intelligence Rising is a strategic AI futures roleplay simulation that lets decision-makers experience the tensions and risks of competitive AI development. It is a project of Technology Strategy Roleplay, a UK registered charity.

https://www.intelligencerising.org/
London, UKestablishedTeam: 3

International AI Governance Alliance (IAIGA)

IAIGA is a Geneva-based nonprofit initiative working to establish a supranational AI governance body and a legally binding global treaty to ensure AI safety and the equitable distribution of AI-derived benefits.

https://www.iaiga.org/
Geneva, Switzerlandseed

International Association for Safe & Ethical AI (IASEAI)

IASEAI is an independent nonprofit that works to ensure AI systems operate safely and ethically by shaping policy, promoting research, and building a global community around AI safety.

https://www.iaseai.org/
San Diego, CA, USAearlyTeam: 5

International Conference on Learning Representations

ICLR is one of the world's premier annual academic conferences dedicated to deep learning and representation learning research. It was founded in 2013 by Yann LeCun and Yoshua Bengio.

https://iclr.cc/
Rio de Janeiro, Brazil (2026 venue; conference rotates annually)mature

International Conference on Machine Learning

ICML is the premier annual academic conference for machine learning research, bringing together researchers from academia and industry worldwide. It is organized by the International Machine Learning Society (IMLS), a 501(c)(3) nonprofit.

https://icml.cc/
San Diego, CA, USAmature

International Dialogues on AI Safety (IDAIS)

A high-level international dialogue series that brings together leading AI scientists and governance experts to build consensus on managing extreme risks from frontier AI systems.

https://idais.ai/
San Francisco Bay Area, USAearlyTeam: 10

International Institute of Information Technology Hyderabad

IIIT Hyderabad is India's first and leading research-focused IIIT, a not-for-profit public-private partnership university specializing in computer science and AI. It hosts the Responsible and Safe AI Systems course, supported by Open Philanthropy, and is a major hub for AI and machine learning research in India.

https://www.iiit.ac.in/
Hyderabad, IndiamatureTeam: 243

Jacob Steinhardt

Associate Professor of Statistics and EECS at UC Berkeley and Co-founder & CEO of Transluce, researching how to make machine learning systems understood by and aligned with humans.

https://jsteinhardt.stat.berkeley.edu/
Berkeley, CaliforniaestablishedTeam: 1

Jennifer Lin

Independent AI safety researcher known for critical analysis of AI timelines and LLM capabilities, with work funded by Open Philanthropy and recognized in the EA community.

https://scholar.google.com/citations?hl=en&user=4EQGl1AAAAAJ&view_op=list_works&sortby=pubdate
earlyTeam: 1

Jeremy Rubinoff

Individual AI safety community builder based in Toronto who received Open Philanthropy funding to organize an AI safety retreat in 2023.

Toronto, Ontario, CanadaseedTeam: 1

Jérémy Scheurer

AI safety researcher specializing in evaluations for deceptive capabilities, scheming, and situational awareness in frontier language models. Research Scientist in the Evaluations Team at Apollo Research.

Zurich, SwitzerlandestablishedTeam: 1

Johns Hopkins University

Johns Hopkins University hosts AI safety-relevant research led by Prof. Anqi (Angie) Liu, whose group focuses on machine learning for trustworthy AI, including distributionally robust learning and uncertainty quantification under distribution shift.

https://anqiliu-ai.github.io/
Baltimore, MDestablishedTeam: 9

Juniper Ventures

Juniper Ventures is a pre-seed venture capital firm that invests in startups explicitly working to make AI secure and beneficial for humanity.

https://juniperventures.xyz/
San Francisco, CAearlyTeam: 8

JUSTICE

A UK legal reform charity that advances access to justice, human rights, and the rule of law through research, advocacy, and strategic court interventions, with a dedicated workstream on AI governance and rights-based frameworks for AI deployment.

https://justice.org.uk/
London, United KingdommatureTeam: 19

Kairos Project

Kairos is a US nonprofit that accelerates talent into AI safety and policy by running university group support programs and research mentorship fellowships.

https://kairos-project.org/
San Francisco, CAearlyTeam: 3

Krueger AI Safety Lab (KASL)

An AI safety research group led by David Krueger at the University of Cambridge's Computational and Biological Learning Lab (2021-2024), focused on technical AI alignment, deep learning safety, and reducing existential risk from advanced AI.

https://www.kasl.ai/
Cambridge, United KingdomestablishedTeam: 15

Laboratory for Social Minds at Carnegie Mellon University

An interdisciplinary research lab at Carnegie Mellon University, directed by Simon DeDeo, that studies complex social systems through mathematical modeling and empirical investigation to better understand humanity's past, present, and future.

https://sites.santafe.edu/~simon/
Pittsburgh, Pennsylvania, USAestablished

Langsikt - Centre for Long-Term Policy

A Norwegian non-profit think tank working to make policymaking more long-term, with a focus on AI governance, pandemic preparedness, biotechnology risks, and institutional reforms to represent future generations.

https://www.langsikt.no/
Oslo, NorwayearlyTeam: 10

Lausanne AI Alignment

A student-led AI safety group at EPFL in Lausanne, Switzerland that organizes bootcamps, hackathons, reading groups, and research projects to advance the field of AI safety and alignment.

https://lausanne.aisafety.ch/
Lausanne, SwitzerlandseedTeam: 5

LawZero

LawZero is a nonprofit AI safety research organization founded by Yoshua Bengio to develop safe-by-design AI systems that cannot act autonomously or pursue hidden goals.

https://lawzero.org/
Montreal, Quebec, CanadaearlyTeam: 15

Leaf: Dilemmas and Dangers in AI

Leaf runs online fellowships for exceptional teenagers (ages 15-19) to explore how they can have the most positive impact, including through a flagship course on AI safety called Dilemmas and Dangers in AI.

https://leaf.courses/
United StatesearlyActively fundraisingTeam: 4

Leap Labs

Leap Labs builds AI-powered interpretability tools to accelerate scientific discovery by finding patterns in complex datasets that humans and standard methods miss.

https://www.leap-labs.com/
London, United KingdomseedTeam: 10

Lee Foster

Lee Foster is an AI security researcher and the Co-Founder and CEO of Aspect Labs who received Open Philanthropy funding in 2024 to build an LLM Misuse Database documenting real-world instances of large language model misuse.

https://www.aspectlabs.ai/
Raleigh-Durham-Chapel Hill, NC, USAseed

Legal Advocates for Safe Science and Technology (LASST)

A nonprofit that uses legal advocacy, including amicus briefs, impact litigation, and policy engagement, to mitigate catastrophic risks from advanced AI systems and biotechnology.

https://lasst.org/
New York, NY (fully remote)earlyActively fundraisingTeam: 3

Legal Safety Lab

A Dutch foundation (stichting) that uses legal expertise and advocacy within Europe to promote safer development and deployment of frontier technologies including AI, biotechnology, and nuclear technology.

https://legalsafetylab.org/
Amsterdam, Netherlandsseed

LessWrong

A community blog and forum devoted to refining the art of human rationality, with major focus areas including AI alignment, cognitive biases, decision-making, and effective altruism.

https://www.lesswrong.com/
Berkeley, CAestablishedActively fundraisingTeam: 8

Lethal Intelligence

Lethal Intelligence is an AI risk awareness media project producing original explainer films, podcasts, and social media content about the existential dangers of advanced AI systems.

https://lethalintelligence.ai/
pre-seedTeam: 1
Leverhulme Centre for the Future of Intelligence (CFI) logo

Leverhulme Centre for the Future of Intelligence (CFI)

The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre at the University of Cambridge that explores the nature, ethics, and impact of artificial intelligence. It brings together researchers from machine learning, philosophy, social science, and other fields to address both near-term and long-term challenges posed by AI.

https://www.lcfi.ac.uk/
Cambridge, UKestablishedTeam: 51
Lightcone Infrastructure logo

Lightcone Infrastructure

A nonprofit that builds infrastructure for the rationality and AI safety communities, running LessWrong, the AI Alignment Forum, and the Lighthaven campus in Berkeley, CA.

https://www.lightconeinfrastructure.com/
Berkeley, CA, USAestablishedActively fundraisingTeam: 8
Lightspeed Grants logo

Lightspeed Grants

Lightspeed Grants is a fast-turnaround grantmaking program run by Lightcone Infrastructure, providing rapid funding for projects aimed at reducing existential risk and improving humanity's long-term future.

https://lightspeedgrants.org/
Berkeley, Californiaestablished
Lionheart Ventures logo

Lionheart Ventures

Lionheart Ventures is a seed-stage venture capital firm investing in transformative artificial intelligence and frontier mental health technologies to mitigate civilizational risk.

https://www.lionheart.vc/
San Francisco, CaliforniaestablishedTeam: 17

Live Theory

An AI safety research initiative developing new adaptive theoretical frameworks and AI interface designs to keep human sensemaking at pace with rapidly advancing AI systems.

https://groundless.ai/
IndiaseedTeam: 5
London AI Safety Research (LASR) Labs logo

London AI Safety Research (LASR) Labs

LASR Labs is a 13-week intensive technical AI safety research program in London that places researchers in supervised teams to produce peer-reviewed papers. It is operated by Arcadia Impact and focuses on reducing the risk of loss of control to advanced AI.

https://www.lasrlabs.org/
London, United KingdomearlyTeam: 3
London Initiative for Safe AI logo

London Initiative for Safe AI

LISA is a London-based charity that serves as a hub and infrastructure provider for the AI safety ecosystem, hosting resident organizations, training programs, and independent researchers.

https://www.safeai.org.uk/
London, UKestablishedTeam: 13

Lone Pine Games, LLC

Lone Pine Games is a one-person indie game studio run by Conor Sullivan in Tempe, Arizona. It received a $100,000 Long-Term Future Fund grant in 2022 to develop a video game explaining the AI Stop Button Problem to the public and STEM professionals.

https://lonepine.games/
Tempe, Arizona, USAseedTeam: 1

Long-Term Future Fund

An expert-managed grantmaking fund within EA Funds that distributes millions annually to reduce global catastrophic risks, with a primary focus on AI safety research, biosecurity, and other existential risk mitigation work. Grants are typically 4-6 figures and go mostly to individuals working on existential risk reduction.

https://funds.effectivealtruism.org/funds/far-future
Oxford, United KingdomestablishedTeam: 11
Longview Philanthropy logo

Longview Philanthropy

An independent, expert-led philanthropic advisory that helps major donors direct funding toward reducing catastrophic and existential risks, with a core focus on AI safety, biosecurity, and nuclear weapons policy.

https://www.longview.org/
London, United KingdomestablishedTeam: 29

Luthien

Luthien is a Seattle-based nonprofit building production-ready AI control infrastructure that assumes AI models may act adversarially and prevents misaligned systems from achieving harmful goals.

https://luthienresearch.org/
Seattle, WAseedTeam: 2

Machine Intelligence and Normative Theory Lab (MINT Lab)

A research lab at the intersection of philosophy and AI safety, using philosophical and computational methods to study AI alignment, governance, and normative competence, founded and directed by Seth Lazar at Johns Hopkins University and the Australian National University.

https://mintresearch.org/
Baltimore, Maryland / Canberra, AustraliaestablishedTeam: 30
Machine Intelligence Research Institute logo

Machine Intelligence Research Institute

A pioneering AI safety nonprofit that conducts research and public outreach to help prevent human extinction from the development of artificial superintelligence, with a current focus on policy advocacy and communications.

https://intelligence.org/
Berkeley, CaliforniaestablishedTeam: 24
Machine Learning for Alignment Bootcamp (MLAB) logo

Machine Learning for Alignment Bootcamp (MLAB)

MLAB is an intensive in-person bootcamp run by Redwood Research that trains technically skilled programmers in the machine learning engineering skills needed to work on AI alignment research.

https://github.com/redwoodresearch/mlab
Berkeley, CAwinding-down

Machine Learning for Socio-technical Systems Lab

A university research lab at the University of Rhode Island directed by Dr. Sarah M Brown, studying how machine learning interacts with complex socio-technical systems, with a focus on fairness of automated decision-making and AI safety evaluation.

https://ml4sts.com/
Kingston, RI, USAearlyTeam: 11
Macroscopic Ventures logo

Macroscopic Ventures

Swiss nonprofit funder making grants and investments to reduce suffering risks from catastrophic AI misuse, AI conflict, and other large-scale harms. Formerly known as Center for Emerging Risk Research (CERR) and Polaris Ventures.

https://macroscopic.org/
Basel, SwitzerlandmatureTeam: 14

Macrostrategy Research Initiative

A nonprofit research organization founded by Nick Bostrom to study how present-day actions influence humanity's long-term future, with a focus on existential risk, AI safety, and AGI governance.

https://www.macrostrategy.co.uk/
London, United KingdomearlyTeam: 3
Manifold Markets logo

Manifold Markets

The world's largest social prediction market platform, where anyone can create and trade on prediction markets for any topic using play money called Mana.

https://manifold.markets/
San Francisco, CaliforniaseedTeam: 8
Manifund logo

Manifund

A philanthropic platform and 501(c)(3) nonprofit that facilitates regranting, impact certificates, and crowdfunding for charitable projects, with a primary focus on AI safety and effective altruism cause areas.

https://manifund.org/
San Francisco, CAestablishedTeam: 3

Massachusetts Institute of Technology

MIT is a private research university in Cambridge, Massachusetts, widely recognized as a global leader in science, engineering, and technology research, including AI safety and alignment.

https://mit.edu/
Cambridge, MassachusettsmatureTeam: 17490

Mathematical Metaphysics Institute

A nonprofit research institute that seeks to develop mathematically rigorous foundations for metaphysics, using category theory to formalize insights from contemplative traditions, with applications to AI alignment and trustworthy AI.

https://www.mathematicalmetaphysics.org/
Sanford, North CarolinaearlyTeam: 10

Matthew Kenney

Individual AI safety researcher and founder of the Algorithmic Research Group, focused on benchmarking AI agents' capacity for autonomous research and development.

https://www.algorithmicresearchgroup.com/
Durham, NCseed

Meaning Alignment Institute

A nonprofit research institute that develops methods to align AI systems, markets, and democratic institutions with what people genuinely value, using an approach they call full-stack alignment.

https://www.meaningalignment.org/
Berlin, Germany and San Francisco, CAearlyTeam: 4
Median Group logo

Median Group

A small nonprofit research organization studying global catastrophic risks, best known for its insight-based AI timelines model and research on the feasibility of training AGI via deep reinforcement learning.

https://mediangroup.org/
San Francisco, CaliforniaestablishedTeam: 7

MentaLeap

MentaLeap is an Israel-based AI safety research group focused on mechanistic interpretability, applying neuroscience and cybersecurity expertise to reverse-engineer neural networks and reduce risks from advanced AI systems.

https://mentaleap.ai/
IsraelseedTeam: 7

Meridian Cambridge

Meridian Cambridge is an independent research and incubation hub in Cambridge, UK focused on AI safety, biosecurity, frontier-risk policy, and institutional design. Formerly Effective Altruism Cambridge CIC, it hosts the Cambridge AI Safety Hub, biosecurity and governance hubs, research labs, and fellowships.

https://www.meridiancambridge.org/
Cambridge, United KingdomestablishedTeam: 12
Meta Charity Funders logo

Meta Charity Funders

Meta Charity Funders (MCF) is a donor funding circle that pools capital and expertise to support EA meta charities, organizations working one level removed from direct impact. Members each commit $100,000 or more annually and coordinate through biannual open grant rounds.

https://www.metacharityfunders.com/
Stockholm, Swedenestablished
Metaculus logo

Metaculus

An online forecasting platform and aggregation engine that harnesses collective intelligence to produce calibrated predictions on questions of global importance, including AI timelines, biosecurity, nuclear risk, and climate change.

https://www.metaculus.com/
Santa Cruz, CaliforniaestablishedTeam: 28

Michigan State University

Michigan State University's Department of Computer Science and Engineering (CSE) conducts AI safety research, notably through the OPTML group's work on trustworthy machine learning and LLM unlearning.

https://engineering.msu.edu/about/departments/cse
East Lansing, Michigan, USAmatureTeam: 45

Mila

Mila is the Quebec Artificial Intelligence Institute, the world's largest academic research center for deep learning, founded by Turing Award winner Yoshua Bengio. It brings together over 1,400 researchers and professors to advance AI for the benefit of all, with responsible and safe AI as a core strategic priority.

https://mila.quebec/en
Montreal, Quebec, CanadamatureTeam: 1200

Miles's Substack

The personal newsletter of Miles Brundage, former Head of Policy Research at OpenAI, covering independent AI policy research and governance.

https://milesbrundage.substack.com/
San Francisco Bay Area, CAestablishedTeam: 1

Mindstream Project

Mindstream Project operates the Buddhism & AI Initiative, a collaborative effort to bring together Buddhist communities, technologists, and contemplative researchers to help shape the future of artificial intelligence.

https://www.engagedbuddhists.ai/
London, England, United KingdomseedTeam: 5

Missing Measures

A pre-launch organization fiscally sponsored by Lightcone Infrastructure and funded by the Survival and Flourishing Fund in 2025.

https://missingmeasures.com/
early
MIT AI Risk Repository logo

MIT AI Risk Repository

A comprehensive, living database of over 1,700 AI risks extracted from published frameworks and organized through causal and domain taxonomies, maintained as a program within MIT FutureTech.

https://airisk.mit.edu/
Cambridge, MAestablishedTeam: 7
MIT Algorithmic Alignment Group logo

MIT Algorithmic Alignment Group

A research group at MIT CSAIL developing algorithmic frameworks, techniques, and policies to make AI systems safe and socially beneficial. Led by Associate Professor Dylan Hadfield-Menell.

https://algorithmicalignment.csail.mit.edu/
Cambridge, MA, USAestablishedTeam: 19

MIT FutureTech

MIT FutureTech is an interdisciplinary research group at MIT CSAIL studying the economic and technical foundations of progress in computing and AI. The group produces rigorous insights on AI trends, risks, and impacts to inform policy, industry, and scientific funding decisions.

https://futuretech.mit.edu/
Cambridge, MA, USAestablishedTeam: 110
ML Alignment & Theory Scholars (MATS) logo

ML Alignment & Theory Scholars (MATS)

MATS (ML Alignment & Theory Scholars) is the largest AI safety research fellowship and talent pipeline, running intensive 12-week research programs that pair fellows with leading AI alignment mentors in Berkeley and London.

https://www.matsprogram.org/
Berkeley, California, USAestablishedActively fundraisingTeam: 44

ML Safety Newsletter

A free newsletter publishing curated summaries of recent machine learning safety research, run by Dan Hendrycks and contributors associated with the Center for AI Safety.

https://newsletter.mlsafety.org/
San Francisco, CAestablishedTeam: 4
ML4Good logo

ML4Good

ML4Good runs intensive, fully-funded in-person bootcamps to train motivated people for careers in AI safety, covering both technical and governance tracks.

https://ml4good.org/
United KingdomearlyTeam: 2
Model Evaluation & Threat Research (METR) logo

Model Evaluation & Threat Research (METR)

METR is a research nonprofit that develops scientific methods to evaluate whether frontier AI systems could pose catastrophic risks to society, working with leading AI labs on pre-deployment safety assessments.

https://metr.org/
Berkeley, California, USAestablishedTeam: 30
Modeling Cooperation logo

Modeling Cooperation

A research project that uses game theory and computational modeling to reduce catastrophic risks from competition in the development of transformative AI.

https://www.modelingcooperation.com/
Switzerland (distributed team)earlyActively fundraisingTeam: 7

Modulo Research

Modulo Research is a UK-based AI safety research organization that conducts empirical evaluations of large language models and develops datasets to advance scalable oversight research.

https://www.moduloresearch.com/
Cambridge, United Kingdomearly
Mox logo

Mox

Mox is San Francisco's largest AI safety coworking and community space, providing workspace, events, and fellowships for researchers and organizations working on high-impact problems.

https://moxsf.com/
San Francisco, CAearlyActively fundraisingTeam: 5

MSEP Project

The Molecular Systems Engineering Platform (MSEP) is a free, open-source software tool conceived by nanotechnology pioneer Eric Drexler for designing and simulating atomically precise nanomechanical systems.

https://msep.one/
NetherlandsearlyTeam: 7
Mythos Ventures logo

Mythos Ventures

Mythos Ventures is an early-stage venture capital firm investing in prosocial technologies and safe AI systems. They back pre-seed and seed-stage founders building AGI-resilient, positive-impact companies.

https://www.mythos.vc/
San Francisco, CAestablished
National Academies of Sciences, Engineering, and Medicine logo

National Academies of Sciences, Engineering, and Medicine

The National Academies of Sciences, Engineering, and Medicine is the United States' preeminent independent scientific advisory body, providing expert consensus reports to inform government policy on science, engineering, and medicine, including AI safety and governance.

https://www.nationalacademies.org/
Washington, DCmatureTeam: 1000

National Science Foundation

The National Science Foundation (NSF) is an independent US federal agency that funds basic research and education across all non-medical fields of science and engineering, including substantial investment in AI safety-relevant research.

https://www.nsf.gov/
Alexandria, Virginia, USAmatureTeam: 2100

Neel Nanda

Neel Nanda is the Mechanistic Interpretability Team Lead at Google DeepMind and creator of TransformerLens, the primary open-source library for mechanistic interpretability research.

https://www.neelnanda.io/
London, United KingdomestablishedTeam: 1

New York University

New York University is a major private research university in New York City, home to several AI safety-relevant research groups including the NYU Alignment Research Group and the Center for Responsible AI.

https://www.nyu.edu/
New York City, NYmature

Nice Light

Nice Light is a London-based documentary film production company that produces films on the risks of advanced AI for broad public audiences.

London, England, UKearly
Non-Trivial logo

Non-Trivial

Non-Trivial runs free online research fellowships for talented young people ages 14-20 to develop impactful projects on the world's most pressing problems. The program offers mentorship, scholarships up to $10,000, and a global peer community.

https://www.non-trivial.org/
London, United KingdomearlyTeam: 3
Nonlinear logo

Nonlinear

A nonprofit AI safety organization that researches, funds, and seeds high-impact interventions to reduce existential risk from artificial intelligence, operating key programs including the Nonlinear Network funding platform and the Nonlinear Library podcast.

https://www.nonlinear.org/
Remote / NomadicearlyTeam: 5

Northeastern University

Northeastern University is a private R1 research university in Boston, Massachusetts, home to notable AI safety and mechanistic interpretability research through its Khoury College of Computer Sciences and Institute for Experiential AI.

https://www.northeastern.edu/
Boston, MAmatureTeam: 8641
NYU Alignment Research Group (ARG) logo

NYU Alignment Research Group (ARG)

An academic research group at New York University doing empirical work with language models to address longer-term safety concerns about highly capable AI systems.

https://wp.nyu.edu/arg/
New York, NYestablishedTeam: 6
Observatorio de Riesgos Catastróficos Globales logo

Observatorio de Riesgos Catastróficos Globales

A scientific diplomacy organization working to improve global catastrophic risk governance in Spanish-speaking countries, with focus areas spanning AI regulation, pandemic biosecurity, food security, and risk management systems.

https://www.riesgoscatastroficosglobales.com/
Madrid, SpainearlyTeam: 11

Obsolete

Obsolete is a newsletter by freelance journalist Garrison Lovely featuring reporting and analysis on capitalism, great power competition, and the race to build machine superintelligence.

https://www.obsolete.pub/
Brooklyn, New York, USAseedTeam: 1

Odyssean Institute

A UK-based research and advocacy think tank that combines complexity modelling, expert elicitation, and democratic deliberation to improve policymaking around existential and catastrophic risks.

https://www.odysseaninstitute.org/
Lytham St. Annes, Lancashire, United KingdomearlyTeam: 11

Open Phil AI Fellowship

A fellowship program by Open Philanthropy that funds PhD students in AI and machine learning to pursue research aimed at reducing catastrophic risks from advanced AI systems.

https://coefficientgiving.org/ai-fellowship/
San Francisco, CAwinding-down

Open Philanthropy Technology Policy Fellowship

A fellowship program run by Open Philanthropy that placed individuals in US government, Congressional, and think tank roles focused on AI and biosecurity policy. The program has since concluded.

https://coefficientgiving.org/open-philanthropy-technology-policy-fellowship/
Washington, DCwinding-down
OpenAI logo

OpenAI

OpenAI is an AI research and deployment company working to ensure that artificial general intelligence benefits all of humanity. It is the creator of ChatGPT, GPT-4, and a wide range of frontier AI models.

https://openai.com/
San Francisco, CAmatureTeam: 7216
OpenBook logo

OpenBook

OpenBook is a searchable database of approximately 4,000 effective altruism grants from major EA funders, built to make funding flows in the EA ecosystem transparent and discoverable. The project is no longer actively maintained.

https://openbook.fyi/
Medford, MA, USAwinding-downTeam: 1
OpenMined logo

OpenMined

OpenMined is a 501(c)(3) nonprofit building open-source privacy-preserving AI infrastructure that enables secure computation across siloed data. Their tools allow AI auditors and researchers to evaluate proprietary AI systems without requiring direct access to sensitive models or data.

https://openmined.org/
New York, NY (remote-first)establishedTeam: 54

Oregon State University

Oregon State University is a public research university in Corvallis, Oregon, whose hardware security research group contributed to AI compute governance through the Survival and Flourishing Fund's FlexHEG (Flexible Hardware-Enabled Guarantees) program.

https://oregonstate.edu/
Corvallis, Oregon, United Statesmature
Orthogonal logo

Orthogonal

A non-profit AI alignment research organization focused on agent foundations, pursuing formal goal alignment approaches that would scale to superintelligence.

https://orxl.org/
Europeearly
Ought logo

Ought

Ought was a nonprofit AI alignment research lab that developed factored cognition approaches and built Elicit, an AI research assistant, before spinning Elicit off as an independent public benefit corporation in 2023.

https://ought.org/
San Francisco, CAwinding-down

Oxford AI Safety Initiative

OAISI is a student- and researcher-led community at the University of Oxford committed to reducing catastrophic risks from advanced AI. It runs technical and governance programmes to support existing researchers and introduce new Oxford talent to AI safety work.

https://oaisi.org/
Oxford, United KingdomearlyTeam: 5

Oxford China Policy Lab

A non-partisan, interdisciplinary research group based at the University of Oxford that produces policy-relevant research to mitigate global risks stemming from US-China great power competition, with a particular focus on artificial intelligence and emerging technologies.

https://oxfordchinapolicylab.org/
Oxford, United KingdomearlyTeam: 12

Oxford Martin AI Governance Initiative

A research initiative at the University of Oxford's Martin School that combines technical AI expertise with deep policy analysis to understand and mitigate lasting risks from AI through governance research, decision-maker education, and training future technology governance leaders.

https://aigi.ox.ac.uk/
Oxford, United KingdomearlyTeam: 11

P.H.I

P.H.I. (Prompt Human Inc.) was the individual research entity of Quentin Feuillade-Montixi, a French AI safety researcher focused on model psychology and LLM evaluation.

Paris, Francewinding-downTeam: 1

Palisade Research

Nonprofit investigating cyber offensive AI capabilities and the controllability of frontier AI models to help humanity avoid permanent disempowerment by strategic AI agents.

https://palisaderesearch.org/
Berkeley, CAearlyActively fundraisingTeam: 15

Panoplia Laboratories

Panoplia Laboratories (now operating as Active Site) is a nonprofit that evaluates the risks and capabilities of AI-driven biology through wet lab research, and develops broad-spectrum antivirals for pandemic preparedness.

https://www.panoplialabs.org/
Cambridge, MA, USAearlyTeam: 5
Partnership on AI (PAI) logo

Partnership on AI (PAI)

Partnership on AI is a global multi-stakeholder nonprofit that brings together industry, civil society, and academia to address the social implications of AI and promote responsible development and deployment.

https://partnershiponai.org/
San Francisco, CAestablishedTeam: 40
Paul Christiano's Blog logo

Paul Christiano's Blog

Personal AI alignment blog by Paul Christiano, covering technical approaches to making AI systems safe, honest, and beneficial. The archive remains a key reference in the field.

https://ai-alignment.com/
winding-downTeam: 1

Pause House

Pause House is a residential community in Blackpool, UK, that provides free housing and stipends to activists working toward a global pause on AGI development.

https://gregcolbourn.substack.com/p/pause-house-blackpool
Blackpool, England, UKseedTeam: 1
PauseAI logo

PauseAI

PauseAI is a global grassroots movement advocating for an immediate pause on the development of frontier AI systems until their safety can be demonstrated and they can be kept under democratic control.

https://pauseai.info/
Amsterdam, NetherlandsearlyActively fundraisingTeam: 5

PEAKS

PEAKS is a coworking space in Zurich, Switzerland for professionals working on Effective Altruism and AI Safety research.

https://peaks-office.ch/
Zurich, Switzerlandearly

Penn State University

Penn State University hosts AI safety research led by Prof. Rui Zhang, whose group received Open Philanthropy funding to develop methods for detecting and mitigating sandbagging in AI systems.

https://ryanzhumich.github.io/
University Park, Pennsylvaniaestablished

PIBBSS

A nonprofit research organization that runs interdisciplinary fellowship and affiliate programs bringing researchers from complex systems sciences (neuroscience, ecology, economics, physics, and others) to work on AI safety and alignment research.

https://pibbss.ai/fellowship/
Pivotal Research Fellowship logo

Pivotal Research Fellowship

Pivotal Research runs a 9-week in-person research fellowship in London for early-career researchers working on AI safety, AI governance, and biosecurity. Fellows work alongside mentors from leading organizations to produce impactful research and launch careers in reducing global catastrophic risks.

https://www.pivotal-research.org/
London, UKearlyTeam: 8

Planned Obsolescence

A Substack newsletter by Ajeya Cotra exploring AI capabilities, timelines, and the societal implications of increasingly autonomous AI systems.

https://www.planned-obsolescence.org/
Team: 1

Plurality Institute

A nonprofit research hub that develops and experiments with plural technologies to strengthen democracy and support human cooperation at scale, bridging computer science, political science, and philosophy.

https://www.plurality.institute/
San Francisco, CaliforniaearlyTeam: 10

Poseidon Research

Poseidon Research is an independent AI safety laboratory conducting deep technical research in interpretability, control, and secure monitoring to make advanced AI systems transparent, trustworthy, and governable.

https://poseidonresearch.org/
New York, NYearly

Pour Demain

A Swiss non-profit think tank that develops evidence-based policy proposals on AI safety, biosecurity, and emerging technologies, bridging science, politics, and civil society for Switzerland and beyond.

https://www.pourdemain.ngo/
Basel, SwitzerlandearlyTeam: 8

Practical AI Alignment and Interpretability Research Group

A remote, non-profit research group focused on mechanistic interpretability of deep learning models, developing causal abstraction frameworks, open-source course materials, and mentorship programs for the AI safety community.

https://prair.group/
Remotewinding-down
Preamble Windfall Foundation logo

Preamble Windfall Foundation

The Preamble Windfall Foundation is a small Pittsburgh-based 501(c)(3) that supports animal welfare research and provides philanthropy guidance, notably through the Planetary Animal Welfare Survey (PAWS) project.

https://preambleforgood.org/
Pittsburgh, PAseed

Princeton University

Princeton University is a leading Ivy League research institution that conducts significant AI safety and AI governance research through several interdisciplinary centers and initiatives.

https://www.princeton.edu/
Princeton, NJ, USAmatureTeam: 8000

Probably Good

Probably Good is a nonprofit that helps individuals build high-impact careers through free, evidence-based guides, 1-on-1 advising, and a curated job board.

https://probablygood.org/
establishedTeam: 7

Psychosecurity Ethics @ EURAIO

A program within EURAIO (European Responsible Artificial Intelligence Office) that convenes expert summits and develops frameworks to address AI-driven psychological manipulation and protect civil liberties from autonomy-eroding AI systems.

https://www.psychosecurity.ai/
Leuven, BelgiumearlyTeam: 3

Purdue University

Purdue University is a major public research university in West Lafayette, Indiana, whose computer science department has received AI safety funding for research on language model robustness and adversarial deception detection.

https://www.purdue.edu/
West Lafayette, IndianamatureTeam: 10000

Quantified Uncertainty Research Institute

A nonprofit research organization that builds open-source tools and conducts research on forecasting, epistemics, and uncertainty quantification to improve decision-making for the long-term future of humanity.

https://quantifieduncertainty.org/
Berkeley, Californiaearly

RadicalxChange Foundation Ltd.

A nonprofit foundation promoting democratic innovation, plural technology, and new governance mechanisms such as quadratic voting and funding to enable more equitable and participatory collective decision-making.

https://www.radicalxchange.org/
Moraga, California, USAestablishedTeam: 5

RAISEimpact

RAISEimpact is a consulting program that helps AI safety organizations strengthen their management, leadership, and organizational culture to amplify their effectiveness.

https://www.raiseimpact.org/
early

RAND Corporation

A major nonprofit policy research organization that, through its Center on AI, Security, and Technology (CAST) and Global and Emerging Risks division, conducts influential research on AI safety, frontier model security, AI governance, and existential risk policy.

https://www.rand.org/
Santa Monica, California, USAmatureTeam: 1850
Rational Animations logo

Rational Animations

Rational Animations is a YouTube channel producing high-quality animated videos about AI safety, rationality, and effective altruism to reach mainstream audiences.

https://www.rationalanimations.com/
Remote (incorporated in Dover, DE)establishedTeam: 40

Rationality Meetups

Coordinates and supports rationality-focused community meetup groups worldwide, serving as a hub for ACX (Astral Codex Ten), LessWrong, and broader rationality community organizers.

https://www.rationalitymeetups.org/
United StatesearlyTeam: 1
Redwood Research logo

Redwood Research

A nonprofit AI safety research lab that pioneers threat assessment and mitigation techniques for advanced AI systems, with a current focus on AI control protocols and detecting strategic deception in language models.

https://www.redwoodresearch.org/
Berkeley, CAestablishedTeam: 12

Research on AI & International Relations

A research project fiscally sponsored by Convergence Analysis, focused on studying how AI technologies affect international relations, global governance, and geopolitical dynamics.

Responsible AI Collaborative

The Responsible AI Collaborative (TheCollab) is a nonprofit that maintains the AI Incident Database (AIID), the leading public repository of documented real-world AI harms and near-harms.

https://incidentdatabase.ai/
Los Angeles, CAearlyTeam: 4
Rethink Priorities logo

Rethink Priorities

A research-focused think-and-do tank that conducts empirical research across animal welfare, global health and development, AI, and other cause areas to uncover high-impact, neglected opportunities for improving the lives of humans and animals.

https://rethinkpriorities.org/
San Francisco, California (fully remote)establishedTeam: 61

Rice, Hadley, Gates & Manuel LLC

Rice, Hadley, Gates & Manuel (RHGM) is an international strategic consulting firm founded by former senior U.S. national security officials that helps companies navigate emerging markets and technology policy. Through Open Philanthropy funding, the firm has conducted research on AI accident risk and technology competition between the U.S. and China.

https://www.rhgm.com/
Menlo Park, CAmature
RiesgosIA.org logo

RiesgosIA.org

RiesgosIA.org is a Spanish-language non-profit providing open-access tools and educational resources on AI safety and governance, primarily serving Spanish-speaking communities.

https://riesgosia.org/
SpainearlyTeam: 2

Rising Tide

Blog by Helen Toner (Director of Strategy at CSET and former OpenAI board member) offering analysis on navigating the transition to advanced AI systems.

https://helentoner.substack.com/
Washington, DCTeam: 1
Safe AI Forum logo

Safe AI Forum

A US 501(c)(3) nonprofit dedicated to advancing international cooperation to reduce extreme AI risks, best known for running the International Dialogues on AI Safety (IDAIS) series that convenes leading scientists from around the world.

https://saif.org/
San Francisco Bay Area, USAearlyTeam: 9
Safe Superintelligence Inc. (SSI) logo

Safe Superintelligence Inc. (SSI)

Safe Superintelligence Inc. (SSI) is an AI research company founded by Ilya Sutskever focused solely on building safe superintelligence, with no other products or commercial distractions.

https://ssi.inc/
Palo Alto, CAestablishedTeam: 20
SaferAI logo

SaferAI

A French nonprofit that develops AI risk management frameworks, independently rates AI companies' safety practices, and contributes to international AI governance standards.

https://www.safer-ai.org/
Paris, FranceearlyTeam: 11

Sage Future

Sage builds tools to improve forecasting skills and public understanding of AI capabilities, with the goal of reducing global catastrophic risks from emerging technologies.

https://sage-future.org/
Dover, DE, USAearlyTeam: 4

Samotsvety Forecasting

Samotsvety is an elite team of superforecasters applying rigorous probability analysis to high-stakes questions in AI risk, nuclear risk, and existential risk. They are widely regarded as one of the best forecasting teams in the world.

https://samotsvety.org/
establishedTeam: 15

Saturn Data

Saturn Data builds FPGA-accelerated servers for high-memory, high-bandwidth workloads and has received funding to prototype flexible hardware-enabled governors (FlexHEGs) for AI compute governance.

https://saturndata.com/
San Francisco Bay Area, CaliforniaseedTeam: 2
Saving Humanity from Homo Sapiens (SHfHS) logo

Saving Humanity from Homo Sapiens (SHfHS)

SHfHS is a small philanthropic foundation that identifies and funds researchers and organizations working on existential risk reduction. It acts as a funding intermediary rather than conducting direct research.

http://shfhs.org/
seedTeam: 3

Science of Trustworthy AI

A research funding program run by Schmidt Sciences that supports foundational technical research on understanding, predicting, and controlling risks from frontier AI systems. The program funds academic and nonprofit researchers working on AI safety science, evaluation methodology, and oversight of advanced AI.

https://www.schmidtsciences.org/trustworthy-ai/
New York, NYestablished

Secure AI Project

A nonprofit that develops and advocates for pragmatic policies to reduce the risk of severe harm from advanced AI, promoting transparency, accountability, and safe development through state and federal legislation.

https://secureaiproject.org/
San Francisco, CaliforniaearlyTeam: 7
SecureBio logo

SecureBio

A biosecurity nonprofit working to protect humanity against catastrophic pandemics through AI risk evaluation, pathogen-agnostic early warning surveillance, and DNA synthesis screening.

https://securebio.org/
Cambridge, MA, USAestablishedTeam: 41

SeedAI

SeedAI is a Washington, D.C. nonprofit working at the intersection of AI policy and practical application, helping policymakers and communities across the U.S. understand, adopt, and shape AI responsibly.

https://www.seedai.org/
Washington, D.C.earlyTeam: 8

Seldon Labs

An AI security accelerator and research lab based in San Francisco that invests in and supports early-stage startups building infrastructure for safe AGI deployment.

https://seldonlab.com/
San Francisco, CaliforniaseedTeam: 5

Sentience Institute

A nonprofit think tank researching the expansion of humanity's moral circle, with a primary focus on digital minds and the moral status of AI systems.

https://www.sentienceinstitute.org/
New York, NY, USAestablishedTeam: 6

Sentinel

A foresight and emergency response nonprofit that monitors global catastrophic risks using AI-augmented analysis and expert forecasters, publishing weekly risk briefings and maintaining a reserve team for rapid crisis response.

https://sentinel-team.org/
Remote (distributed team)earlyTeam: 15
Siliconversations logo

Siliconversations

Siliconversations is a YouTube channel that creates animated videos explaining AI safety risks and existential risk from advanced AI to general audiences. It is run by a former quantum scientist who became a full-time content creator.

https://www.youtube.com/@Siliconversations
earlyActively fundraisingTeam: 1
Simon Institute for Longterm Governance logo

Simon Institute for Longterm Governance

A Geneva-based think tank that fosters international cooperation on governing frontier AI by conducting research, facilitating dialogue between technical and policy communities, and training diplomats and civil servants.

https://simoninstitute.ch/
Geneva, SwitzerlandestablishedTeam: 9

Simon McGregor

Simon McGregor is a complex adaptive systems researcher at the University of Sussex who works on formal theories of agency and cognition, and organizes workshops bridging AI safety and artificial life research.

Brighton, United KingdomseedTeam: 1

Simplex

AI safety research organization applying computational mechanics from physics and computational neuroscience to build a rigorous science of intelligence, with a focus on understanding the internal representations and emergent behavior of neural networks.

https://www.simplexaisafety.com/
Emeryville, CA, USAearly

Sincxpress Education

Sincxpress Education is a STEM education company founded by Dr. Mike X Cohen that produces online courses and textbooks on applied mathematics, deep learning, and mechanistic interpretability for AI safety. Its courses have reached over 300,000 learners worldwide.

https://sincxpress.com/
Bucharest, RomaniaearlyTeam: 1

Singapore AI Safety Hub

Singapore's first civil society organization for AI safety, providing a co-working space, events, and community hub for researchers and professionals working on AI safety governance, technical research, and field-building in Asia.

https://www.aisafety.sg/
SingaporeseedTeam: 5

SLT Summit organizers

Organizers of the Singular Learning Theory and Alignment Summit, a conference series connecting mathematical foundations of learning theory with AI alignment research.

https://singularlearningtheory.com/
Berkeley, CAearlyTeam: 4

Softmax

Softmax is an AI alignment research startup developing the science of organic alignment through multi-agent reinforcement learning. Founded by Emmett Shear, Adam Goldstein, and David Bloomin, the company studies how agents learn to cooperate, share goals, and form collectively intelligent systems.

https://softmax.com/
San Francisco, CAseedTeam: 10

SPARC

SPARC is a free two-week summer program for mathematically gifted high school students, teaching applied rationality, decision theory, and AI safety to cultivate a generation of thoughtful technical leaders.

https://www.sparc.camp/
Hayward, CAestablished

Species

Species is a YouTube channel run by Drew Spartz that produces high-effort mini-documentaries educating a general audience about AI risk and the implications of advancing AGI.

https://www.youtube.com/@AISpecies
San Francisco, CAearly

Stanford Existential Risks Initiative

A Stanford University initiative that hosts and promotes academic scholarship on existential risks, running research fellowships, conferences, courses, and discussion groups focused on AI, nuclear war, pandemics, and climate change.

https://seri.stanford.edu/
Stanford, CaliforniaestablishedTeam: 13

Stanford University

Stanford University is a leading research university hosting several AI safety-relevant programs, including the Human-Centered AI Institute (HAI), the Existential Risks Initiative (SERI), the Center for International Security and Cooperation (CISAC), and the Center for AI Safety.

https://www.stanford.edu/
Stanford, California, USAmatureTeam: 19705

Steve Byrnes's Brain-Like AGI Safety

Steve Byrnes is a physicist and Research Fellow at Astera Institute working on AI safety through a neuroscience-informed lens, focusing on alignment challenges specific to future brain-like AGI systems.

https://sjbyrnes.com/
Boston, MAestablishedTeam: 1

Stiftung Neue Verantwortung

interface (formerly Stiftung Neue Verantwortung) is a Berlin-based independent think tank producing technology policy analysis and ideas for European policymakers and the public.

https://www.interface-eu.org/
Berlin, Germanyestablished
Stop AGI logo

Stop AGI

Stop AGI is a project and website launched by Andrea Miotti in April 2023 to communicate the extinction risks of artificial general intelligence to the public and propose policy solutions to prevent its development.

https://stop.ai/
London, UKearly

Stop AI

Stop AI is a grassroots activist organization that uses non-violent civil disobedience and public advocacy to demand a permanent, enforceable global ban on the further development of frontier AI technology.

https://www.stopai.info/
Oakland, CAearlyTeam: 5

Straumli

Straumli is an AI safety company that offers managed auditing and self-serve evaluations to help AI developers identify misuse risks and ship safer models faster.

https://straumli.ai/
Bucharest, RomaniaearlyTeam: 5

Study and Training Related to AI Policy Careers

An Open Philanthropy grant program providing scholarship and career development funding for individuals pursuing careers in AI governance and policy.

https://www.openphilanthropy.org/funding-for-study-and-training-related-to-ai-policy-careers/
San Francisco, CAestablished
Successif logo

Successif

Successif helps mid-career and senior professionals transition into high-impact careers in AI safety and governance through free personalized advising, workshops, and job market research.

https://www.successif.org/
Towson, Maryland, USearlyTeam: 15

Supervised Program for Alignment Research

SPAR is a part-time, remote research fellowship that pairs aspiring AI safety and policy researchers with experienced mentors for 3-month research projects. It is one of the largest AI safety research fellowships by participant count.

https://sparai.org/
Remote (Kairos headquartered in San Francisco, CA)earlyTeam: 3

Surge AI

Surge AI is a data labeling and AI training data company that provides high-quality human annotation, RLHF datasets, and adversarial red-teaming services to frontier AI labs including Anthropic, OpenAI, Google, Microsoft, and Meta.

https://surgehq.ai/
San Francisco, CAestablishedTeam: 121
Survival and Flourishing Fund (SFF) logo

Survival and Flourishing Fund (SFF)

A major philanthropic fund that organizes grant applications and evaluates them using the S-Process algorithm to direct Jaan Tallinn's giving toward organizations working to ensure humanity's long-term survival and flourishing. It is the second-largest funder of AI safety after Open Philanthropy.

https://survivalandflourishing.fund/
matureTeam: 11

Swiss AI Safety Summer Camp

A free in-person bootcamp in Switzerland introducing students and early-career researchers to AI safety through technical and conceptual coursework. The camp covers alignment, mechanistic interpretability, and governance tracks.

https://www.aisafetycamp.ch/
Melchtal, SwitzerlandseedTeam: 8

Talos Network

A German nonprofit that cultivates the next generation of European AI policy leaders through its flagship Talos Fellowship, combining training, a Brussels policymaking summit, and paid placements at leading think tanks and policy organizations.

https://www.talosnetwork.org/

TamperSec

A hardware security startup developing tamper-proof enclosures for AI chips to prevent physical attacks on AI hardware and enable international AI governance through verifiable compliance mechanisms.

https://tampersec.com/
GermanyseedTeam: 4

Tarbell Center for AI Journalism

A nonprofit supporting journalism that helps society navigate the emergence of increasingly advanced AI, through fellowships, grants, and its own publication Transformer.

https://www.tarbellcenter.org/
Claymont, Delaware, USAestablishedActively fundraisingTeam: 9
Team Shard logo

Team Shard

Team Shard is a small alignment research collective led by Alex Turner (TurnTrout) that studies how reinforcement learning induces values in trained agents, with the goal of learning to reliably instill human-compatible values in AI systems.

https://turntrout.com/team-shard
Berkeley, California, USAearlyTeam: 5

Technical Alignment Impossibility Proofs

An independent research project focused on proving formal impossibility results in AI alignment using theoretical computer science methods, led by Alexander Bistagne as a Ronin Institute Fellow.

Los Angeles, California, United StatesseedTeam: 1

Technical Alignment Research Accelerator (TARA)

TARA is a free 14-week part-time technical AI safety training program for Python programmers in the Asia-Pacific region, enabling participants to develop AI safety research skills without relocating or leaving their jobs.

https://www.taraprogram.org/
Sydney, AustraliaearlyTeam: 2

Technical University of Munich

Technical University of Munich (TUM) is one of Europe's leading research universities, with significant AI safety and reliable AI research programs including the Konrad Zuse School of Excellence in Reliable AI (relAI).

https://www.tum.de/
Munich, GermanymatureTeam: 12000

Technion - Israel Institute of Technology

Israel's oldest and largest research university, founded in 1912, with particular strength in computer science, engineering, and AI research. It ranks first in Europe and second globally for AI research output.

https://www.technion.ac.il/en/
Haifa, IsraelmatureTeam: 4535

The AI Governance Archive (TAIGA)

TAIGA is a private platform for qualified AI governance researchers to share non-public research, coordinate efforts, and find collaborators. It serves as a centralized hub to improve the efficiency and effectiveness of the transformative AI strategy and governance research community.

https://www.taigarchive.com/
earlyTeam: 4
The AI Policy Network (AIPN) logo

The AI Policy Network (AIPN)

AIPN is a bipartisan 501(c)(4) advocacy organization that lobbies the U.S. federal government to enact policies preparing America for the emergence of AGI and advanced AI systems. It brings together government leaders, technology policy experts, and technical researchers to champion human control of transformative AI.

https://theaipn.org/
Washington, DCearlyTeam: 8

The AI Policy Podcast

A biweekly podcast from CSIS's Wadhwani AI Center hosted by Gregory C. Allen, covering AI policy, regulation, national security, and geopolitics.

https://www.csis.org/podcasts/ai-policy-podcast
Washington, DCestablished

The AI Risk Network (ARN)

A Baltimore-based nonprofit media platform that produces podcasts, videos, and social content to bring AI extinction risk into mainstream public conversation.

https://www.guardrailnow.org/
Baltimore, MD, USAearlyTeam: 11
The AI Whistleblower Initiative (AIWI) logo

The AI Whistleblower Initiative (AIWI)

An independent nonprofit supporting whistleblowers at frontier AI companies through expert guidance, legal support, and secure anonymous reporting channels. Now operating as The AI Whistleblower Initiative (AIWI).

https://aiwi.org/
London, United Kingdom / Berlin, Germanyearly
The Alliance for Secure AI Action logo

The Alliance for Secure AI Action

A Washington, D.C.-based 501(c)(3) nonprofit that educates the public, policymakers, and media about the risks of advanced AI and advocates for bipartisan safeguards before AGI arrives.

https://secureainow.org/
Washington, D.C.earlyTeam: 10
The Australian Responsible Autonomous Agents Group logo

The Australian Responsible Autonomous Agents Group

A cross-institutional Australian research collective focused on multi-objective reinforcement learning approaches to AI safety and alignment, with researchers at Federation University, Deakin University, and UNSW.

https://araac.au/
Ballarat and Geelong, Victoria, AustraliaearlyTeam: 12

The Building Capacity Blog

A Substack newsletter by Gergő Gáspár covering fieldbuilding strategy, careers, and marketing for the AI Safety and Effective Altruism communities.

https://fieldbuilding.substack.com/
London, United KingdomearlyTeam: 1
The Cognitive Revolution logo

The Cognitive Revolution

A leading AI podcast hosted by Nathan Labenz that interviews AI builders, researchers, and investors to help leaders make sense of transformative developments in artificial intelligence.

https://www.cognitiverevolution.ai/
Detroit, MI, USAestablishedTeam: 2
The Compendium logo

The Compendium

The Compendium is a living document and website that presents a comprehensive, accessible argument for why artificial general intelligence poses an extinction risk to humanity and what can be done about it.

https://www.thecompendium.ai/
London, United Kingdom
The Future Society logo

The Future Society

A nonprofit organization based in the US and Europe that works to align AI through better governance, developing and advocating for AI governance mechanisms ranging from laws and regulations to voluntary frameworks.

https://thefuturesociety.org/
Boston, MA, USAestablishedTeam: 7

The Goodly Institute

A nonprofit R&D lab (operating as Goodly Labs) that builds collective intelligence tools to combat misinformation, strengthen democratic deliberation, and foster civic engagement through rigorous social science research.

https://www.goodlylabs.org/
Benicia, CAearlyTeam: 14

The Intrinsic Perspective

Erik Hoel's Substack newsletter covering consciousness, AI, science, literature, and cultural commentary, with a focus on bridging disciplinary barriers between the sciences and humanities.

https://www.theintrinsicperspective.com/
Cape Cod, Massachusetts, USAestablishedTeam: 1

The Midas Project

An AI safety advocacy nonprofit that monitors major AI companies' safety policies and conducts public campaigns to pressure the industry toward greater transparency, accountability, and responsible development practices.

https://www.themidasproject.com/
Tulsa, Oklahoma, United StatesseedTeam: 1
The Millennium Project logo

The Millennium Project

A global participatory futures research think tank that produces the annual State of the Future report and tracks 15 Global Challenges facing humanity, with growing focus on AGI governance and existential risk.

https://www.millennium-project.org/
Washington, DC, United StatesestablishedTeam: 6
The Navigation Fund logo

The Navigation Fund

The Navigation Fund is a major philanthropic funder that grants over $60 million annually to high-impact organizations working on climate change, farm animal welfare, criminal justice reform, open science, and AI safety.

https://www.navigation.org/
Berkeley, CAmatureTeam: 12

The Power Law

The Power Law is a Substack newsletter by Peter Wildeford (also known as Peter Hurford) covering AI forecasting, AI policy, national security, and emerging technology.

https://peterwildeford.substack.com/
Washington, DCestablishedTeam: 1

The Society Library

A nonprofit that archives humanity's ideas, ideologies, and world-views through structured debate mapping, with a focus on AI safety, alignment, and democratic governance of AI.

https://www.societylibrary.org/
Orlando, FLearlyActively fundraisingTeam: 4
The Unjournal logo

The Unjournal

A nonprofit that commissions and funds open, expert evaluation and quantitative rating of economics and social science research relevant to global priorities, without the constraints of traditional academic journals.

https://www.unjournal.org/
Remote (US-registered)earlyTeam: 7
The Wilson Center logo

The Wilson Center

The Woodrow Wilson International Center for Scholars is a congressionally chartered, nonpartisan think tank in Washington, DC that bridges the world of ideas and the world of policy through research, analysis, and scholarship on global affairs.

https://www.wilsoncenter.org/
Washington, DCwinding-downTeam: 150

Theorem Labs

Theorem Labs is an AI and programming languages research lab that builds tools to formally verify the correctness of AI-generated code before it ships.

https://theoremlabs.com/
San Francisco, CAseedTeam: 4

Thomas Liao

Individual AI safety researcher who created and maintains the Foundation Model Tracker, a website tracking the release of large AI models. Received a $15,000 grant from Open Philanthropy in 2024 to support this work.

https://thomasliao.com/
Berkeley, California, USAseedTeam: 1

Threading the Needle

A Substack newsletter by Anton Leicht covering the political economy of AI progress, examining how institutions and political incentives interact with rapid technological change.

https://writing.antonleicht.me/
Berlin, GermanyearlyTeam: 1
Timaeus logo

Timaeus

An AI safety research organization applying Singular Learning Theory and developmental interpretability to understand how capabilities and values emerge during neural network training.

https://timaeus.co/
Remote (Berkeley, Melbourne, London)earlyTeam: 16
Tony Blair Institute for Global Change logo

Tony Blair Institute for Global Change

A not-for-profit policy institute that advises governments and political leaders worldwide on strategy, policy, and delivery, with a major focus on AI governance and technology adoption in the public sector.

https://institute.global/
London, UKmatureTeam: 786

Topos Institute

A nonprofit research institute applying category theory, topos theory, and type theory to develop mathematical foundations and open-source tools for collective sense-making, collaborative modeling, and shaping technology for public benefit.

https://topos.institute/
Berkeley, CA, USAestablishedTeam: 19

Touro College & University System

Touro is a large private Jewish university system headquartered in New York City, operating more than 38 schools across the US and internationally. It received an Open Philanthropy grant to support Professor Gabriel Weil's legal research on using tort liability to mitigate catastrophic AI risks.

https://www.touro.edu/
New York, NYmature

Training For Good

Training for Good was an EA-incubated organization that upskilled talent for high-impact careers in AI policy and journalism, running the EU Tech Policy Fellowship and the Tarbell Fellowship before spinning both off as independent organizations.

https://www.trainingforgood.com/
London, United Kingdomwinding-down
Trajectory Labs logo

Trajectory Labs

Trajectory Labs is a nonprofit coworking and events space in downtown Toronto dedicated to AI safety research and community building. It provides free workspace, weekly events, and a peer network to grow Toronto's AI safety ecosystem.

https://www.trajectorylabs.org/
Toronto, Ontario, CanadaearlyTeam: 4
Transformative Futures Institute logo

Transformative Futures Institute

A nonprofit research institute applying foresight methods to anticipate and mitigate societal-scale risks from advanced artificial intelligence. TFI produces rigorous research for policymakers and decision-makers working to prevent catastrophic AI outcomes.

https://transformative.org/
Wichita, KSearlyTeam: 6
Transluce logo

Transluce

Transluce is an independent nonprofit AI research lab that builds open, scalable technology for understanding AI systems and steering them in the public interest.

https://transluce.org/
San Francisco, CAearlyTeam: 20

TruthfulAI

TruthfulAI is a nonprofit AI safety research organization based in Berkeley that studies situational awareness, deception, and hidden reasoning in large language models.

https://truthful.ai/
Berkeley, CaliforniaearlyTeam: 4

UCLA School of Law

A leading U.S. law school that conducts research on AI governance, policy, and safety through its PULSE program and Institute for Technology, Law & Policy.

https://law.ucla.edu/
Los Angeles, CAmature
UK AI Security Institute (UK AISI) logo

UK AI Security Institute (UK AISI)

UK government research organization that tests frontier AI systems, advances AI safety science, and informs policymakers about the risks and capabilities of advanced AI.

https://www.aisi.gov.uk/
London, UKmatureTeam: 100

Ulyssean PBC

Ulyssean builds integrated hardware and software to secure the data center infrastructure where frontier AI models are trained and deployed, protecting AI model weights against state-sponsored and intelligence-grade threats.

https://ulyssean.com/
San Francisco, CA, USAseed

Université de Montréal

Canada's second-largest research university by research volume, and the institutional home of leading AI safety researchers including Yoshua Bengio and David Krueger. UdeM anchors Montreal's position as a global hub for AI research and responsible AI development.

https://www.umontreal.ca/
Montreal, Quebec, CanadamatureTeam: 10000

University of British Columbia

Jeff Clune's AI safety and alignment research lab at UBC's Department of Computer Science, focused on deep learning, AI interpretability, and open-ended AI systems.

https://www.cs.ubc.ca/people/jeff-clune
Vancouver, BC, Canadaestablished

University of California, Berkeley

UC Berkeley is a leading public research university and one of the world's foremost hubs for AI safety research, hosting CHAI, BAIR, CLTC, and other major centers focused on beneficial and safe AI development.

https://humancompatible.ai/
Berkeley, CAmatureTeam: 27

University of California, San Diego

UC San Diego is a major public research university conducting AI safety-relevant research including LLM persuasion evaluation, trustworthy machine learning, and safe autonomous systems.

https://ucsd.edu/
La Jolla, California, USAmatureTeam: 41773

University of California, Santa Barbara

UC Santa Barbara is a major public research university whose Center for Responsible Machine Learning conducts AI safety-adjacent research on fairness, bias, transparency, and the societal impacts of AI systems.

https://ml.ucsb.edu/
Santa Barbara, CaliforniamatureTeam: 75

University of California, Santa Cruz

UC Santa Cruz is a public research university whose Baskin School of Engineering conducts AI safety-relevant research, including adversarial robustness work supported by Open Philanthropy.

https://www.ucsc.edu/
Santa Cruz, CaliforniamatureTeam: 4465
University of Cambridge logo

University of Cambridge

One of the world's oldest and most prestigious universities, founded in 1209, and a major hub for AI safety and existential risk research through centers such as CSER and the Leverhulme Centre for the Future of Intelligence.

https://www.cam.ac.uk/
Cambridge, United KingdommatureTeam: 13113

University of Chicago

A leading private research university on Chicago's South Side that hosts several AI safety and existential risk research programs, including the Existential Risk Laboratory (XLab), the Chicago Human+AI Lab, and the Harris School's Technology and Society Initiative.

https://www.uchicago.edu/
Chicago, ILmature
University of Illinois Urbana-Champaign logo

University of Illinois Urbana-Champaign

A major public research university hosting several prominent AI safety research groups, including work on formal neural network verification, adversarial robustness, and AI agent security benchmarks.

https://siebelschool.illinois.edu/
Urbana-Champaign, Illinois, USAmature

University of Louisville (Dr. Roman Yampolskiy's Cybersecurity Lab)

A university research lab at the University of Louisville directed by Dr. Roman Yampolskiy, one of the founders of the field of AI safety, conducting research on the theoretical limits of AI controllability, AI containment, and cybersecurity.

https://faculty.cse.louisville.edu/roman/
Louisville, Kentucky, USAestablished
University of Maryland logo

University of Maryland

The University of Maryland, College Park is a flagship public research university conducting extensive AI safety, trustworthy AI, and responsible AI research through multiple interdisciplinary institutes and centers.

https://umd.edu/
College Park, MarylandmatureTeam: 10716

University of Massachusetts Amherst

UMass Amherst is a public research university whose AI safety-relevant work is centered in the SCALAR Lab, led by Associate Professor Scott Niekum, which focuses on safe and aligned machine learning and robotics.

https://people.cs.umass.edu/~sniekum/
Amherst, MA, USAestablished

University of Michigan

A major public research university in Ann Arbor, Michigan, hosting faculty conducting AI safety and alignment research funded by organizations including Open Philanthropy.

https://umich.edu/
Ann Arbor, MImature

University of Minnesota, Twin Cities

A major public research university and Minnesota's only land-grant institution, home to AI and NLP research relevant to AI safety, including benchmarking of LLM capabilities on high-stakes professional tasks.

https://twin-cities.umn.edu/
Minneapolis, MinnesotamatureTeam: 25000

University of Oxford

One of the world's oldest and most prestigious research universities, Oxford has been a central hub for AI safety and existential risk research through institutions like the Future of Humanity Institute (FHI) and the Oxford Martin AI Governance Initiative (AIGI).

https://www.ox.ac.uk/
Oxford, United KingdommatureTeam: 16905

University of Pavia

One of the world's oldest universities, home to the Center for Reasoning, Normativity and AI (CERNAI), which conducts AI safety and alignment research led by Prof. Federico Faroldi.

https://en.unipv.it/en
Pavia, Italymature

University of Pennsylvania

An Ivy League research university in Philadelphia with multiple programs relevant to AI safety, including formal verification of autonomous systems, AI governance research, and AGI international security analysis.

https://www.upenn.edu/
Philadelphia, PAmature

University of Southern California

Major private research university in Los Angeles that received SFF FlexHEGs funding for hardware-enabled AI governance research, and hosts multiple labs and centers working on AI safety, alignment, and responsible AI development.

https://www.usc.edu/
Los Angeles, California, USAmature

University of Texas at Austin

A major public research university whose AI safety-relevant work is centered on the AI+Human Objectives Initiative (AHOI) and Scott Aaronson's computational-complexity-meets-alignment research group, both supported by Open Philanthropy.

https://utexas.edu/
Austin, Texas, USAmature
University of Toronto logo

University of Toronto

The University of Toronto is home to the Schwartz Reisman Institute for Technology and Society, a leading interdisciplinary research institute dedicated to ensuring that advanced AI develops safely, ethically, and in the public interest.

https://srinstitute.utoronto.ca/
Toronto, Ontario, Canadaestablished

University of Toronto & University of Michigan

A cross-institutional AI safety research collaboration between Zhijing Jin's Jinesis AI Lab at the University of Toronto and Rada Mihalcea's Language and Information Technologies (LIT) Lab at the University of Michigan, focused on multi-agent LLM safety, causal reasoning, and AI alignment.

https://zhijing-jin.com/home/
Toronto, Canada & Ann Arbor, Michigan, USAearly
University of Tübingen logo

University of Tübingen

One of Germany's oldest and most prestigious research universities, founded in 1477 and designated a University of Excellence, hosting leading AI and machine learning research groups including the Tübingen AI Center and the Cluster of Excellence in Machine Learning.

https://uni-tuebingen.de/en/
Tübingen, GermanymatureTeam: 8310

University of Utah

The ARIA Lab (Aligned, Robust, and Interactive Autonomy Lab) at the University of Utah, led by Professor Daniel S. Brown, conducts research on human-AI alignment, reward learning, and AI safety. The lab develops algorithms and theory to enable AI systems to safely learn from and interact with humans.

https://aria-lab.cs.utah.edu/
Salt Lake City, Utahestablished

University of Virginia

The University of Virginia is a major public research university in Charlottesville, Virginia, with faculty and programs conducting AI safety and alignment research.

https://www.virginia.edu/
Charlottesville, Virginia, USAmatureTeam: 30000

University of Washington

A major public research university in Seattle with significant AI research programs, including responsible AI and AI safety-relevant work through its Paul G. Allen School of Computer Science & Engineering and the RAISE center.

https://www.washington.edu/
Seattle, Washington, USAmature

University of Waterloo

A leading Canadian research university founded in 1957, home to AI safety-relevant research programs including technical AI safety grants from Coefficient Giving and CIFAR's Canadian AI Safety Institute program.

https://uwaterloo.ca/
Waterloo, Ontario, Canadamature

University of Wisconsin–Madison

A major public research university in Madison, Wisconsin, home to AI safety-relevant research including interpretability work in the Statistics department and student-led AI safety initiatives.

https://www.wisc.edu/
Madison, Wisconsin, USAmatureTeam: 27293

Upgradable

Upgradable is an applied research lab and life optimization service that helps effective altruists, AI safety researchers, and existential risk advocates lead more impactful lives.

https://www.upgradable.org/
San Francisco, CAearlyTeam: 5

Usman Anwar

AI safety researcher who completed his PhD at Cambridge's Computational and Biological Learning lab, focusing on alignment and monitorability of large language models.

https://uzman-anwar.github.io/
Cambridge, England, United KingdomestablishedTeam: 1

Vanderbilt University

A private research university in Nashville, Tennessee, that received SFF Fairness Track funding for research related to AI fairness, algorithmic equity, and the societal implications of AI systems.

https://www.vanderbilt.edu/
Nashville, Tennessee, USAmature
Victoria Krakovna's Blog logo

Victoria Krakovna's Blog

Personal blog of Victoria Krakovna, Senior Research Scientist at Google DeepMind and co-founder of the Future of Life Institute, covering AI alignment research and related topics.

https://vkrakovna.wordpress.com/
London, United KingdomTeam: 1

Virtue AI

Virtue AI is an AI-native security and compliance platform that helps enterprises secure their AI systems and agents against threats like prompt injection, hallucinations, and data poisoning. It was founded in 2024 by leading AI safety researchers Bo Li, Dawn Song, Carlos Guestrin, and Sanmi Koyejo.

https://www.virtueai.com/
San Francisco, CAearlyTeam: 24
Vista Institute for AI Policy logo

Vista Institute for AI Policy

The Vista Institute for AI Policy builds AI law and policy as an academic field and develops talent for careers in AI governance, with a focus on promoting risk-mitigating U.S. regulation.

https://vistainstituteai.org/
Washington, DCearly

Wavefront Security

Wavefront Security provides at-cost cybersecurity services to nonprofits and policy organizations in the AI safety, biosecurity, and global catastrophic risk space.

https://www.wavefrontsecurity.com/
California, USAearly
WhiteBox Research logo

WhiteBox Research

WhiteBox Research is a Manila-based nonprofit that trains early-career researchers in mechanistic interpretability and AI safety, with a focus on building research capacity in Southeast Asia.

https://www.whiteboxresearch.org/
Quezon City, PhilippinesearlyTeam: 5

Worcester Polytechnic Institute & University of Massachusetts Amherst

A collaborative hardware security research effort between WPI and UMass Amherst focused on developing tamper-detection and verification mechanisms for semiconductor chips, with applications to AI governance and hardware-enabled guarantees.

Worcester, MA & Amherst, MA, USA

Workshop Labs

Workshop Labs is a public benefit corporation building billions of personalized, privacy-preserving AI models with a mission to keep humans empowered as AI advances.

https://workshoplabs.ai/
San Francisco, CApre-seedTeam: 8

World Economic Forum

The World Economic Forum is an international non-governmental organization that convenes global leaders from business, government, academia, and civil society to address major challenges including AI governance and emerging technology risks.

https://www.weforum.org/
Cologny, Geneva, SwitzerlandmatureTeam: 990

Wytham Abbey

A historic manor house near Oxford acquired by Effective Ventures Foundation in 2022 as a dedicated conference and retreat venue for the AI safety and effective altruism communities, which operated for approximately two years before being sold in 2025.

https://www.wythamabbey.org/
Wytham, Oxford, Englandwinding-downTeam: 5
xAI logo

xAI

Elon Musk's AI company, founded in 2023, focused on building maximally truth-seeking AI and understanding the nature of the universe. Creator of Grok, an AI chatbot integrated with X (formerly Twitter).

https://x.ai/
Palo Alto, CAmatureTeam: 1200

Yale University

Yale University is a private Ivy League research university in New Haven, Connecticut, home to several AI safety and governance research programs, including the Schmidt Program on AI and National Power, the Center for Algorithms, Data, and Market Design at Yale (CADMY), and the Digital Ethics Center.

https://www.yale.edu/
New Haven, ConnecticutmatureTeam: 18893