Organizations
5050
5050 is a free 12-14 week company-builder program run by Fifty Years that helps scientists, researchers, and engineers become deep-tech startup founders, with a dedicated AI safety track.
https://www.fiftyyears.com/5050/ai
80,000 Hours
80,000 Hours is a nonprofit that provides free research, career advice, and a job board to help people find careers that effectively tackle the world's most pressing problems, with a current focus on AI safety.
https://80000hours.org/
AAAI/ACM Conference on Artificial Intelligence, Ethics and Society
AIES is a peer-reviewed academic conference series jointly organized by AAAI and ACM that brings together a multidisciplinary community to examine the ethical, social, and policy dimensions of artificial intelligence.
https://www.aies-conference.com/
ACM FAccT
The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT) is a premier peer-reviewed academic conference that brings together researchers and practitioners to investigate fairness, accountability, and transparency in socio-technical systems.
https://facctconference.org/
ACX Atlanta
ACX Atlanta (The Atlanta Moloch Slayers) is a monthly in-person meetup group for rationalists and readers of the Slate Star Codex and Astral Codex Ten blogs in Atlanta, Georgia.
https://acxatlanta.com/
Adam Jermyn
Adam Jermyn is a physicist and AI safety researcher at Anthropic, working on neural network interpretability and inner alignment. He previously conducted independent AI alignment research after transitioning from a career in computational astrophysics.
https://adamjermyn.com/
Advanced Research + Invention Agency (ARIA)
ARIA is a UK government research funding agency that backs high-risk, high-reward R&D in underexplored areas, including a major £59 million programme on formal mathematical safety guarantees for AI systems.
https://aria.org.uk/
AE Studio
AE Studio is a bootstrapped technology studio and AI alignment research organization that funds neglected safety research from its software consulting profits. Their work spans brain-computer interfaces, self-other overlap fine-tuning to reduce LLM deception, and consciousness research.
https://www.ae.studio/
Aether
Aether is an independent research lab focused on LLM agent safety, conducting technical research on the alignment, control, and evaluation of large language model agents.
https://aether-ai-research.org/
Agent Foundations Field Network
AFFINE (Agent Foundations FIeld NEtwork) runs intensive superintelligence alignment seminars and fellowships to upskill promising newcomers in agent foundations and AI alignment research.
https://www.affi.ne/
AGI Inherent Non-Safety
A research project developing non-maximizing, aspiration-based designs for AI agents, arguing that objective-function maximization is inherently unsafe in sufficiently capable AGI systems.
https://pik-gane.github.io/satisfia/
AI & Democracy Foundation
The AI & Democracy Foundation accelerates innovation, evaluation, and adoption of deliberative, democratic, human-centered governance and alignment systems for and with AI, serving as both a nonprofit funder and advisor to philanthropic organizations, AI companies, civil society, and governments.
https://aidemocracyfoundation.org/
AI Alignment Awards
AI Alignment Awards is a prize contest program that awards up to $100,000 for novel research progress on core AI alignment problems. It is a project of the Players Philanthropy Fund, funded by Open Philanthropy.
https://www.alignmentawards.com/
AI Alignment Forum
A curated online hub for researchers to discuss technical AI alignment research, operated by Lightcone Infrastructure. It serves as the primary venue for sharing and coordinating cutting-edge alignment ideas across organizations including MIRI, OpenAI, DeepMind, CHAI, and others.
https://www.alignmentforum.org/
AI Alignment Foundation (AIAF)
A 501(c)(3) nonprofit that funds, accelerates, and advocates for AI alignment research by providing engineering teams, compute, and infrastructure to researchers pursuing neglected approaches.
https://www.aialignmentfoundation.org/
AI Alignment Slack
A large community Slack workspace for AI safety researchers, practitioners, and enthusiasts to connect, collaborate, and discuss alignment-related topics in real time.
https://ai-alignment.slack.com/
AI Digest
AI Digest creates interactive explainers and demos to help policymakers and the public understand AI capabilities and their effects, operated as a project of Sage Future, a US 501(c)(3) charity.
https://theaidigest.org/
AI Explained
AI Explained is a London-based YouTube channel by a creator known as Philip that provides hype-free coverage of AI developments, capabilities, and safety topics for a general audience.
https://www.youtube.com/@AIExplainedYT
AI Forensics
A European non-profit that investigates influential and opaque algorithms, holding major tech platforms accountable through independent technical audits and free software auditing tools.
https://aiforensics.org/
AI Futures Project
A nonprofit research organization that develops detailed scenario forecasts of advanced AI trajectories to inform policymakers, researchers, and the public.
https://www.aifutures.org/
AI Governance & Safety Canada (AIGS Canada)
AIGS Canada is a nonpartisan Canadian not-for-profit working to ensure that advanced AI is safe and beneficial for all, by catalysing Canadian leadership in AI governance and safety.
https://aigs.ca/
AI Governance and Safety Institute (AIGSI)
A small nonprofit conducting outreach, education, and advocacy to improve institutional responses to existential risk from advanced AI. Led by Mikhail Samin and based in London.
https://aigsi.org/
AI Impacts
A research project that investigates decision-relevant questions about the future of artificial intelligence, including AI timelines, expert forecasts, and the potential societal impacts of advanced AI systems.
https://aiimpacts.org/
AI Lab Watch
A project that tracks and evaluates frontier AI companies on their safety practices through a weighted scorecard, focusing on actions labs should take to avert extreme risks from advanced AI.
https://ailabwatch.org/
AI Objectives Institute
A nonprofit R&D lab working to ensure that AI and future economic systems are built and deployed with genuine human objectives at their core, through research, open-source tools, and broad public input.
https://ai.objectives.institute/
AI Policy Bulletin
AI Policy Bulletin is a peer-reviewed digital magazine publishing policy-relevant perspectives on frontier AI governance, aimed at informing policymakers and the broader AI policy community.
https://www.aipolicybulletin.org/
AI Policy Institute
A research and advocacy nonprofit that conducts public opinion polling on AI risks and advocates for government policies to mitigate catastrophic risks from frontier AI technology.
https://theaipi.org/
AI Prospects
AI Prospects is a Substack publication by K. Eric Drexler exploring how advanced AI will transform society and what strategic options humanity has for navigating this transition safely.
https://aiprospects.substack.com/
AI Risk Explorer (AIRE)
AI Risk Explorer (AIRE) is an online platform that monitors large-scale AI risks across cyber offense, biological risk, loss of control, and manipulation, providing curated evidence and actionable insights for policymakers and researchers.
https://www.airiskexplorer.com/
AI Risk Mitigation Fund
A nonprofit grantmaking fund that supports technical AI safety research, AI governance policy, and training programs for new AI safety researchers to reduce catastrophic risks from advanced AI.
https://www.airiskfund.com/
AI Risk: Why Care?
An interactive public education tool that explains AI existential risk to general audiences using a personalized AI chatbot, operated by the AI Governance and Safety Institute (AIGSI) and AI Safety and Governance Fund (AISGF).
https://whycare.aisgf.us/
AI Safety Argentina
AI Safety Argentina (AISAR) is a 6-month research scholarship program based at the University of Buenos Aires that connects Argentine students with mentors to conduct AI safety research.
https://scholarship.aisafety.ar/en/
AI Safety Asia (AISA)
A global non-profit building AI safety governance capacity across Asia through policy research, training, and multi-stakeholder dialogue, starting in Southeast Asia.
https://www.aisafety.asia/
AI Safety at the Frontier
A monthly newsletter curating and summarizing the most important AI safety research papers focused on frontier models, written by Johannes Gasteiger of Anthropic's Alignment Science team.
https://aisafetyfrontier.substack.com/
AI Safety Australia and New Zealand
AI Safety ANZ builds and supports a community of AI safety researchers and advocates across Australia and New Zealand, empowering careers and local field-building to mitigate catastrophic AI risks.
https://www.aisafetyanz.com.au/
AI Safety Awareness Project
A 501(c)(3) nonprofit that educates the American public and traditional societal institutions about AI safety through free in-person workshops nationwide.
https://aisafetyawarenessproject.org/
AI Safety Camp
A non-profit initiative that runs an online, part-time research program connecting early-career researchers with experienced AI safety mentors to collaborate on concrete projects aimed at reducing existential risk from AI.
https://www.aisafety.camp/
AI Safety Communications Centre
The AI Safety Communications Centre (AISCC) connects journalists to AI safety experts and resources, helping improve media coverage of AI risks and safety issues.
https://aiscc.org/
AI Safety Events & Training
A weekly newsletter listing newly announced AI safety events and training programs, both online and in-person.
https://aisafetyeventsandtraining.substack.com/
AI Safety for Fleshy Humans
An interactive educational web series by Nicky Case explaining AI safety concepts to general audiences through accessible comics and interactive explainers.
https://aisafety.dance/
AI Safety Foundation
A Canadian registered charity that increases public and scientific awareness of AI's catastrophic risks through education and research.
https://www.aisfoundation.ai/
AI Safety Funding
A newsletter listing newly announced funding opportunities for individuals and organizations working to reduce existential risk from AI.
https://aisafetyfunding.substack.com/
AI Safety Hub
AI Safety Hub was a UK-based field-building organization that ran the Safety Labs research programme, matching early-career researchers with experienced AI safety mentors to produce publishable research.
https://www.aisafetyhub.org/
AI Safety Hungary
AI Safety Hungary is a Budapest-based nonprofit that runs educational programs and career support to help Hungarian students and professionals enter the AI safety field.
https://www.aishungary.com/
AI Safety in China
A bi-weekly newsletter by Concordia AI covering technical AI safety research, governance, and policy developments in China, aimed at bridging the knowledge gap between China's AI safety ecosystem and the global community.
https://aisafetychina.substack.com/
AI Safety Initiative at Georgia Tech (AISI)
AISI is a student-led community at Georgia Tech working to ensure AI is developed safely, running fellowships, research projects, and policy programs across technical and governance tracks.
https://www.aisi.dev/
AI Safety Map Anki Deck
An Anki flashcard deck of 167 cards covering the main organizations, projects, and programs in the AI safety ecosystem, designed for learning via spaced repetition.
https://ankiweb.net/shared/info/1103716634
AI Safety Nudge Competition
A one-time behavioral nudge initiative run in October 2022 that used a prize draw to encourage people to complete self-defined AI safety goals and overcome procrastination.

AI Safety Quest
AI Safety Quest is a fully volunteer-based organization that helps people navigate the AI safety ecosystem through personalized advising calls, cohort learning, and mentorship matching.
https://aisafety.quest/
AI Safety Support
AI Safety Support was a community-building project that worked to reduce existential risk from AI by providing career resources, networking, mentorship, and operational support to early-career, independent, and transitioning AI safety researchers.
https://www.aisafetysupport.org/
AI Safety Tactical Opportunities Fund (AISTOF)
A pooled multi-donor charitable fund that rapidly deploys grants to reduce catastrophic risks from advanced AI, covering technical alignment, governance, and evaluations.
https://manifoldmarkets.notion.site/AI-Safety-Tactical-Opportunities-Fund-AISTOF-1bf54492ea7a80fcb088fd431b6b10b4
AI Safety Takes
A personal Substack newsletter by AI safety researcher Daniel Paleka covering recent AI safety research papers and technical developments.
https://newsletter.danielpaleka.com/
AI Safety Videos
A curated resource page listing where to find AI safety video content, maintained by the AISafety.info project (Stampy's AI Safety Info), founded by Rob Miles.
https://aisafety.info/questions/2222
AI Scholarships
A scholarship program through which Open Philanthropy provided direct funding support to individual AI safety researchers for tuition, living expenses, and related costs during their degree programs.

AI Standards Lab
An independent nonprofit and affiliated research company dedicated to accelerating the development of AI safety standards and risk management frameworks, with a focus on EU AI Act standards and global AI safety engineering.
https://aistandardslab.org/
AI Timeline
An open-source interactive timeline of major AI events from the 2020s, documenting the road to AGI. No longer actively maintained.
https://ai-timeline.org/
AI Watch
A database and website maintained by Issa Rice that tracks people, organizations, and products in the AI safety and alignment field.
https://aiwatch.issarice.com/
AI X-risk Research Podcast (AXRP)
AXRP is a podcast hosted by Daniel Filan featuring in-depth interviews with AI safety researchers about their published work and how it might reduce the risk of AI causing an existential catastrophe.
https://axrp.net/
AI-Plans
AI-Plans is a platform for discovering, critiquing, and advancing AI alignment strategies, hosting a contributable compendium of alignment plans and running community research events.
https://ai-plans.com/
AI: Futures and Responsibility Programme
A collaborative research programme between the Leverhulme Centre for the Future of Intelligence and the Centre for the Study of Existential Risk at the University of Cambridge, focused on the global risks, governance, and long-term safety of advanced AI.
https://www.ai-far.org/
AI2050
AI2050 is a philanthropic initiative of Schmidt Sciences that funds exceptional researchers worldwide working on the hard problems required for AI to be hugely beneficial to society by 2050.
https://ai2050.schmidtsciences.org/
AISafety.com
AISafety.com is a curated resource hub for the AI safety ecosystem, providing newcomers and practitioners with organized directories of courses, communities, events, jobs, funders, and more. It is the flagship platform of Alignment Ecosystem Development (AED), led by Søren Elverlin.
https://www.aisafety.com/
AISafety.info
A comprehensive, community-written interactive FAQ about AI existential safety, founded by Rob Miles and hosted at aisafety.info.
https://aisafety.info/
AISafety.info (Robert Miles)
Robert Miles provides AI safety education through YouTube videos and an interactive FAQ website (aisafety.info), making alignment concepts accessible to broad audiences.
https://aisafety.info/
Algorithmic Research Group
An AI safety research lab studying how software and industrial systems recursively improve themselves, building benchmarks and evaluation frameworks to understand the behavior and limits of self-improving AI systems.
https://www.algorithmicresearchgroup.com/
Ali Merali
Ali Merali is an Economics PhD candidate at Yale University researching how AI model scaling affects real-world economic productivity. He received Open Philanthropy funding to run randomized controlled trials estimating the economic impact of LLM scale.
https://economics.yale.edu/people/ali-merali
Aligned AI
Oxford-based AI safety company developing concept extrapolation technology to enable AI systems to generalize human values and intent beyond their training data.
https://buildaligned.ai/
Alignment Ecosystem Development
An AI safety field-building nonprofit that builds and maintains digital infrastructure to grow and improve the AI safety ecosystem, including AISafety.com, AISafety.info, and approximately 15 other projects.
https://alignment.dev/
Alignment of Complex Systems Research Group
An interdisciplinary research group based at Charles University in Prague studying multi-agent systems composed of humans and advanced AI, focused on understanding and mitigating systemic risks from AI integration into human institutions.
https://acsresearch.org/
Alignment Research Center
A nonprofit research organization focused on theoretical AI alignment research, developing formal mechanistic explanations of neural network behavior to ensure future ML systems are aligned with human interests.
https://www.alignment.org/
Alignment Research Engineer Accelerator
ARENA is a 4-5 week intensive ML engineering bootcamp in London that trains technically skilled individuals to contribute to AI safety research. It covers deep learning fundamentals, mechanistic interpretability, reinforcement learning, and model evaluations.
https://www.arena.education/
All-Party Parliamentary Group for Future Generations
A cross-party group in the UK Parliament that works to make the welfare of future generations salient to policymakers, combating political short-termism on issues like catastrophic risks, climate change, and emerging technology.
https://www.appgfuturegenerations.com/
Alliance to Feed the Earth in Disasters (ALLFED)
ALLFED is a nonprofit that researches and develops resilient food solutions to ensure humanity can be fed during global catastrophes such as nuclear winter, supervolcano eruptions, or events that disable critical infrastructure.
https://allfed.info/
Americans for Responsible Innovation
Americans for Responsible Innovation (ARI) is a bipartisan 501(c)(4) nonprofit that advocates for thoughtful AI governance frameworks in the United States. It works to help policymakers develop policies that protect the public from AI-related harms while maintaining American technological leadership.
https://ari.us/
Amodo Design
A Sheffield-based hardware engineering consultancy focused on differential technology development across AI safety, biosecurity, humane tech, and accelerating science.
https://amododesign.com/
Amplifying AI Safety
An AI safety project fiscally sponsored by Epistea, z.s., a Czech umbrella organization for existential security and epistemics projects based in Prague.
An Overview of the AI Safety Funding Situation
A research article by Stephen McAleese providing a comprehensive overview of the AI safety funding landscape, published on the EA Forum and LessWrong.
https://forum.effectivealtruism.org/posts/XdhwXppfqrpPL2YDX/an-overview-of-the-ai-safety-funding-situation
Andrew Lohn
Andrew Lohn is a Senior Fellow at Georgetown's Center for Security and Emerging Technology (CSET), where he leads the CyberAI Project examining the intersection of artificial intelligence and cybersecurity. His research focuses on how AI shifts the cyber offense-defense balance and the security vulnerabilities inherent in AI systems.
https://cset.georgetown.edu/staff/andrew-lohn/
Angela Aristizábal
Angela Aristizábal is a Colombian researcher and Program Director of the ITAM AI Futures Fellowship, focused on building research capacity in Latin America around catastrophic and existential risks from advanced AI.
https://aifuturesfellowship.org/
Anthropic
Anthropic is an AI safety company and public benefit corporation building reliable, interpretable, and steerable AI systems, best known for developing the Claude family of large language models.
https://www.anthropic.com/
Apart Research
An independent AI safety research organization that accelerates AI safety talent development and produces impactful research through hackathons, structured fellowships, and collaborative research programs.
https://apartresearch.com/
Apollo Fellowship
An EA-aligned summer debate camp at Oxford University for high school and first-year college students, combining competitive debate training with Effective Altruism concepts including AI safety and global catastrophic risk.
https://www.apollofellowship.com/
Apollo Research
Apollo Research is an AI safety organization that develops evaluations and tools to detect and mitigate deceptive alignment (scheming) in frontier AI systems.
https://www.apolloresearch.ai/
Applied Research Laboratory for Intelligence and Security
ARLIS is the University of Maryland's Department of Defense University Affiliated Research Center (UARC) dedicated to intelligence and national security, combining AI, behavioral science, and systems engineering to address complex security challenges.
https://www.arlis.umd.edu/
Arb Research
Arb Research is a small research consultancy producing rigorous, independent analysis on AI safety, forecasting, and related topics for funders and organizations in the effective altruism ecosystem.
https://arbresearch.com/
Arbital
Arbital was a hybrid blogging and wiki platform designed to make complex explanations of AI alignment and mathematics more accessible, founded by Alexei Andreev and Eliezer Yudkowsky. The project was shut down in 2017 and its content was later migrated to LessWrong.
https://arbital.greaterwrong.com/explore/ai_alignment/
Arcadia Impact
Arcadia Impact is a London-based nonprofit that empowers individuals to pursue high-impact careers tackling global challenges, with a focus on AI safety research, governance, and talent development.
https://www.arcadiaimpact.org/
Arizona State University
Arizona State University is a major public research university and one of the largest in the United States, with significant programs in AI governance, responsible innovation, and governance of emerging technologies.
https://www.asu.edu/
Arkose
Arkose was an AI safety field-building nonprofit that supported experienced machine learning professionals to become involved in technical AI safety research through personalized advisory calls, curated resources, and expert introductions. The organization closed in June 2025 due to lack of funding.
https://arkose.org/
Ashgro
Ashgro is a 501(c)(3) public charity that provides Model A fiscal sponsorship to AI safety projects, handling their accounting, HR, legal compliance, and grant management so project leads can focus on their mission.
https://www.ashgro.org/
Association for Long Term Existence and Resilience (ALTER)
An Israeli academic research and advocacy nonprofit focused on reducing catastrophic and existential risks through AI safety research, biosecurity policy, and standards development.
https://alter.org.il/
Astera Neuro & AGI
Astera's Neuro & AGI program is an in-house research effort that draws on neuroscience to develop safe and aligned artificial general intelligence, operating under the Astera Institute founded by Jed McCaleb.
https://astera.org/neuro-agi/
Astral Codex Ten (ACX)
Astral Codex Ten is Scott Alexander's Substack blog covering reasoning, science, AI, medicine, ethics, and effective altruism, and the home of the ACX Grants program that funds high-impact projects.
https://www.astralcodexten.com/
Astralis Foundation
A European multi-donor foundation that seeds and scales high-impact initiatives for the secure and beneficial development of AI. Astralis unites funders, experts, and entrepreneurs to steer AI toward beneficial outcomes through grantmaking, strategic guidance, and network-building.
https://astralisfoundation.org/
Athena Mentorship Program for Women
Athena is a hybrid mentorship program for women in technical AI alignment research, combining remote mentorship with an in-person retreat to build skills, networks, and representation in the field.
https://researchathena.org/
Atlas Computing
Atlas Computing is a 501(c)(3) nonprofit that maps neglected AI safety risks, sources expert founders, and prototypes solutions to scale human control over advanced AI capabilities.
https://atlascomputing.org/
Augur
An AI research consultancy providing foresight and strategy across the frontier AI supply chain, focusing on hardware and software supply chains, strategic AI use cases, and control and ownership of AI systems.
https://augurai.net/
Balsa Policy Institute Inc
A nonpartisan 501(c)(3) nonprofit think tank that funds academic research, drafts legislation, and builds the evidence base for neglected federal policy reforms, with a primary focus on repealing the Jones Act.
https://www.balsaresearch.com/
Basis Research Institute
A nonprofit applied research organization building universal reasoning engines grounded in probabilistic programming and causal inference to advance society's ability to solve intractable scientific and societal problems.
https://www.basis.ai/
Beijing Institute of AI Safety and Governance (Beijing-AISI)
Beijing-AISI is a Beijing municipal government-backed research institute dedicated to AI safety evaluations, governance frameworks, and safety standards for large language models and AI systems.
https://beijing.ai-safety-and-governance.institute/
Beneficial AI Foundation (BAIF)
A US nonprofit founded by Max Tegmark and Meia Chita-Tegmark to place AI safety on a solid quantitative foundation. BAIF funds research, fellowships, and university partnerships aimed at ensuring advanced AI systems remain safe and beneficial.
https://www.beneficialaifoundation.org/
Berkeley Center for Responsible, Decentralized Intelligence
UC Berkeley's multidisciplinary research center advancing AI safety, agentic AI, and decentralization technology to empower a responsible digital economy.
https://rdi.berkeley.edu/
Berkeley Existential Risk Initiative
A US-based public charity that collaborates with university research groups working to reduce existential risk by providing them with free operational services and support.
https://www.existence.org/
Berryville Institute of Machine Learning
BIML is an independent nonprofit research institute focused on machine learning security, specifically the work of building security into ML systems at the design level.
https://berryvilleiml.com/
BlueDot Impact
BlueDot Impact is a nonprofit talent accelerator that runs free cohort-based courses to train professionals in AI safety, AI governance, and biosecurity. It is the leading pipeline for building the workforce needed to safely navigate transformative AI.
https://bluedot.org/
Boston Astral Codex Ten
A local rationalist community meetup group in the Boston area organized around Scott Alexander's Astral Codex Ten blog. The group hosts informal social gatherings and occasional structured discussions in Cambridge and Somerville.
https://linktr.ee/bostonacx
Boston University
Boston University is a large private research university in Boston, Massachusetts with over 37,000 students, 17 schools and colleges, and more than $554 million in annual research expenditures. It hosts AI safety and alignment student programs and has received Open Philanthropy funding for AI safety-relevant research.
https://www.bu.edu/
Bounded Regret
Bounded Regret is the personal research blog of Jacob Steinhardt, Associate Professor at UC Berkeley, covering AI safety, machine learning, forecasting, and philosophy.
https://bounded-regret.ghost.io/
Brian Christian
Brian Christian is an American author and researcher whose books — including The Alignment Problem (2020) — have helped communicate AI safety and alignment challenges to broad audiences. He is also pursuing a DPhil in psychology at Oxford, researching human preferences to inform AI alignment.
https://brianchristian.org/
Brown University AI Governance Lab
A research center at Brown University focused on AI governance, policy, and socially responsible computing, housed within the Center for Technological Responsibility, Reimagination and Redesign (CNTR) at the Data Science Institute.
https://cntr.brown.edu/
Cadenza Labs
Cadenza Labs is an AI safety research group focused on interpretability and LLM lie detection. It was co-founded by a SERI MATS research team that received joint LTFF funding in 2023 to investigate dishonesty detection in advanced AI systems, building on the Discovering Latent Knowledge paper.
https://cadenzalabs.org/
Cambridge AI Safety Hub
A Cambridge-based hub bringing together students and professionals to reduce existential risks from advanced AI systems through education, research mentorship, and community-building.
https://caish.org/
Cambridge Boston Alignment Initiative
CBAI is a Cambridge, MA-based 501(c)(3) nonprofit that runs research fellowships and technical bootcamps to grow the pipeline of AI safety researchers, and fiscally sponsors student AI safety groups at Harvard and MIT.
https://www.cbai.ai/
Cambridge Effective Altruism
Cambridge Effective Altruism is a community group at the University of Cambridge that helps students and local residents explore how to have the most positive impact through their careers and charitable giving. It runs fellowships, discussion groups, and career support programs, and was the seedbed for BlueDot Impact.
https://www.eacambridge.org/
Campaign for AI Safety (CAS)
An Australian grassroots advocacy organization founded in 2023 to increase public understanding of AI existential risk and push for strong laws to halt dangerous AI development. It merged with the Existential Risk Observatory in 2024.
https://campaignforaisafety.org/
Can We Secure AI With Formal Methods?
A newsletter by Quinn Dougherty that bridges formal methods researchers and AI security practitioners, covering developments in formal verification applied to AI safety.
https://newsletter.for-all.dev/
Carnegie Endowment for International Peace
A major Washington, DC-based think tank founded in 1910 that produces independent policy research on international security, democracy, and governance, with a growing program on AI safety and technology governance.
https://carnegieendowment.org/
Carnegie Mellon University
Carnegie Mellon University is a leading private research university in Pittsburgh, Pennsylvania, widely regarded as one of the world's top institutions for AI and computer science research. It hosts multiple AI safety and governance programs spanning technical research, policy, and applied AI security.
https://www.cmu.edu/
Catalyze Impact
A global nonprofit incubator that helps founders launch and scale AI safety, security, and resilience organizations by providing mentorship, co-founder matching, and access to seed funding networks.
https://catalyze-impact.org/
Catherine Brewer
AI governance researcher and grantmaker working on AI safety capacity-building, previously funded by Open Philanthropy to support Oxford's AI safety community.
https://catherinebrewer.github.io/
Cavendish Labs
A 501(c)(3) nonprofit research organization in Cavendish, Vermont focused on AI safety and pandemic prevention, operating as a residential research community where researchers live and work together.
https://cavendishlabs.org/
Center for a New American Security
CNAS is a Washington, DC-based bipartisan think tank that develops national security and defense policy, with a dedicated Technology & National Security program focused on AI, compute governance, and great power competition.
https://www.cnas.org/
Center for AI Policy
A nonpartisan advocacy organization that worked with the US Congress to develop and promote legislation addressing catastrophic risks from advanced AI systems.
https://www.centeraipolicy.org/
Center for AI Risk Management & Alignment (CARMA)
CARMA is a research and policy think tank working to lower the risks to humanity and the biosphere from transformative AI through integrated risk management, policy research, and technical safety work.
https://carma.org/
Center for AI Safety
A nonprofit research organization that works to reduce societal-scale risks from artificial intelligence through safety research, field-building, and advocacy.
https://safe.ai/
Center for AI Safety Action Fund
The 501(c)(4) advocacy arm of the Center for AI Safety, dedicated to advancing bipartisan public policies that maintain U.S. leadership in AI and protect against AI-related national security threats.
https://action.safe.ai/
Center for AI Standards and Innovation (CAISI)
CAISI is the U.S. government's primary point of contact for AI testing and research within NIST, focused on developing voluntary AI standards and conducting evaluations of frontier AI systems. It was renamed from the U.S. AI Safety Institute in June 2025.
https://www.nist.gov/caisi
Center for Applied Rationality
A nonprofit that runs immersive workshops teaching rationality techniques drawn from cognitive science, behavioral economics, and decision theory, with a focus on improving thinking for people working on high-impact problems including AI safety.
https://www.rationality.org/
Center for Applied Utilitarianism
A London-based AI strategy think tank led by Dr. Hauke Hillebrandt, conducting independent research on AI policy, AI governance, and global catastrophic risks.

Center for Human-Compatible AI
A research center at UC Berkeley dedicated to developing the foundations of provably beneficial AI systems, ensuring that advanced AI remains aligned with human values and preferences.
https://humancompatible.ai/
Center for Humane Technology
A nonprofit dedicated to ensuring that today's most consequential technologies, including AI and social media, actually serve humanity by exposing misaligned incentives and advocating for systemic change through policy, litigation, and public awareness.
https://www.humanetech.com/
Center for International Security and Cooperation
Stanford University's interdisciplinary research center tackling critical security challenges, including AI governance, nuclear risk, biosecurity, and emerging technology policy.
https://cisac.fsi.stanford.edu/
Center for Law and AI Risk
CLAIR is building the field of Law and AI Safety, producing and promoting legal scholarship on reducing catastrophic and existential risks from advanced artificial intelligence.
https://clair-ai.org/
Center for Long-Term Cybersecurity
UC Berkeley's Center for Long-Term Cybersecurity (CLTC) is a research and collaboration hub advancing future-oriented cybersecurity research, policy, and education, with a growing focus on AI safety governance and risk management for frontier AI systems.
https://cltc.berkeley.edu/
Center for Responsible Innovation
A Washington, DC-based 501(c)(3) nonprofit that conducts AI policy research, develops actionable legislative proposals, and educates U.S. policymakers on responsible innovation. It is the research and education arm of the Americans for Responsible Innovation family of organizations.
https://www.centerforresponsibleinnovation.us/
Center for Security and Emerging Technology
Georgetown University think tank providing decision-makers with data-driven analysis on the security implications of emerging technologies.
https://cset.georgetown.edu/
Center for Strategic and International Studies
CSIS is a major Washington, DC-based bipartisan think tank that conducts policy research on national security, international affairs, and emerging technologies including AI. Its Wadhwani AI Center focuses specifically on the governance, geopolitics, and national security implications of artificial intelligence.
https://www.csis.org/
Center on Long-Term Risk
A research organization focused on reducing risks of astronomical suffering (s-risks) from advanced AI, with emphasis on conflict prevention and cooperation between transformative AI systems.
https://longtermrisk.org/
Centre for AI Security and Access
CASA is a research organization working to ensure the benefits of AI can be widely and equitably distributed globally without compromising essential security, with a focus on Global Majority countries.
https://casa-ai.org/
Centre for Effective Altruism (CEA)
CEA builds and supports the global effective altruism community through conferences, online platforms, local group support, grantmaking, and community health programs, helping people use evidence and reason to address the world's most pressing problems.
https://www.centreforeffectivealtruism.org/
Centre for Enabling EA Learning & Research
CEEALAR (formerly the EA Hotel) is a residential fellowship in Blackpool, UK that provides free or subsidized accommodation, meals, and stipends to individuals working on effective altruism projects, with a focus on AI safety research.
https://www.ceealar.org/
Centre for Future Generations (CFG)
CFG is an independent think-and-do tank based in Brussels that helps policymakers anticipate and govern powerful emerging technologies including advanced AI, biotechnology, climate interventions, and neurotechnology.
https://cfg.eu/
Centre for International Governance Innovation
CIGI is an independent, non-partisan Canadian think tank that produces research and policy recommendations on international governance challenges, with a dedicated program focused on managing global-scale risks from advanced AI systems.
https://www.cigionline.org/
Centre for Long-Term Resilience (CLTR)
A UK-based independent think tank working to transform global resilience to extreme risks, particularly in AI safety and biosecurity.
https://www.longtermresilience.org/
Centre for the Governance of AI
GovAI is an independent nonprofit research organization dedicated to helping decision-makers navigate the transition to a world with advanced AI, by producing rigorous research on AI governance and fostering talent in the field.
https://www.governance.ai/
Centre for the Study of Existential Risk
An interdisciplinary research centre at the University of Cambridge dedicated to studying and mitigating existential and global catastrophic risks, with major focus areas in AI safety, biological risks, and environmental risks.
https://www.cser.ac.uk/
CeSIA
The French Center for AI Safety (Centre pour la Sécurité de l'IA) is a Paris-based non-profit think tank and research center working to reduce risks from artificial intelligence through education, technical research, and policy advocacy in France and Europe.
https://www.cesia.org/en
China AI Safety & Development Association (CnAISDA)
China's self-described counterpart to the AI Safety Institutes of other countries, launched in February 2025 to represent China in international AI safety governance conversations. It operates as a networked coalition of eight leading Chinese research institutions rather than a standalone organization.
https://cnaisi.cn/
ChinaTalk
ChinaTalk is a podcast and newsletter covering China, technology, and US policy, founded by Jordan Schneider. It serves as a hybrid think tank and media outlet providing non-partisan analysis on US-China relations and emerging technology.
https://www.chinatalk.media/
Civic AI Security Program (CivAI)
A nonprofit that educates policymakers, civil society, and the public about AI capabilities and dangers through interactive live software demonstrations.
https://civai.org/
Coefficient Giving
Coefficient Giving (formerly Open Philanthropy) is a major philanthropic grantmaker that directs funding toward high-impact causes including AI safety, global health, biosecurity, and farm animal welfare. It is the primary grantmaking vehicle for Dustin Moskovitz and Cari Tuna's philanthropy through Good Ventures.
https://coefficientgiving.org/
Cold Takes
Cold Takes is Holden Karnofsky's personal blog covering AI safety, longtermism, and existential risk, most notably the 'Most Important Century' thesis arguing that transformative AI makes the 21st century uniquely pivotal for humanity's long-run trajectory.
https://www.cold-takes.com/
Collective Action for Existential Safety (CAES)
Collective Action for Existential Safety (CAES) catalyzes coordinated action to reduce existential risks from AI, nuclear weapons, and engineered pandemics. It is an initiative of the Center for Existential Safety, a newly formed U.S. nonprofit.
https://existentialsafety.org/
Collective Intelligence Project
A nonprofit R&D lab that develops collective intelligence tools and governance models to steer transformative AI development toward better outcomes through democratic public input.
https://www.cip.org/
Collider
Collider is a coworking and community space in New York City for AI safety and other high-impact professionals to work, collaborate, and convene.
https://collider.nyc/
Columbia University
Columbia University is an Ivy League research university in New York City with significant AI safety, governance, and policy research activity across multiple schools and centers.
https://www.columbia.edu/
Compassion in Machine Learning
CaML researches how synthetic pretraining data can shift AI systems towards greater compassion and moral open-mindedness regarding all sentient beings, including animals and potential digital minds.
https://www.compassionml.com/
Computational and Biological Learning Lab (CBL)
A research group at the University of Cambridge's Department of Engineering that uses engineering approaches to understand the brain and develop artificial learning systems, with strengths in Bayesian and probabilistic machine learning.
https://cbl.eng.cam.ac.uk/
Computational Rational Agents Laboratory (CORAL)
A research group developing mathematical theory for computationally bounded agents to provide rigorous, scalable solutions to the AI alignment problem.
https://coral-research.org/
Conjecture
London-based for-profit AI safety company working on Cognitive Emulation, an approach to building controllable, bounded AI systems that reason transparently.
https://www.conjecture.dev/
Consequence Foundries
An early-stage project in the existential risk reduction space that received a $168,000 general support grant from Jaan Tallinn via the Survival and Flourishing Fund in 2022, with Convergence Analysis serving as fiscal sponsor.

Constellation
Constellation is a nonprofit research center in Berkeley that supports AI safety work through fellowships, an incubator, and a collaborative coworking space hosting researchers and organizations across the field.
https://www.constellation.org/
Contramont Research
Contramont Research is a nonprofit AI safety lab that studies where safety and security evaluation methods break down, using cryptographic model organisms to expose fundamental limitations of existing techniques.
https://contramont.org/
ControlAI
ControlAI is a nonprofit advocacy organization working to keep humanity in control of advanced AI by pushing governments to prohibit the development of artificial superintelligence.
https://controlai.com/
Convergence Analysis
An international AI x-risk strategy think tank that conducts scenario research and governance analysis to mitigate risks from transformative AI technologies.
https://www.convergenceanalysis.org/
Cooperative AI Foundation
The Cooperative AI Foundation (CAIF) is a UK-registered charity that funds and supports research to improve the cooperative intelligence of advanced AI systems for the benefit of humanity.
https://www.cooperativeai.com/
Coordinal Research
Coordinal Research builds automation tools to accelerate AI safety and alignment research. The organization develops AI-powered scaffolds and workflows that help researchers conduct alignment experiments faster and at greater scale.
https://coordinal.org/
Coordination Project
A small project fiscally sponsored by the Center for Applied Rationality (CFAR), funded by SFF for general support in the 2023-H2 grant round.
Cornell University
A private Ivy League research university in Ithaca, New York, with multiple faculty and labs engaged in AI safety, alignment, and responsible AI research, serving as the institutional home and fiscal recipient for SFF-funded work.
https://www.cornell.edu/
Cyborgism
Cyborgism is an AI safety research agenda and community proposing that human-AI collaboration systems — where humans are cognitively augmented by LLMs rather than replaced by autonomous AI agents — can accelerate alignment research while preserving human control.
https://cyborgism.wiki/
Czech Association for Effective Altruism (CZEA)
Czech national organization promoting effective altruism through community building, events, and project incubation, with a particular focus on AI safety and high-impact careers.
https://efektivni-altruismus.cz/
Daniel Dewey
Independent AI safety researcher and former Open Philanthropy program officer, focused on existential risks from advanced AI and deep learning.
https://www.danieldewey.net/
Daniel Kang
Assistant Professor at UIUC researching dangerous capabilities of AI agents, with a focus on cybersecurity benchmarks and AI safety evaluations used by frontier labs and governments.
https://ddkang.github.io/
Decode Research
An AI safety research infrastructure nonprofit that builds open-source tools and platforms to accelerate mechanistic interpretability research, including Neuronpedia and SAELens.
https://www.decoderesearch.org/
DeepSeek
DeepSeek is a Chinese AI research laboratory founded in 2023 that develops frontier large language models, including the DeepSeek-V3 and DeepSeek-R1 series, notable for achieving competitive performance at dramatically lower reported compute costs.
https://www.deepseek.com/
Dioptra
Dioptra is a volunteer AI safety research community founded by Joshua Clymer that builds evaluations for advanced AI systems.
Distill Prize for Clarity in Machine Learning
An annual award of $10,000 recognizing outstanding work communicating and clarifying ideas in machine learning. Logistics are administered by the Open Philanthropy Project.
https://distill.pub/prize/
Don't Worry About the Vase
Don't Worry About the Vase is Zvi Mowshowitz's influential blog and Substack newsletter covering AI safety, AI developments, rationality, and policy, with over 32,000 subscribers.
https://thezvi.substack.com/
Donations List Website
A public database tracking philanthropic donations by individuals and foundations in the effective altruism and rationality communities. It is a personal project by Vipul Naik, hosted at donations.vipulnaik.com.
https://donations.vipulnaik.com/
Doom Debates
Doom Debates is a podcast and debate show hosted by Liron Shapira focused on high-stakes debates about AI existential risk. Its mission is to raise mainstream awareness of potential extinction from AGI and build social infrastructure for high-quality public discourse on the topic.
https://lironshapira.substack.com/
Dovetail
A small agent foundations research group using foundational mathematics to develop rigorous understanding of AI agents and their safety properties.
https://dovetailresearch.org/
Dr Waku
Dr Waku is a pseudonymous AI safety educator who creates YouTube videos, a Substack newsletter, and other content explaining AI alignment risks and AI security to general audiences.
https://drwaku.substack.com/
Dwarkesh Podcast
A long-form interview podcast by Dwarkesh Patel featuring deeply researched conversations with leading AI researchers, scientists, historians, and economists on topics including AI safety, AGI timelines, and the future of technology.
https://www.dwarkesh.com/
EA Infrastructure Fund
An expert-managed grantmaking fund that supports projects building the effective altruism community's capacity, including community building, prioritization research, epistemic infrastructure, events, and fundraising for effective charities.
https://funds.effectivealtruism.org/funds/ea-community
EA Netherlands
EA Netherlands (Effectief Altruïsme Nederland) is the national effective altruism community-building organization for the Netherlands, running introductory programs, supporting local groups, and hosting major EA events.
https://effectiefaltruisme.nl/en
Earendil
Earendil is a hardware security startup that builds tamper response systems for AI compute infrastructure, including GPU clusters, to support hardware-enabled governance and compliance verification for AI development.
https://earendil.ai/
Economics of Transformative AI
A research initiative at the University of Virginia, led by Professor Anton Korinek, that produces and disseminates cutting-edge economic research to help society navigate the transition to transformative AI and guide it toward shared prosperity.
https://www.econtai.org/
Effective Altruism Domains
EA Domains (ea.domains) is a project that acquires and holds internet domain names relevant to effective altruism, AI safety, and existential risk, then offers them free to legitimate EA-aligned projects to prevent domain squatting.
https://ea.domains/
Effective Altruism Geneva
Effective Altruism Geneva is a Swiss nonprofit community group based in Geneva that builds a local network of effective altruists and fosters high-impact careers in AI safety, policy, and global health.
https://eageneva.org/
Effective Altruism Israel
Effective Altruism Israel is a Tel Aviv-based nonprofit that builds and supports the Israeli effective altruism community, helping people maximize their social impact through career guidance, education, and effective giving programs.
https://www.effective-altruism.org.il/
Effective Institutions Project
A global working group that seeks out and incubates high-impact strategies to improve institutional decision-making, with a primary focus on AI governance and existential risk reduction.
https://effectiveinstitutionsproject.org/
Effective Thesis
A nonprofit that helps university students choose high-impact thesis topics and launch research careers focused on the world's most pressing problems, including AI safety, biosecurity, animal welfare, and global health.
https://www.effectivethesis.org/
Effective Ventures Foundation
Effective Ventures Foundation (UK) is the umbrella charity that provided fiscal sponsorship and operational infrastructure for major effective altruism organizations including 80,000 Hours, Giving What We Can, and the Centre for Effective Altruism. It is currently winding down as its sponsored projects spin out to become independent entities.
https://ev.org/
Effektiv Altruism Sverige (EA Sweden)
Effective Altruism Sweden is a Stockholm-based nonprofit that builds the Swedish effective altruism community through career coaching, fellowship programs, and project incubation. Founded in 2016, it is one of the most established national EA organizations globally.
https://www.effektivaltruism.org/
Egg Syntax (Jesse Davis)
Independent AI safety and alignment researcher focused on technical research to reduce existential risk from advanced AI, particularly around LLM interpretability and the nature of LLM internal representations.
https://www.novonon.com/
Egor Krasheninnikov
AI safety researcher who worked at the Krueger AI Safety Lab at the University of Cambridge, focusing on training helpful AI systems and understanding out-of-context reasoning in large language models.
Eisenstat Research Directions
Sam Eisenstat's independent AI alignment research program, focused on mathematical foundations of agency, logical uncertainty, concept formation (condensation theory), and causal modeling at different levels of abstraction.
https://www.sameisenstat.net/
Electronic Frontier Foundation
EFF is the leading nonprofit defending civil liberties in the digital world, championing user privacy, free expression, and innovation through litigation, policy work, and technology development.
https://www.eff.org/
EleutherAI
EleutherAI is a nonprofit AI research institute focused on interpretability, alignment, and open-source foundation model research. It is best known for creating GPT-NeoX, the Pythia model suite, and The Pile dataset.
https://www.eleuther.ai/
ELLIS Institute Tübingen
Europe's first ELLIS Institute, based in Tübingen, Germany, conducting pioneering fundamental AI research with dedicated groups in AI safety, alignment, and robust machine learning.
https://institute-tue.ellis.eu/
Encode
Youth-led AI policy nonprofit that advances AI safety, governance, and accountability through nonpartisan legislative advocacy and public education, headquartered in Washington, DC.
https://encodeai.org/
Epistea
A Prague-based nonprofit umbrella organization that creates, runs, and supports projects in existential security, epistemics, rationality, and effective altruism, providing fiscal sponsorship, operations infrastructure, and community spaces.
https://epistea.org/
Epistemic Garden
An R&D lab building tools to map how ideas spread online, helping communities understand their information landscape and defend against coordinated manipulation.
https://www.epistemic.garden/
Epoch AI
Epoch AI is a nonprofit research institute that tracks and forecasts the trajectory of artificial intelligence by analyzing trends in compute, data, algorithmic efficiency, and capabilities. It produces leading databases and quantitative models to help policymakers, researchers, and funders understand the pace and impact of AI progress.
https://epoch.ai/
Equilibria Network
Equilibria Network is a collective intelligence research organization studying how coordination mechanisms affect group outcomes, with a focus on multi-agent AI safety and democratic resilience.
https://eq-network.org/
EquiStamp
EquiStamp is a Public Benefit Corporation that provides evaluation implementation, data annotation, red/blue teaming, and operational support so AI safety researchers can focus on research rather than logistics.
https://www.equistamp.com/
ERA
ERA (Existential Risk Alliance) is a Cambridge-based nonprofit running a fully funded annual fellowship to train researchers and entrepreneurs working on AI safety and governance.
https://erafellowship.org/
Ergo Impact
Ergo Impact finds, funds, and scales promising people and solutions to the world's most pressing problems by providing ambitious philanthropists a rigorous, high-leverage approach to deploying capital at scale.
https://ergoimpact.org/
ETH Zürich
ETH Zürich (Swiss Federal Institute of Technology) is one of the world's leading technical universities, hosting several prominent AI safety and security research groups including the SPY Lab and SRI Lab.
https://ethz.ch/
ETH Zurich Foundation (USA)
The US fundraising arm of the ETH Zurich Foundation, enabling American donors to make tax-deductible gifts that support research, teaching, and talent at ETH Zurich in Switzerland.
https://ethz-foundation-usa.org/
EthicsNet Creed.Space
A nonprofit creating crowdsourced datasets of prosocial behaviors to train ethical AI systems, and building the Creed.Space platform for personalized constitutional AI alignment.
https://creed.space/
European AI Office
The EU's official AI regulatory body within the European Commission, responsible for implementing and enforcing the EU AI Act, particularly for general-purpose AI models.
https://digital-strategy.ec.europa.eu/en/policies/ai-office
European Network for AI Safety (ENAIS)
ENAIS connects AI safety researchers, field-builders, and policymakers across Europe to improve coordination and reduce the fragmentation of the continent's AI safety ecosystem.
https://www.enais.co/
Evitable
Evitable is a nonprofit that informs and organizes the public to confront societal-scale risks from AI and put an end to the reckless race to develop superintelligence.
https://evitable.com/
Existential Risk Observatory
A Dutch foundation that works to reduce existential risk by informing the public debate through media engagement, policy advocacy, research, and public events.
https://www.existentialriskobservatory.org/
Explainable
Explainable backs content creators shaping how the world understands AI, running fellowships and campaigns to communicate AI safety research to broader audiences.
https://explainable.media/
FABRIC
A nonprofit educational organization that runs immersive rationality and AI-focused camps for mathematically talented young people, including ESPR, PAIR, and ASPR.
https://www.fabric.camp/
Faculty AI
Faculty AI is a London-based applied AI company that builds decision intelligence products and services for public and private sector clients, with a strong focus on responsible and safe AI deployment.
https://faculty.ai/
FAR AI
FAR.AI is an AI safety research nonprofit that conducts technical research on robustness, alignment, and model evaluation, while building the AI safety field through workshops, fellowships, and grantmaking.
https://www.far.ai/
Flourishing Future Foundation
A 501(c)(3) nonprofit that accelerates neglected approaches to AI alignment by providing researchers with engineering teams, compute resources, and operational infrastructure.
https://www.flourishingfuturefoundation.org/
Forecasting Research Institute
FRI advances the science of forecasting to improve decision-making on high-stakes issues including AI risk, nuclear risk, and biosecurity. It was co-founded by superforecasting pioneer Philip Tetlock.
https://forecastingresearch.org/
Foresight Institute
A nonprofit research organization founded in 1986 that advances frontier science and technology for the benefit of life, with focus areas spanning secure AI, nanotechnology, longevity biotechnology, neurotechnology, and existential hope.
https://foresight.org/
Forethought
A research nonprofit based in Oxford, UK, focused on how to navigate the transition to a world with superintelligent AI systems, tackling neglected questions in AI macrostrategy.
https://www.forethought.org/
Formation Research
Formation Research is a UK-based not-for-profit that researches lock-in risk — the danger that negative features of the world, such as authoritarian power structures or AI-enabled totalitarianism, become permanently entrenched — and develops interventions to minimize it.
https://www.formationresearch.com/
Foundation for American Innovation
A center-right tech policy think tank, formerly the Lincoln Network, that bridges Silicon Valley and Washington to advance AI safety policy, technology governance, and pro-innovation reform.
https://www.thefai.org/
Foxglove
A UK-based nonprofit that uses strategic litigation, investigation, and campaigning to hold governments and Big Tech companies accountable for technology-related harms, including discriminatory algorithms, worker exploitation, and data privacy abuses.
https://www.foxglove.org.uk/
Friedrich Schiller University Jena
Friedrich Schiller University Jena is a major German research university that hosts the LAMALab, a research group led by Dr. Kevin Jablonka focused on AI-accelerated materials discovery and LLM benchmarking in chemistry.
https://www.jcsm.uni-jena.de/en/800/jablonka-kevin
From AI to ZI
A Substack blog by PhD mathematician Robert Huben documenting his Open Philanthropy-funded year of AI safety research and writing, covering mechanistic interpretability, AI risk, and related topics.
https://aizi.substack.com/
Frontier AI Safety Research (FAIR)
Argentine nonprofit conducting interdisciplinary research to advance frontier AI safety, embedded within the Laboratory of Innovation and Artificial Intelligence at the University of Buenos Aires.
https://fair-uba.com/
Funding for AI Alignment Projects Working With Deep Learning Systems
A grant program run by Open Philanthropy (now Coefficient Giving) that awarded $16.6 million to AI alignment research projects working with deep learning systems, sourced through a 2021 public RFP.
https://coefficientgiving.org/funds/navigating-transformative-ai/
Future Impact Group (FIG) Fellowship
FIG runs a part-time, remote-first 12-week research fellowship connecting early-to-mid-career researchers with experienced project leads working on AI safety, AI governance, and AI sentience.
https://futureimpact.group/
Future Matters
Future Matters is a nonprofit strategy consultancy and think tank based in Berlin that helps organizations working on climate protection, AI governance, and biosecurity create effective policy and social change.
https://future-matters.org/
Future of Humanity Foundation
A UK-registered charity established in 2020 to support the work of the Future of Humanity Institute at the University of Oxford by hiring researchers and support staff, providing operational support, and disbursing grants. Dissolved in May 2024 following FHI's closure.
Future of Humanity Institute (FHI)
FHI was a pioneering multidisciplinary research institute at the University of Oxford, founded by Nick Bostrom in 2005 to study existential risks and big-picture questions about humanity's long-term future. It closed in April 2024 after 19 years.
https://www.futureofhumanityinstitute.org/
Future of Life Foundation (FLF)
An organizational incubator that launches new nonprofits and projects working to steer transformative technology away from extreme large-scale risks. FLF identifies gaps in the AI safety ecosystem, recruits founders, and provides seed funding and operational support to new ventures.
https://www.flf.org/
Future of Life Institute
A nonprofit organization working to steer transformative technologies -- particularly AI, biotechnology, and nuclear weapons -- away from extreme large-scale risks and towards benefiting life.
https://futureoflife.org/
FutureSearch
FutureSearch is an AI forecasting startup that deploys teams of LLM agents to research, analyze, and forecast across structured data, emphasizing legible reasoning behind predictions.
https://futuresearch.ai/
General Purpose AI Policy Lab
A French nonprofit research organization working alongside government institutions to address the security and international coordination challenges posed by general-purpose AI development.
https://gpaipolicylab.org/
generative.ink
generative.ink is the personal research and creative platform of Janus (also known as "moire" and "@repligate"), a pseudonymous AI safety researcher known for the Simulators framework and the Loom human-AI collaboration tool.
https://generative.ink/
Geneva Centre for Security Policy
The Geneva Centre for Security Policy (GCSP) is an international foundation that advances peace, security, and international cooperation through education, diplomatic dialogue, and policy research. It hosts over 1,100 course participants annually and conducts research on emerging security challenges including AI governance and autonomous weapons.
https://www.gcsp.ch/
Geodesic Research
Geodesic Research is a technical AI safety organization based in Cambridge, UK, focused on implementing and measuring pre- and post-training methods to improve model safety and alignment.
https://www.geodesicresearch.org/
George Mason University
George Mason University is a large public research university in Fairfax, Virginia, notable in the AI safety and governance space for housing the Mercatus Center and for faculty research on AI scenarios and policy.
https://www.gmu.edu/
Georgetown University
Georgetown University is a major private Jesuit research university in Washington, D.C. that hosts several programs relevant to AI safety and governance, including the Center for Security and Emerging Technology (CSET), the McCourt School's Tech & Public Policy program, and the Law School's Institute for Technology Law & Policy.
https://www.georgetown.edu/
GiveWiki
GiveWiki is a crowdsourced charity evaluator and donation recommendation platform that aggregates expert donor track records to surface high-impact philanthropic projects, with a primary focus on AI safety.
https://givewiki.org/
Giving What We Can
Giving What We Can (GWWC) is a community of effective givers that promotes the 10% Pledge, encouraging people to commit at least 10% of their income to the most impactful charities. Founded in 2009, it has grown to over 12,000 members who have collectively donated more than $500 million.
https://www.givingwhatwecan.org/
Global AI Moratorium (GAIM)
Calling on policymakers to implement a global moratorium on large AI training runs until alignment is solved.
https://moratorium.ai/
Global Catastrophic Risk Institute
A nonprofit, nonpartisan think tank founded in 2011 that conducts research and policy work on risks that could significantly harm or destroy human civilization, including AI, nuclear war, climate change, and asteroid impacts.
https://gcri.org/
Global Challenges Project (GCP)
GCP runs intensive three-day residential workshops for university students to explore foundational arguments around risks from advanced AI and biotechnology, helping them identify careers in catastrophic risk reduction.
https://www.globalchallengesproject.org/
Global Partnership on AI (GPAI)
GPAI is an international intergovernmental initiative of 44 member countries that promotes the responsible development and use of artificial intelligence, grounded in human rights, inclusion, and democratic values. In July 2024, GPAI merged with the OECD's AI work under a unified GPAI brand hosted at the OECD in Paris.
https://oecd.ai/en/gpai
Global Priorities Institute (GPI)
GPI was an interdisciplinary research center at the University of Oxford (2018-2025) that conducted foundational academic research on how to do the most good. It used philosophy, economics, and psychology to investigate global priorities and existential risk.
https://www.globalprioritiesinstitute.org/
Global Shield
An international advocacy organization devoted to reducing global catastrophic risk from all threats and hazards, working with governments worldwide to enact policies that address existential and catastrophic risks.
https://www.globalshieldpolicy.org/
GoalsRL
GoalsRL was a one-day academic workshop on goal specifications for reinforcement learning, held in 2018 jointly at ICML, IJCAI, and AAMAS. It brought together researchers to address challenges in reward engineering and explore alternatives to hand-designed scalar rewards.
https://sites.google.com/view/goalsrl
Good Ancestors Policy
An Australian charity that conducts policy research and advocates for government action to reduce catastrophic and existential risks, with a focus on AI safety, pandemic prevention, and disaster preparedness.
https://www.goodancestors.org.au/
Good Impressions
Good Impressions is a grant-funded digital marketing agency that applies for-profit growth techniques to help effective nonprofits, think tanks, and foundations maximize engagement with their work.
https://www.goodimpressionsmedia.com/
Goodfire
Goodfire is an AI interpretability research lab that builds tools to understand and design the internal mechanisms of neural networks. Their flagship product, Ember, gives engineers direct, programmable access to AI model internals.
https://www.goodfire.ai/
Google DeepMind
Google DeepMind is Alphabet's primary AI research lab, formed in 2023 by merging DeepMind and Google Brain, working toward artificial general intelligence that benefits humanity.
https://deepmind.google/
Gradient Institute
Gradient Institute is an independent Australian nonprofit research organisation advancing safe and responsible AI through rigorous science-based research, practical guidance, and policy engagement.
https://www.gradientinstitute.org/
Gray Swan AI
Gray Swan AI is an AI safety and security company that builds tools to assess vulnerabilities in AI deployments and develop more robust, attack-resistant AI models. It was founded in 2024 by Carnegie Mellon University researchers who pioneered automated jailbreaking research.
https://www.grayswan.ai/
Guide Labs
Guide Labs builds interpretable AI systems and foundation models that humans can reliably understand, audit, and steer. Their flagship model, Steerling-8B, is the first inherently interpretable large language model at scale.
https://www.guidelabs.ai/
Halcyon Futures
Halcyon Futures is a nonprofit incubator and grant fund that identifies exceptional leaders and helps them launch ambitious new organizations focused on AI safety and global resilience.
https://halcyonfutures.org/
Harmony Intelligence
Harmony Intelligence is an AI safety research and engineering company that reduces catastrophic AI risk through frontier model evaluations, red teaming, and AI-powered defensive cybersecurity products.
https://www.harmonyintelligence.com/
Harvard University
Harvard University is a leading private research university with several prominent programs advancing AI safety, AI governance, and AI interpretability research, including the Kempner Institute, Berkman Klein Center, and Harvard AI Safety Team.
https://www.harvard.edu/
Hebrew University of Jerusalem
A leading Israeli research university home to the Governance of AI Lab (GOAL), which conducts cross-disciplinary research on AI governance, legal alignment, and the safe development of advanced AI systems.
https://en.huji.ac.il/
Heron
Working to bridge the gap between frontier AI models and the level of cybersecurity they need by connecting professionals to high-leverage opportunities in AI security.
https://www.heronsec.ai/
High Impact Professionals (HIP)
High Impact Professionals (HIP) helps experienced mid-career and senior professionals transition into high-impact roles and commit to effective giving across global health, animal welfare, and global catastrophic risk reduction. Through its Impact Accelerator Program, Talent Directory, and HIP Pledge Club, HIP channels professional talent and financial resources toward the most pressing global problems.
https://www.highimpactprofessionals.org/
HitRecord
Joseph Gordon-Levitt's collaborative media platform, which established a dedicated AI safety arm (HitRecord AI Safety Project LLC and AI Safety Digital Media Fund) to use storytelling and public engagement to address AI risks.
https://hitrecord.org/
Hofvarpnir Studios
Hofvarpnir Studios is a nonprofit that builds and maintains GPU compute clusters to support academic AI safety research. It provides high-performance computing infrastructure to researchers who would otherwise lack access to the resources needed to study and advance AI safety.
https://hofvarpnir.ai/
Holtman Systems Research
A solo-researcher company founded by Koen Holtman that conducts AI safety research and participates in the creation of European AI safety standards in support of the EU AI Act.
https://holtmansystemsresearch.nl/
Horizon Events
Horizon Events is a Canadian non-profit that advances AI safety R&D by organizing high-impact events, including the AI Safety Unconference series and monthly Guaranteed Safe AI Seminars.
https://horizonomega.org/
How to pursue a career in technical AI alignment
A career guide written by Charlie Rogers-Smith for people familiar with AI alignment arguments who are considering direct work in the field. Published on the EA Forum and LessWrong in June 2022.
https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment
Human-aligned AI Summer School
An annual 4-day academic summer school held in Prague focused on teaching AI alignment research frameworks to PhD students, ML researchers, and advanced students.
https://humanaligned.ai/
Humans in Control
Humans in Control is a nonpartisan grassroots movement working to protect people and future generations from the risks of unchecked AI through advocacy, coalition-building, and state-level policy campaigns.
https://humansincontrol.org/
Iliad
An umbrella organization for applied mathematics research in AI alignment, now operating under the name Iliad. It organizes the ILIAD conference series, runs fellowship and intensive programs, incubates research organizations, and manages scientific publishing.
https://www.iliad.ac/
ILINA Program
An African-led research program dedicated to building talent, generating impactful research, and shaping policy to advance AI safety, based in Nairobi, Kenya.
https://www.ilinaprogram.org/
Impact Academy Limited
A nonprofit that runs fellowships and educational programs to develop expert, mission-aligned talent for AI safety research and governance.
https://www.impactacademy.org/
Impact Ops
Impact Ops is an operations consultancy that delivers specialist finance, recruitment, entity setup, and systems support to high-impact nonprofits, helping them scale and flourish.
https://impact-ops.org/
Imperial College London
Imperial College London is a world-leading research university specialising in science, technology, engineering, medicine, and business, with significant programs in AI safety, trustworthy AI, and long-term AI risk research.
https://www.imperial.ac.uk/
Import AI
Import AI is a weekly newsletter by Jack Clark (co-founder of Anthropic) covering cutting-edge AI research and its societal implications, read by over 116,000 subscribers.
https://jack-clark.net/
Institute for Advanced Consciousness Studies
A 501(c)(3) research laboratory in Santa Monica, CA that uses neuroimaging, neuromodulation, VR/AR, and altered states to study consciousness, with an AI safety research program on preventing antisocial AI through artificial empathy.
https://advancedconsciousness.org/
Institute for AI Policy and Strategy
A nonpartisan think tank that produces policy research on the implications of advanced AI systems, covering frontier security, compute governance, and international AI strategy to equip policymakers for high-magnitude AI risks.
https://www.iaps.ai/
Institute for Law & AI (LawAI)
An independent legal research think tank, now operating as the Institute for Law & AI, that conducts foundational research and advises governments on the legal and governance challenges posed by artificial intelligence.
https://law-ai.org/
Institute for Security and Technology
A 501(c)(3) nonpartisan think tank that bridges technology and national security policy, with major programs addressing ransomware, frontier AI security, and the catastrophic risks posed by emerging technologies to nuclear stability.
https://securityandtechnology.org/
Intelligence Rising
Intelligence Rising is a strategic AI futures roleplay simulation that lets decision-makers experience the tensions and risks of competitive AI development. It is a project of Technology Strategy Roleplay, a UK registered charity.
https://www.intelligencerising.org/
International AI Governance Alliance (IAIGA)
IAIGA is a Geneva-based non-profit initiative working to establish a supranational AI governance body and a legally binding global treaty to ensure AI safety and equitable distribution of AI-derived benefits.
https://www.iaiga.org/
International Association for Safe & Ethical AI (IASEAI)
IASEAI is an independent nonprofit that works to ensure AI systems operate safely and ethically by shaping policy, promoting research, and building a global community around AI safety.
https://www.iaseai.org/
International Conference on Learning Representations
ICLR is one of the world's premier annual academic conferences dedicated to deep learning and representation learning research. It was founded in 2013 by Yann LeCun and Yoshua Bengio.
https://iclr.cc/
International Conference on Machine Learning
ICML is the premier annual academic conference for machine learning research, bringing together researchers from academia and industry worldwide. It is organized by the International Machine Learning Society (IMLS), a 501(c)(3) nonprofit.
https://icml.cc/
International Dialogues on AI Safety (IDAIS)
A high-level international dialogue series that brings together leading AI scientists and governance experts to build consensus on managing extreme risks from frontier AI systems.
https://idais.ai/
International Institute of Information Technology Hyderabad
IIIT Hyderabad is India's first and leading research-focused IIIT, a not-for-profit public-private partnership university specializing in computer science and AI. It hosts the Responsible and Safe AI Systems course, supported by Open Philanthropy, and is a major hub for AI and machine learning research in India.
https://www.iiit.ac.in/
Jacob Steinhardt
Associate Professor of Statistics and EECS at UC Berkeley and Co-founder & CEO of Transluce, researching how to make machine learning systems understood by and aligned with humans.
https://jsteinhardt.stat.berkeley.edu/
Jennifer Lin
Independent AI safety researcher known for critical analysis of AI timelines and LLM capabilities, with work funded by Open Philanthropy and recognized in the EA community.
https://scholar.google.com/citations?hl=en&user=4EQGl1AAAAAJ&view_op=list_works&sortby=pubdate
Jeremy Rubinoff
Individual AI safety community builder based in Toronto who received Open Philanthropy funding to organize an AI safety retreat in 2023.
Jérémy Scheurer
AI safety researcher specializing in evaluations for deceptive capabilities, scheming, and situational awareness in frontier language models. Research Scientist in the Evaluations Team at Apollo Research.
Johns Hopkins University
Johns Hopkins University hosts AI safety-relevant research led by Prof. Anqi (Angie) Liu, whose group focuses on machine learning for trustworthy AI, including distributionally robust learning and uncertainty quantification under distribution shift.
https://anqiliu-ai.github.io/
Juniper Ventures
Juniper Ventures is a pre-seed venture capital firm that invests in startups explicitly working to make AI secure and beneficial for humanity.
https://juniperventures.xyz/
JUSTICE
A UK legal reform charity that advances access to justice, human rights, and the rule of law through research, advocacy, and strategic court interventions, with a dedicated workstream on AI governance and rights-based frameworks for AI deployment.
https://justice.org.uk/
Kairos Project
Kairos is a US nonprofit that accelerates talent into AI safety and policy by running university group support programs and research mentorship fellowships.
https://kairos-project.org/
Krueger AI Safety Lab (KASL)
An AI safety research group led by David Krueger at the University of Cambridge's Computational and Biological Learning Lab (2021-2024), focused on technical AI alignment, deep learning safety, and reducing existential risk from advanced AI.
https://www.kasl.ai/
Laboratory for Social Minds at Carnegie Mellon University
An interdisciplinary research lab at Carnegie Mellon University, directed by Simon DeDeo, that studies complex social systems through mathematical modeling and empirical investigation to better understand humanity's past, present, and future.
https://sites.santafe.edu/~simon/
Langsikt - Centre for Long-Term Policy
A Norwegian non-profit think tank working to make policymaking more long-term, with a focus on AI governance, pandemic preparedness, biotechnology risks, and institutional reforms to represent future generations.
https://www.langsikt.no/
Lausanne AI Alignment
A student-led AI safety group at EPFL in Lausanne, Switzerland that organizes bootcamps, hackathons, reading groups, and research projects to advance the field of AI safety and alignment.
https://lausanne.aisafety.ch/
LawZero
LawZero is a nonprofit AI safety research organization founded by Yoshua Bengio to develop safe-by-design AI systems that cannot act autonomously or pursue hidden goals.
https://lawzero.org/
Leaf: Dilemmas and Dangers in AI
Leaf runs online fellowships for exceptional teenagers (ages 15-19) to explore how they can have the most positive impact, including through a flagship course on AI safety called Dilemmas and Dangers in AI.
https://leaf.courses/
Leap Labs
Leap Labs builds AI-powered interpretability tools to accelerate scientific discovery by finding patterns in complex datasets that humans and standard methods miss.
https://www.leap-labs.com/
Lee Foster
Lee Foster is an AI security researcher and the Co-Founder and CEO of Aspect Labs who received Open Philanthropy funding in 2024 to build an LLM Misuse Database documenting real-world instances of large language model misuse.
https://www.aspectlabs.ai/
Legal Advocates for Safe Science and Technology (LASST)
A nonprofit that uses legal advocacy, including amicus briefs, impact litigation, and policy engagement, to mitigate catastrophic risks from advanced AI systems and biotechnology.
https://lasst.org/
Legal Safety Lab
A Dutch foundation (stichting) that uses legal expertise and advocacy within Europe to promote safer development and deployment of frontier technologies including AI, biotechnology, and nuclear technology.
https://legalsafetylab.org/
LessWrong
A community blog and forum devoted to refining the art of human rationality, with major focus areas including AI alignment, cognitive biases, decision-making, and effective altruism.
https://www.lesswrong.com/
Lethal Intelligence
Lethal Intelligence is an AI risk awareness media project producing original explainer films, podcasts, and social media content about the existential dangers of advanced AI systems.
https://lethalintelligence.ai/
Leverhulme Centre for the Future of Intelligence (CFI)
The Leverhulme Centre for the Future of Intelligence (CFI) is an interdisciplinary research centre at the University of Cambridge that explores the nature, ethics, and impact of artificial intelligence. It brings together researchers from machine learning, philosophy, social science, and other fields to address both near-term and long-term challenges posed by AI.
https://www.lcfi.ac.uk/
Lightcone Infrastructure
A nonprofit that builds infrastructure for the rationality and AI safety communities, running LessWrong, the AI Alignment Forum, and the Lighthaven campus in Berkeley, CA.
https://www.lightconeinfrastructure.com/
Lightspeed Grants
Lightspeed Grants is a fast-turnaround grantmaking program run by Lightcone Infrastructure, providing rapid funding for projects aimed at reducing existential risk and improving humanity's long-term future.
https://lightspeedgrants.org/
Lionheart Ventures
Lionheart Ventures is a seed-stage venture capital firm investing in transformative artificial intelligence and frontier mental health technologies to mitigate civilizational risk.
https://www.lionheart.vc/
Live Theory
An AI safety research initiative developing new adaptive theoretical frameworks and AI interface designs to keep human sensemaking at pace with rapidly advancing AI systems.
https://groundless.ai/
London AI Safety Research (LASR) Labs
LASR Labs is a 13-week intensive technical AI safety research program in London that places researchers in supervised teams to produce peer-reviewed papers. It is operated by Arcadia Impact and focuses on reducing the risk of loss of control to advanced AI.
https://www.lasrlabs.org/
London Initiative for Safe AI
LISA is a London-based charity that serves as a hub and infrastructure provider for the AI safety ecosystem, hosting resident organizations, training programs, and independent researchers.
https://www.safeai.org.uk/
Lone Pine Games, LLC
Lone Pine Games is a one-person indie game studio run by Conor Sullivan in Tempe, Arizona. It received a $100,000 Long-Term Future Fund grant in 2022 to develop a video game explaining the AI Stop Button Problem to the public and STEM professionals.
https://lonepine.games/
Long-Term Future Fund
An expert-managed grantmaking fund within EA Funds that distributes millions annually to reduce global catastrophic risks, with a primary focus on AI safety research, biosecurity, and other existential risk mitigation work. It makes grants of four to six figures, mostly to individuals working on existential risk reduction.
https://funds.effectivealtruism.org/funds/far-future
Longview Philanthropy
An independent, expert-led philanthropic advisory that helps major donors direct funding toward reducing catastrophic and existential risks, with a core focus on AI safety, biosecurity, and nuclear weapons policy.
https://www.longview.org/
Luthien
Luthien is a Seattle-based nonprofit building production-ready AI control infrastructure that assumes AI models may act adversarially and prevents misaligned systems from achieving harmful goals.
https://luthienresearch.org/
Machine Intelligence and Normative Theory Lab (MINT Lab)
A research lab at the intersection of philosophy and AI safety, using philosophical and computational methods to study AI alignment, governance, and normative competence, founded and directed by Seth Lazar at Johns Hopkins University and the Australian National University.
https://mintresearch.org/
Machine Intelligence Research Institute
A pioneering AI safety nonprofit that conducts research and public outreach to help prevent human extinction from the development of artificial superintelligence, with a current focus on policy advocacy and communications.
https://intelligence.org/
Machine Learning for Alignment Bootcamp (MLAB)
MLAB is an intensive in-person bootcamp run by Redwood Research that trains technically skilled programmers in the machine learning engineering skills needed to work on AI alignment research.
https://github.com/redwoodresearch/mlab
Machine Learning for Socio-technical Systems Lab
A university research lab at the University of Rhode Island directed by Dr. Sarah M Brown, studying how machine learning interacts with complex socio-technical systems, with a focus on fairness of automated decision-making and AI safety evaluation.
https://ml4sts.com/
Macroscopic Ventures
Swiss nonprofit funder making grants and investments to reduce suffering risks from catastrophic AI misuse, AI conflict, and other large-scale harms. Formerly known as Center for Emerging Risk Research (CERR) and Polaris Ventures.
https://macroscopic.org/
Macrostrategy Research Initiative
A nonprofit research organization founded by Nick Bostrom to study how present-day actions influence humanity's long-term future, with a focus on existential risk, AI safety, and AGI governance.
https://www.macrostrategy.co.uk/
Manifold Markets
The world's largest social prediction market platform, where anyone can create and trade on prediction markets for any topic using play money called Mana.
https://manifold.markets/
Manifund
A philanthropic platform and 501(c)(3) nonprofit that facilitates regranting, impact certificates, and crowdfunding for charitable projects, with a primary focus on AI safety and effective altruism cause areas.
https://manifund.org/
Massachusetts Institute of Technology
MIT is a private research university in Cambridge, Massachusetts, widely recognized as a global leader in science, engineering, and technology research, including AI safety and alignment.
https://mit.edu/
Mathematical Metaphysics Institute
A nonprofit research institute that seeks to develop mathematically rigorous foundations for metaphysics, using category theory to formalize insights from contemplative traditions, with applications to AI alignment and trustworthy AI.
https://www.mathematicalmetaphysics.org/
Matthew Kenney
Individual AI safety researcher and founder of the Algorithmic Research Group, focused on benchmarking AI agents' capacity for autonomous research and development.
https://www.algorithmicresearchgroup.com/
Meaning Alignment Institute
A nonprofit research institute that develops methods to align AI systems, markets, and democratic institutions with what people genuinely value, using an approach they call full-stack alignment.
https://www.meaningalignment.org/
Median Group
A small nonprofit research organization studying global catastrophic risks, best known for its insight-based AI timelines model and research on the feasibility of training AGI via deep reinforcement learning.
https://mediangroup.org/
MentaLeap
MentaLeap is an Israel-based AI safety research group focused on mechanistic interpretability, applying neuroscience and cybersecurity expertise to reverse-engineer neural networks and reduce risks from advanced AI systems.
https://mentaleap.ai/
Meridian Cambridge
Meridian Cambridge is an independent research and incubation hub in Cambridge, UK focused on AI safety, biosecurity, frontier-risk policy, and institutional design. Formerly Effective Altruism Cambridge CIC, it hosts the Cambridge AI Safety Hub, biosecurity and governance hubs, research labs, and fellowships.
https://www.meridiancambridge.org/
Meta Charity Funders
Meta Charity Funders (MCF) is a donor funding circle that pools capital and expertise to support EA meta charities - organizations working one level removed from direct impact. Members each commit $100,000 or more annually and coordinate through biannual open grant rounds.
https://www.metacharityfunders.com/
Metaculus
An online forecasting platform and aggregation engine that harnesses collective intelligence to produce calibrated predictions on questions of global importance, including AI timelines, biosecurity, nuclear risk, and climate change.
https://www.metaculus.com/
Michigan State University
Michigan State University's Department of Computer Science and Engineering (CSE) conducts AI safety research, notably through the OPTML group's work on trustworthy machine learning and LLM unlearning.
https://engineering.msu.edu/about/departments/cse
Mila
Mila is the Quebec Artificial Intelligence Institute, the world's largest academic research center for deep learning, founded by Turing Award winner Yoshua Bengio. It brings together over 1,400 researchers and professors to advance AI for the benefit of all, with responsible and safe AI as a core strategic priority.
https://mila.quebec/en
Miles's Substack
The personal newsletter of Miles Brundage, former Head of Policy Research at OpenAI, covering independent AI policy research and governance.
https://milesbrundage.substack.com/
Mindstream Project
Mindstream Project operates the Buddhism & AI Initiative, a collaborative effort to bring together Buddhist communities, technologists, and contemplative researchers to help shape the future of artificial intelligence.
https://www.engagedbuddhists.ai/
Missing Measures
A pre-launch organization fiscally sponsored by Lightcone Infrastructure and funded by the Survival and Flourishing Fund in 2025.
https://missingmeasures.com/
MIT AI Risk Repository
A comprehensive, living database of over 1,700 AI risks extracted from published frameworks and organized through causal and domain taxonomies, maintained as a program within MIT FutureTech.
https://airisk.mit.edu/
MIT Algorithmic Alignment Group
A research group at MIT CSAIL developing algorithmic frameworks, techniques, and policies to make AI systems safe and socially beneficial. Led by Associate Professor Dylan Hadfield-Menell.
https://algorithmicalignment.csail.mit.edu/
MIT FutureTech
MIT FutureTech is an interdisciplinary research group at MIT CSAIL studying the economic and technical foundations of progress in computing and AI. The group produces rigorous insights on AI trends, risks, and impacts to inform policy, industry, and scientific funding decisions.
https://futuretech.mit.edu/
ML Alignment & Theory Scholars (MATS)
MATS (ML Alignment & Theory Scholars) is the largest AI safety research fellowship and talent pipeline, running intensive 12-week research programs that pair fellows with leading AI alignment mentors in Berkeley and London.
https://www.matsprogram.org/
ML Safety Newsletter
A free newsletter publishing curated summaries of recent machine learning safety research, run by Dan Hendrycks and contributors associated with the Center for AI Safety.
https://newsletter.mlsafety.org/
ML4Good
ML4Good runs intensive, fully-funded in-person bootcamps to train motivated people for careers in AI safety, covering both technical and governance tracks.
https://ml4good.org/
Model Evaluation & Threat Research (METR)
METR is a research nonprofit that develops scientific methods to evaluate whether frontier AI systems could pose catastrophic risks to society, working with leading AI labs on pre-deployment safety assessments.
https://metr.org/
Modeling Cooperation
A research project that uses game theory and computational modeling to reduce catastrophic risks from competition in the development of transformative AI.
https://www.modelingcooperation.com/
Modulo Research
Modulo Research is a UK-based AI safety research organization that conducts empirical evaluations of large language models and develops datasets to advance scalable oversight research.
https://www.moduloresearch.com/
Mox
Mox is San Francisco's largest AI safety coworking and community space, providing workspace, events, and fellowships for researchers and organizations working on high-impact problems.
https://moxsf.com/
MSEP Project
The Molecular Systems Engineering Platform (MSEP) is a free, open-source software tool conceived by nanotechnology pioneer Eric Drexler for designing and simulating atomically precise nanomechanical systems.
https://msep.one/
Mythos Ventures
Mythos Ventures is an early-stage venture capital firm investing in prosocial technologies and safe AI systems. They back pre-seed and seed-stage founders building AGI-resilient, positive-impact companies.
https://www.mythos.vc/
National Academies of Sciences, Engineering, and Medicine
The National Academies of Sciences, Engineering, and Medicine is the United States' preeminent independent scientific advisory body, providing expert consensus reports to inform government policy on science, engineering, and medicine, including AI safety and governance.
https://www.nationalacademies.org/
National Science Foundation
The National Science Foundation (NSF) is an independent US federal agency that funds basic research and education across all non-medical fields of science and engineering, including substantial investment in AI safety-relevant research.
https://www.nsf.gov/
Neel Nanda
Neel Nanda is the Mechanistic Interpretability Team Lead at Google DeepMind and creator of TransformerLens, the primary open-source library for mechanistic interpretability research.
https://www.neelnanda.io/
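As a flavor of the workflow TransformerLens enables, here is a minimal Python sketch (an illustrative example, not drawn from this directory; it assumes an environment with the transformer_lens package installed):

# Minimal TransformerLens sketch: load GPT-2 small and cache its activations.
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

# Run the model on a prompt, caching every intermediate activation.
logits, cache = model.run_with_cache("AI safety research")

# Inspect layer-0 attention patterns: shape [batch, head, query_pos, key_pos].
attn_patterns = cache["pattern", 0]
print(attn_patterns.shape)

New York University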
New York University is a major private research university in New York City, home to several AI safety-relevant research groups including the NYU Alignment Research Group and the Center for Responsible AI.
https://www.nyu.edu/
Nice Light
Nice Light is a London-based documentary film production company that produces films on the risks of advanced AI for broad public audiences.

Non-Trivial
Non-Trivial runs free online research fellowships for talented young people ages 14-20 to develop impactful projects on the world's most pressing problems. The program offers mentorship, scholarships up to $10,000, and a global peer community.
https://www.non-trivial.org/
Nonlinear
A nonprofit AI safety organization that researches, funds, and seeds high-impact interventions to reduce existential risk from artificial intelligence, operating key programs including the Nonlinear Network funding platform and the Nonlinear Library podcast.
https://www.nonlinear.org/
Northeastern University
Northeastern University is a private R1 research university in Boston, Massachusetts, home to notable AI safety and mechanistic interpretability research through its Khoury College of Computer Sciences and Institute for Experiential AI.
https://www.northeastern.edu/
NYU Alignment Research Group (ARG)
An academic research group at New York University doing empirical work with language models to address longer-term safety concerns about highly capable AI systems.
https://wp.nyu.edu/arg/
Observatorio de Riesgos Catastróficos Globales
A scientific diplomacy organization working to improve global catastrophic risk governance in Spanish-speaking countries, with focus areas spanning AI regulation, pandemic biosecurity, food security, and risk management systems.
https://www.riesgoscatastroficosglobales.com/
Obsolete
Reporting and analysis on capitalism, great power competition, and the race to build machine superintelligence by freelance journalist Garrison Lovely.
https://www.obsolete.pub/
Odyssean Institute
A UK-based research and advocacy think tank that combines complexity modelling, expert elicitation, and democratic deliberation to improve policymaking around existential and catastrophic risks.
https://www.odysseaninstitute.org/
Open Phil AI Fellowship
A fellowship program by Open Philanthropy that funds PhD students in AI and machine learning to pursue research aimed at reducing catastrophic risks from advanced AI systems.
https://coefficientgiving.org/ai-fellowship/
Open Philanthropy Technology Policy Fellowship
A fellowship program run by Open Philanthropy that placed individuals in US government, Congressional, and think tank roles focused on AI and biosecurity policy. The program has since concluded.
https://coefficientgiving.org/open-philanthropy-technology-policy-fellowship/
OpenAI
OpenAI is an AI research and deployment company working to ensure that artificial general intelligence benefits all of humanity. It is the creator of ChatGPT, GPT-4, and a wide range of frontier AI models.
https://openai.com/
OpenBook
OpenBook is a searchable database of approximately 4,000 effective altruism grants from major EA funders, built to make funding flows in the EA ecosystem transparent and discoverable. The project is no longer actively maintained.
https://openbook.fyi/
OpenMined
OpenMined is a 501(c)(3) nonprofit building open-source privacy-preserving AI infrastructure that enables secure computation across siloed data. Their tools allow AI auditors and researchers to evaluate proprietary AI systems without requiring direct access to sensitive models or data.
https://openmined.org/
Oregon State University
Oregon State University is a public research university in Corvallis, Oregon, whose hardware security research group contributed to AI compute governance through the Survival and Flourishing Fund's FlexHEG (Flexible Hardware-Enabled Guarantees) program.
https://oregonstate.edu/
Orthogonal
A non-profit AI alignment research organization focused on agent foundations, pursuing formal goal alignment approaches that would scale to superintelligence.
https://orxl.org/
Ought
Ought was a nonprofit AI alignment research lab that developed factored cognition approaches and built Elicit, an AI research assistant, before spinning Elicit off as an independent public benefit corporation in 2023.
https://ought.org/
Oxford AI Safety Initiative
OAISI is a student- and researcher-led community at the University of Oxford committed to reducing catastrophic risks from advanced AI. It runs technical and governance programmes to support existing researchers and introduce new Oxford talent to AI safety work.
https://oaisi.org/
Oxford China Policy Lab
A non-partisan, interdisciplinary research group based at the University of Oxford that produces policy-relevant research to mitigate global risks stemming from US-China great power competition, with a particular focus on artificial intelligence and emerging technologies.
https://oxfordchinapolicylab.org/
Oxford Martin AI Governance Initiative
A research initiative at the University of Oxford's Martin School that combines technical AI expertise with deep policy analysis to understand and mitigate lasting risks from AI through governance research, decision-maker education, and training future technology governance leaders.
https://aigi.ox.ac.uk/
P.H.I
P.H.I. (Prompt Human Inc.) was the individual research entity of Quentin Feuillade-Montixi, a French AI safety researcher focused on model psychology and LLM evaluation.
Palisade Research
Nonprofit investigating cyber offensive AI capabilities and the controllability of frontier AI models to help humanity avoid permanent disempowerment by strategic AI agents.
https://palisaderesearch.org/
Panoplia Laboratories
Panoplia Laboratories (now operating as Active Site) is a nonprofit that evaluates the risks and capabilities of AI-driven biology through wet lab research, and develops broad-spectrum antivirals for pandemic preparedness.
https://www.panoplialabs.org/
Partnership on AI (PAI)
Partnership on AI is a global multi-stakeholder nonprofit that brings together industry, civil society, and academia to address the social implications of AI and promote responsible development and deployment.
https://partnershiponai.org/
Paul Christiano's Blog
Personal AI alignment blog by Paul Christiano, covering technical approaches to making AI systems safe, honest, and beneficial. The archive remains a key reference in the field.
https://ai-alignment.com/
Pause House
Pause House is a residential community in Blackpool, UK, that provides free housing and stipends to activists working toward a global pause on AGI development.
https://gregcolbourn.substack.com/p/pause-house-blackpool
PauseAI
PauseAI is a global grassroots movement advocating for an immediate pause on the development of frontier AI systems until their safety can be demonstrated and they can be kept under democratic control.
https://pauseai.info/
PEAKS
PEAKS is a coworking space in Zurich, Switzerland for professionals working on Effective Altruism and AI Safety research.
https://peaks-office.ch/
Penn State University
Penn State University hosts AI safety research led by Prof. Rui Zhang, whose group received Open Philanthropy funding to develop methods for detecting and mitigating sandbagging in AI systems.
https://ryanzhumich.github.io/
PIBBSS
A nonprofit research organization that runs interdisciplinary fellowship and affiliate programs bringing researchers from complex systems sciences (neuroscience, ecology, economics, physics, and others) to work on AI safety and alignment research.
https://pibbss.ai/fellowship/
Pivotal Research Fellowship
Pivotal Research runs a 9-week in-person research fellowship in London for early-career researchers working on AI safety, AI governance, and biosecurity. Fellows work alongside mentors from leading organizations to produce impactful research and launch careers in reducing global catastrophic risks.
https://www.pivotal-research.org/
Planned Obsolescence
A Substack newsletter by Ajeya Cotra exploring AI capabilities, timelines, and the societal implications of increasingly autonomous AI systems.
https://www.planned-obsolescence.org/
Plurality Institute
A nonprofit research hub that develops and experiments with plural technologies to strengthen democracy and support human cooperation at scale, bridging computer science, political science, and philosophy.
https://www.plurality.institute/
Poseidon Research
Poseidon Research is an independent AI safety laboratory conducting deep technical research in interpretability, control, and secure monitoring to make advanced AI systems transparent, trustworthy, and governable.
https://poseidonresearch.org/
Pour Demain
A Swiss non-profit think tank that develops evidence-based policy proposals on AI safety, biosecurity, and emerging technologies, bridging science, politics, and civil society for Switzerland and beyond.
https://www.pourdemain.ngo/
Practical AI Alignment and Interpretability Research Group
A remote, non-profit research group focused on mechanistic interpretability of deep learning models, developing causal abstraction frameworks, open-source course materials, and mentorship programs for the AI safety community.
https://prair.group/
Preamble Windfall Foundation
The Preamble Windfall Foundation is a small Pittsburgh-based 501(c)(3) that supports animal welfare research and philanthropy guidance, notably through the Planetary Animal Welfare Survey (PAWS) project.
https://preambleforgood.org/
Princeton University
Princeton University is a leading Ivy League research institution that conducts significant AI safety and AI governance research through several interdisciplinary centers and initiatives.
https://www.princeton.edu/
Probably Good
Probably Good is a nonprofit that helps individuals build high-impact careers through free, evidence-based guides, 1-on-1 advising, and a curated job board.
https://probablygood.org/
Psychosecurity Ethics @ EURAIO
A program within EURAIO (European Responsible Artificial Intelligence Office) that convenes expert summits and develops frameworks to address AI-driven psychological manipulation and protect civil liberties from autonomy-eroding AI systems.
https://www.psychosecurity.ai/
Purdue University
Purdue University is a major public research university in West Lafayette, Indiana, whose computer science department has received AI safety funding for research on language model robustness and adversarial deception detection.
https://www.purdue.edu/
Quantified Uncertainty Research Institute
A nonprofit research organization that builds open-source tools and conducts research on forecasting, epistemics, and uncertainty quantification to improve decision-making for the long-term future of humanity.
https://quantifieduncertainty.org/
RadicalxChange Foundation Ltd.
A nonprofit foundation promoting democratic innovation, plural technology, and new governance mechanisms such as quadratic voting and funding to enable more equitable and participatory collective decision-making.
https://www.radicalxchange.org/
RAISEimpact
RAISEimpact is a consulting program that helps AI safety organizations strengthen their management, leadership, and organizational culture to amplify their effectiveness.
https://www.raiseimpact.org/
RAND Corporation
A major nonprofit policy research organization that, through its Center on AI, Security, and Technology (CAST) and Global and Emerging Risks division, conducts influential research on AI safety, frontier model security, AI governance, and existential risk policy.
https://www.rand.org/
Rational Animations
Rational Animations is a YouTube channel producing high-quality animated videos about AI safety, rationality, and effective altruism to reach mainstream audiences.
https://www.rationalanimations.com/
Rationality Meetups
Coordinates and supports rationality-focused community meetup groups worldwide, serving as a hub for ACX (Astral Codex Ten), LessWrong, and broader rationality community organizers.
https://www.rationalitymeetups.org/
Redwood Research
A nonprofit AI safety research lab that pioneers threat assessment and mitigation techniques for advanced AI systems, with a current focus on AI control protocols and detecting strategic deception in language models.
https://www.redwoodresearch.org/
Research on AI & International Relations
A research project fiscally sponsored by Convergence Analysis, focused on studying how AI technologies affect international relations, global governance, and geopolitical dynamics.
Responsible AI Collaborative
The Responsible AI Collaborative (TheCollab) is a nonprofit that maintains the AI Incident Database (AIID), the leading public repository of documented real-world AI harms and near-harms.
https://incidentdatabase.ai/
Rethink Priorities
A research-focused think-and-do tank that conducts empirical research across animal welfare, global health and development, AI, and other cause areas to uncover high-impact, neglected opportunities for improving the lives of humans and animals.
https://rethinkpriorities.org/
Rice, Hadley, Gates & Manuel LLC
Rice, Hadley, Gates & Manuel (RHGM) is an international strategic consulting firm founded by former senior U.S. national security officials that helps companies navigate emerging markets and technology policy. Through Open Philanthropy funding, the firm has conducted research on AI accident risk and technology competition between the U.S. and China.
https://www.rhgm.com/
RiesgosIA.org
RiesgosIA.org is a Spanish-language non-profit providing open-access tools and educational resources on AI safety and governance, primarily serving Spanish-speaking communities.
https://riesgosia.org/
Rising Tide
Blog by Helen Toner (Director of Strategy at CSET and former OpenAI board member) offering analysis on navigating the transition to advanced AI systems.
https://helentoner.substack.com/
Safe AI Forum
A US 501(c)(3) nonprofit dedicated to advancing international cooperation to reduce extreme AI risks, best known for running the International Dialogues on AI Safety (IDAIS) series that convenes leading scientists from around the world.
https://saif.org/
Safe Superintelligence Inc. (SSI)
Safe Superintelligence Inc. (SSI) is an AI research company founded by Ilya Sutskever focused solely on building safe superintelligence, with no other products or commercial distractions.
https://ssi.inc/
SaferAI
A French nonprofit that develops AI risk management frameworks, independently rates AI companies' safety practices, and contributes to international AI governance standards.
https://www.safer-ai.org/
Sage Future
Sage builds tools to improve forecasting skills and public understanding of AI capabilities, with the goal of reducing global catastrophic risks from emerging technologies.
https://sage-future.org/
Samotsvety Forecasting
Samotsvety is an elite team of superforecasters applying rigorous probability analysis to high-stakes questions in AI risk, nuclear risk, and existential risk. They are widely regarded as one of the best forecasting teams in the world.
https://samotsvety.org/
Saturn Data
Saturn Data builds FPGA-accelerated servers for high-memory, high-bandwidth workloads and has received funding to prototype flexible hardware-enabled governors (FlexHEGs) for AI compute governance.
https://saturndata.com/
Saving Humanity from Homo Sapiens (SHfHS)
SHfHS is a small philanthropic foundation that identifies and funds researchers and organizations working on existential risk reduction. It acts as a funding intermediary rather than conducting direct research.
http://shfhs.org/
Science of Trustworthy AI
A research funding program run by Schmidt Sciences that supports foundational technical research on understanding, predicting, and controlling risks from frontier AI systems. The program funds academic and nonprofit researchers working on AI safety science, evaluation methodology, and oversight of advanced AI.
https://www.schmidtsciences.org/trustworthy-ai/
Secure AI Project
A nonprofit that develops and advocates for pragmatic policies to reduce the risk of severe harm from advanced AI, promoting transparency, accountability, and safe development through state and federal legislation.
https://secureaiproject.org/
SecureBio
A biosecurity nonprofit working to protect humanity against catastrophic pandemics through AI risk evaluation, pathogen-agnostic early warning surveillance, and DNA synthesis screening.
https://securebio.org/
SeedAI
SeedAI is a Washington, D.C. nonprofit working at the intersection of AI policy and practical application, helping policymakers and communities across the U.S. understand, adopt, and shape AI responsibly.
https://www.seedai.org/
Seldon Labs
An AI security accelerator and research lab based in San Francisco that invests in and supports early-stage startups building infrastructure for safe AGI deployment.
https://seldonlab.com/
Sentience Institute
A nonprofit think tank researching the expansion of humanity's moral circle, with a primary focus on digital minds and the moral status of AI systems.
https://www.sentienceinstitute.org/
Sentinel
A foresight and emergency response nonprofit that monitors global catastrophic risks using AI-augmented analysis and expert forecasters, publishing weekly risk briefings and maintaining a reserve team for rapid crisis response.
https://sentinel-team.org/
Siliconversations
Siliconversations is a YouTube channel that creates animated videos explaining AI safety risks and existential risk from advanced AI to general audiences. It is run by a former quantum scientist who became a full-time content creator.
https://www.youtube.com/@Siliconversations
Simon Institute for Longterm Governance
A Geneva-based think tank that fosters international cooperation on governing frontier AI by conducting research, facilitating dialogue between technical and policy communities, and training diplomats and civil servants.
https://simoninstitute.ch/
Simon McGregor
Simon McGregor is a complex adaptive systems researcher at the University of Sussex who works on formal theories of agency and cognition, and organizes workshops bridging AI safety and artificial life research.
Simplex
AI safety research organization applying computational mechanics from physics and computational neuroscience to build a rigorous science of intelligence, with a focus on understanding the internal representations and emergent behavior of neural networks.
https://www.simplexaisafety.com/
Sincxpress Education
Sincxpress Education is a STEM education company founded by Dr. Mike X Cohen that produces online courses and textbooks on applied mathematics, deep learning, and mechanistic interpretability for AI safety. Its courses have reached over 300,000 learners worldwide.
https://sincxpress.com/
Singapore AI Safety Hub
Singapore's first civil society organization for AI safety, providing a co-working space, events, and community hub for researchers and professionals working on AI safety governance, technical research, and field-building in Asia.
https://www.aisafety.sg/
SLT Summit organizers
Organizers of the Singular Learning Theory and Alignment Summit, a conference series connecting mathematical foundations of learning theory with AI alignment research.
https://singularlearningtheory.com/
Softmax
Softmax is an AI alignment research startup developing the science of organic alignment through multi-agent reinforcement learning. Founded by Emmett Shear, Adam Goldstein, and David Bloomin, the company studies how agents learn to cooperate, share goals, and form collectively intelligent systems.
https://softmax.com/
SPARC
SPARC is a free two-week summer program for mathematically gifted high school students, teaching applied rationality, decision theory, and AI safety to cultivate a generation of thoughtful technical leaders.
https://www.sparc.camp/
Species
Species is a YouTube channel run by Drew Spartz that produces high-effort mini-documentaries educating a general audience about AI risk and the implications of advancing AGI.
https://www.youtube.com/@AISpecies
Stanford Existential Risks Initiative
A Stanford University initiative that hosts and promotes academic scholarship on existential risks, running research fellowships, conferences, courses, and discussion groups focused on AI, nuclear war, pandemics, and climate change.
https://seri.stanford.edu/
Stanford University
Stanford University is a leading research university hosting several AI safety-relevant programs, including the Human-Centered AI Institute (HAI), the Existential Risks Initiative (SERI), the Center for International Security and Cooperation (CISAC), and the Center for AI Safety.
https://www.stanford.edu/
Steve Byrnes's Brain-Like AGI Safety
Steve Byrnes is a physicist and Research Fellow at Astera Institute working on AI safety through a neuroscience-informed lens, focusing on alignment challenges specific to future brain-like AGI systems.
https://sjbyrnes.com/
Stiftung Neue Verantwortung
interface (formerly Stiftung Neue Verantwortung) is a Berlin-based independent think tank producing technology policy analysis and ideas for European policymakers and the public.
https://www.interface-eu.org/
Stop AGI
Stop AGI is a project and website launched by Andrea Miotti in April 2023 to communicate the extinction risks of artificial general intelligence to the public and propose policy solutions to prevent its development.
https://stop.ai/
Stop AI
Stop AI is a grassroots activist organization that uses non-violent civil disobedience and public advocacy to demand a permanent, enforceable global ban on the further development of frontier AI technology.
https://www.stopai.info/
Straumli
Straumli is an AI safety company that offers managed auditing and self-serve evaluations to help AI developers identify misuse risks and ship safer models faster.
https://straumli.ai/
Study and Training Related to AI Policy Careers
An Open Philanthropy grant program providing scholarship and career development funding for individuals pursuing careers in AI governance and policy.
https://www.openphilanthropy.org/funding-for-study-and-training-related-to-ai-policy-careers/
Successif
Successif helps mid-career and senior professionals transition into high-impact careers in AI safety and governance through free personalized advising, workshops, and job market research.
https://www.successif.org/
Supervised Program for Alignment Research
SPAR is a part-time, remote research fellowship that pairs aspiring AI safety and policy researchers with experienced mentors for 3-month research projects. It is one of the largest AI safety research fellowships by participant count.
https://sparai.org/
Surge AI
Surge AI is a data labeling and AI training data company that provides high-quality human annotation, RLHF datasets, and adversarial red-teaming services to frontier AI labs including Anthropic, OpenAI, Google, Microsoft, and Meta.
https://surgehq.ai/
Survival and Flourishing Fund (SFF)
A major philanthropic fund that organizes grant applications and evaluates them using the S-Process algorithm to direct Jaan Tallinn's giving toward organizations working to ensure humanity's long-term survival and flourishing. It is the second-largest funder of AI safety after Open Philanthropy.
https://survivalandflourishing.fund/
Swiss AI Safety Summer Camp
A free in-person bootcamp in Switzerland introducing students and early-career researchers to AI safety through technical and conceptual coursework. The camp covers alignment, mechanistic interpretability, and governance tracks.
https://www.aisafetycamp.ch/
Talos Network
A German nonprofit that cultivates the next generation of European AI policy leaders through its flagship Talos Fellowship, combining training, a Brussels policymaking summit, and paid placements at leading think tanks and policy organizations.
https://www.talosnetwork.org/
TamperSec
A hardware security startup developing tamper-proof enclosures for AI chips to prevent physical attacks on AI hardware and enable international AI governance through verifiable compliance mechanisms.
https://tampersec.com/
Tarbell Center for AI Journalism
A nonprofit supporting journalism that helps society navigate the emergence of increasingly advanced AI, through fellowships, grants, and its own publication Transformer.
https://www.tarbellcenter.org/
Team Shard
Team Shard is a small alignment research collective led by Alex Turner (TurnTrout) that studies how reinforcement learning induces values in trained agents, with the goal of learning to reliably instill human-compatible values in AI systems.
https://turntrout.com/team-shard
Technical Alignment Impossibility Proofs
An independent research project focused on proving formal impossibility results in AI alignment using theoretical computer science methods, led by Alexander Bistagne as a Ronin Institute Fellow.
Technical Alignment Research Accelerator (TARA)
TARA is a free 14-week part-time technical AI safety training program for Python programmers in the Asia-Pacific region, enabling participants to develop AI safety research skills without relocating or leaving their jobs.
https://www.taraprogram.org/
Technical University of Munich
Technical University of Munich (TUM) is one of Europe's leading research universities, with significant AI safety and reliable AI research programs including the Konrad Zuse School of Excellence in Reliable AI (relAI).
https://www.tum.de/
Technion - Israel Institute of Technology
Israel's oldest and largest research university, founded in 1912, with particular strength in computer science, engineering, and AI research. It ranks first in Europe and second globally for AI research output.
https://www.technion.ac.il/en/
The AI Governance Archive (TAIGA)
TAIGA is a private platform for qualified AI governance researchers to share non-public research, coordinate efforts, and find collaborators. It serves as a centralized hub to improve the efficiency and effectiveness of the transformative AI strategy and governance research community.
https://www.taigarchive.com/
The AI Policy Network (AIPN)
AIPN is a bipartisan 501(c)(4) advocacy organization that lobbies the U.S. federal government to enact policies preparing America for the emergence of AGI and advanced AI systems. It brings together government leaders, technology policy experts, and technical researchers to champion human control of transformative AI.
https://theaipn.org/
The AI Policy Podcast
A biweekly podcast from CSIS's Wadhwani AI Center hosted by Gregory C. Allen, covering AI policy, regulation, national security, and geopolitics.
https://www.csis.org/podcasts/ai-policy-podcast
The AI Risk Network (ARN)
A Baltimore-based nonprofit media platform that produces podcasts, videos, and social content to bring AI extinction risk into mainstream public conversation.
https://www.guardrailnow.org/
The AI Whistleblower Initiative (AIWI)
An independent nonprofit supporting whistleblowers at frontier AI companies through expert guidance, legal support, and secure anonymous reporting channels.
https://aiwi.org/
The Alliance for Secure AI Action
A Washington, D.C.-based 501(c)(3) nonprofit that educates the public, policymakers, and media about the risks of advanced AI and advocates for bipartisan safeguards before AGI arrives.
https://secureainow.org/
The Australian Responsible Autonomous Agents Group
A cross-institutional Australian research collective focused on multi-objective reinforcement learning approaches to AI safety and alignment, with researchers at Federation University, Deakin University, and UNSW.
https://araac.au/
The Building Capacity Blog
A Substack newsletter by Gergő Gáspár covering fieldbuilding strategy, careers, and marketing for the AI Safety and Effective Altruism communities.
https://fieldbuilding.substack.com/
The Cognitive Revolution
A leading AI podcast hosted by Nathan Labenz that interviews AI builders, researchers, and investors to help leaders make sense of transformative developments in artificial intelligence.
https://www.cognitiverevolution.ai/
The Compendium
The Compendium is a living document and website that presents a comprehensive, accessible argument for why artificial general intelligence poses an extinction risk to humanity and what can be done about it.
https://www.thecompendium.ai/
The Future Society
A nonprofit organization based in the US and Europe that works to align AI through better governance, developing and advocating for AI governance mechanisms ranging from laws and regulations to voluntary frameworks.
https://thefuturesociety.org/
The Goodly Institute
A nonprofit R&D lab (operating as Goodly Labs) that builds collective intelligence tools to combat misinformation, strengthen democratic deliberation, and foster civic engagement through rigorous social science research.
https://www.goodlylabs.org/
The Intrinsic Perspective
Erik Hoel's Substack newsletter covering consciousness, AI, science, literature, and cultural commentary, with a focus on bridging disciplinary barriers between the sciences and humanities.
https://www.theintrinsicperspective.com/
The Midas Project
An AI safety advocacy nonprofit that monitors major AI companies' safety policies and conducts public campaigns to pressure the industry toward greater transparency, accountability, and responsible development practices.
https://www.themidasproject.com/
The Millennium Project
A global participatory futures research think tank that produces the annual State of the Future report and tracks 15 Global Challenges facing humanity, with growing focus on AGI governance and existential risk.
https://www.millennium-project.org/
The Navigation Fund
The Navigation Fund is a major philanthropic funder that grants over $60 million annually to high-impact organizations working on climate change, farm animal welfare, criminal justice reform, open science, and AI safety.
https://www.navigation.org/
The Power Law
The Power Law is a Substack newsletter by Peter Wildeford (also known as Peter Hurford) covering AI forecasting, AI policy, national security, and emerging technology.
https://peterwildeford.substack.com/
The Society Library
A nonprofit that archives humanity's ideas, ideologies, and world-views through structured debate mapping, with a focus on AI safety, alignment, and democratic governance of AI.
https://www.societylibrary.org/
The Unjournal
A nonprofit that commissions and funds open, expert evaluation and quantitative rating of economics and social science research relevant to global priorities, without the constraints of traditional academic journals.
https://www.unjournal.org/
The Wilson Center
The Woodrow Wilson International Center for Scholars is a congressionally chartered, nonpartisan think tank in Washington, DC that bridges the world of ideas and the world of policy through research, analysis, and scholarship on global affairs.
https://www.wilsoncenter.org/
Theorem Labs
Theorem Labs is an AI and programming languages research lab that builds tools to formally verify the correctness of AI-generated code before it ships.
https://theoremlabs.com/
Thomas Liao
An independent AI safety researcher who created and maintains the Foundation Model Tracker, a website tracking the release of large AI models. He received a $15,000 grant from Open Philanthropy in 2024 to support this work.
https://thomasliao.com/
Threading the Needle
A Substack newsletter by Anton Leicht covering the political economy of AI progress, examining how institutions and political incentives interact with rapid technological change.
https://writing.antonleicht.me/
Timaeus
An AI safety research organization applying Singular Learning Theory and developmental interpretability to understand how capabilities and values emerge during neural network training.
https://timaeus.co/
Tony Blair Institute for Global Change
A not-for-profit policy institute that advises governments and political leaders worldwide on strategy, policy, and delivery, with a major focus on AI governance and technology adoption in the public sector.
https://institute.global/
Topos Institute
A nonprofit research institute applying category theory, topos theory, and type theory to develop mathematical foundations and open-source tools for collective sense-making, collaborative modeling, and shaping technology for public benefit.
https://topos.institute/
Touro College & University System
Touro is a large private Jewish university system headquartered in New York City, operating over 38 schools across the US and internationally. It received an Open Philanthropy grant to support Professor Gabriel Weil's legal research on using tort liability to mitigate catastrophic AI risks.
https://www.touro.edu/
Training For Good
Training for Good was an EA-incubated organization that upskilled talent for high-impact careers in AI policy and journalism, running the EU Tech Policy Fellowship and the Tarbell Fellowship before spinning both off as independent organizations.
https://www.trainingforgood.com/
Trajectory Labs
Trajectory Labs is a nonprofit coworking and events space in downtown Toronto dedicated to AI safety research and community building. It provides free workspace, weekly events, and a peer network to grow Toronto's AI safety ecosystem.
https://www.trajectorylabs.org/
Transformative Futures Institute
A nonprofit research institute applying foresight methods to anticipate and mitigate societal-scale risks from advanced artificial intelligence. TFI produces rigorous research for policymakers and decision-makers working to prevent catastrophic AI outcomes.
https://transformative.org/
Transluce
Transluce is an independent nonprofit AI research lab that builds open, scalable technology for understanding AI systems and steering them in the public interest.
https://transluce.org/
TruthfulAI
TruthfulAI is a nonprofit AI safety research organization based in Berkeley that studies situational awareness, deception, and hidden reasoning in large language models.
https://truthful.ai/
UCLA School of Law
A leading U.S. law school that conducts research on AI governance, policy, and safety through its PULSE program and Institute for Technology, Law & Policy.
https://law.ucla.edu/
UK AI Security Institute (UK AISI)
A UK government research organization that tests frontier AI systems, advances AI safety science, and informs policymakers about the risks and capabilities of advanced AI.
https://www.aisi.gov.uk/
Ulyssean PBC
Ulyssean builds integrated hardware and software to secure the data center infrastructure where frontier AI models are trained and deployed, protecting AI model weights against state-sponsored and intelligence-grade threats.
https://ulyssean.com/
Université de Montréal
Canada's second-largest research university by research volume, and the institutional home of leading AI safety researchers including Yoshua Bengio and David Krueger. UdeM anchors Montreal's position as a global hub for AI research and responsible AI development.
https://www.umontreal.ca/
University of British Columbia
Jeff Clune's AI safety and alignment research lab at UBC's Department of Computer Science, focused on deep learning, AI interpretability, and open-ended AI systems.
https://www.cs.ubc.ca/people/jeff-clune
University of California, Berkeley
UC Berkeley is a leading public research university and one of the world's foremost hubs for AI safety research, hosting CHAI, BAIR, CLTC, and other major centers focused on beneficial and safe AI development.
https://humancompatible.ai/
University of California, San Diego
UC San Diego is a major public research university conducting AI safety-relevant research including LLM persuasion evaluation, trustworthy machine learning, and safe autonomous systems.
https://ucsd.edu/
University of California, Santa Barbara
UC Santa Barbara is a major public research university whose Center for Responsible Machine Learning conducts AI safety-adjacent research on fairness, bias, transparency, and the societal impacts of AI systems.
https://ml.ucsb.edu/
University of California, Santa Cruz
UC Santa Cruz is a public research university whose Baskin School of Engineering conducts AI safety-relevant research, including adversarial robustness work supported by Open Philanthropy.
https://www.ucsc.edu/
University of Cambridge
One of the world's oldest and most prestigious universities, founded in 1209, and a major hub for AI safety and existential risk research through centers such as CSER and the Leverhulme Centre for the Future of Intelligence.
https://www.cam.ac.uk/
University of Chicago
A leading private research university on Chicago's South Side that hosts several AI safety and existential risk research programs, including the Existential Risk Laboratory (XLab), the Chicago Human+AI Lab, and the Harris School's Technology and Society Initiative.
https://www.uchicago.edu/
University of Illinois Urbana-Champaign
A major public research university hosting several prominent AI safety research groups, including work on formal neural network verification, adversarial robustness, and AI agent security benchmarks.
https://siebelschool.illinois.edu/
University of Louisville (Dr. Roman Yampolskiy's Research Group (Cybersecurity Lab))
A research lab at the University of Louisville directed by Dr. Roman Yampolskiy, one of the founders of the field of AI safety, conducting research on the theoretical limits of AI controllability, AI containment, and cybersecurity.
https://faculty.cse.louisville.edu/roman/
University of Maryland
The University of Maryland, College Park is a flagship public research university conducting extensive AI safety, trustworthy AI, and responsible AI research through multiple interdisciplinary institutes and centers.
https://umd.edu/
University of Massachusetts Amherst
UMass Amherst is a public research university whose AI safety-relevant work is centered in the SCALAR Lab, led by Associate Professor Scott Niekum, which focuses on safe and aligned machine learning and robotics.
https://people.cs.umass.edu/~sniekum/
University of Michigan
A major public research university in Ann Arbor, Michigan, hosting faculty conducting AI safety and alignment research funded by organizations including Open Philanthropy.
https://umich.edu/
University of Minnesota, Twin Cities
A major public research university and Minnesota's only land-grant institution, home to AI and NLP research relevant to AI safety including benchmarking of LLM capabilities on high-stakes professional tasks.
https://twin-cities.umn.edu/
University of Oxford
One of the world's oldest and most prestigious research universities, Oxford has been a central hub for AI safety and existential risk research through institutions like the Future of Humanity Institute (FHI) and the Oxford Martin AI Governance Initiative (AIGI).
https://www.ox.ac.uk/
University of Pavia
One of the world's oldest universities, home to the Center for Reasoning, Normativity and AI (CERNAI), which conducts AI safety and alignment research led by Prof. Federico Faroldi.
https://en.unipv.it/en
University of Pennsylvania
An Ivy League research university in Philadelphia with multiple programs relevant to AI safety, including formal verification of autonomous systems, AI governance research, and AGI international security analysis.
https://www.upenn.edu/
University of Southern California
Major private research university in Los Angeles that received SFF flexHEGs funding for hardware-enabled AI governance research, and hosts multiple labs and centers working on AI safety, alignment, and responsible AI development.
https://www.usc.edu/
University of Texas at Austin
A major public research university whose AI safety-relevant work is centered on the AI+Human Objectives Initiative (AHOI) and Scott Aaronson's computational-complexity-meets-alignment research group, both supported by Open Philanthropy.
https://utexas.edu/
University of Toronto
The University of Toronto is home to the Schwartz Reisman Institute for Technology and Society, a leading interdisciplinary research institute dedicated to ensuring that advanced AI develops safely, ethically, and in the public interest.
https://srinstitute.utoronto.ca/
University of Toronto & University of Michigan
A cross-institutional AI safety research collaboration between Zhijing Jin's Jinesis AI Lab at the University of Toronto and Rada Mihalcea's Language and Information Technologies (LIT) Lab at the University of Michigan, focused on multi-agent LLM safety, causal reasoning, and AI alignment.
https://zhijing-jin.com/home/
University of Tübingen
One of Germany's oldest and most prestigious research universities, founded in 1477 and designated a University of Excellence, hosting leading AI and machine learning research groups including the Tübingen AI Center and the Cluster of Excellence in Machine Learning.
https://uni-tuebingen.de/en/
University of Utah
The ARIA Lab (Aligned, Robust, and Interactive Autonomy Lab) at the University of Utah, led by Professor Daniel S. Brown, conducts research on human-AI alignment, reward learning, and AI safety. The lab develops algorithms and theory to enable AI systems to safely learn from and interact with humans.
https://aria-lab.cs.utah.edu/
University of Virginia
The University of Virginia is a major public research university in Charlottesville, Virginia, with faculty and programs conducting AI safety and alignment research.
https://www.virginia.edu/
University of Washington
A major public research university in Seattle with significant AI research programs, including responsible AI and AI safety-relevant work through its Paul G. Allen School of Computer Science & Engineering and the RAISE center.
https://www.washington.edu/
University of Waterloo
A leading Canadian research university founded in 1957, home to AI safety-relevant research programs including technical AI safety grants from Coefficient Giving and CIFAR's Canadian AI Safety Institute program.
https://uwaterloo.ca/
University of Wisconsin–Madison
A major public research university in Madison, Wisconsin, home to AI safety-relevant research including interpretability work in the Statistics department and student-led AI safety initiatives.
https://www.wisc.edu/
Upgradable
Upgradable is an applied research lab and life optimization service that helps effective altruists, AI safety researchers, and existential risk advocates lead more impactful lives.
https://www.upgradable.org/
Usman Anwar
AI safety researcher who completed his PhD at Cambridge's Computational and Biological Learning lab, focusing on alignment and monitorability of large language models.
https://uzman-anwar.github.io/
Vanderbilt University
A private research university in Nashville, Tennessee, that received SFF Fairness Track funding for research related to AI fairness, algorithmic equity, and the societal implications of AI systems.
https://www.vanderbilt.edu/
Victoria Krakovna's Blog
Personal blog of Victoria Krakovna, Senior Research Scientist at Google DeepMind and co-founder of the Future of Life Institute, covering AI alignment research and related topics.
https://vkrakovna.wordpress.com/
Virtue AI
Virtue AI is an AI-native security and compliance platform that helps enterprises secure their AI systems and agents against threats like prompt injection, hallucinations, and data poisoning. It was founded in 2024 by leading AI safety researchers Bo Li, Dawn Song, Carlos Guestrin, and Sanmi Koyejo.
https://www.virtueai.com/
Vista Institute for AI Policy
The Vista Institute for AI Policy builds AI law and policy as an academic field and develops talent for careers in AI governance, with a focus on promoting risk-mitigating U.S. regulation.
https://vistainstituteai.org/
Wavefront Security
Wavefront Security provides at-cost cybersecurity services to nonprofits and policy organizations in the AI safety, biosecurity, and global catastrophic risk space.
https://www.wavefrontsecurity.com/
WhiteBox Research
WhiteBox Research is a Manila-based nonprofit that trains early-career researchers in mechanistic interpretability and AI safety, with a focus on building research capacity in Southeast Asia.
https://www.whiteboxresearch.org/
Worcester Polytechnic Institute & University of Massachusetts Amherst
A collaborative hardware security research effort between WPI and UMass Amherst focused on developing tamper-detection and verification mechanisms for semiconductor chips, with applications to AI governance and hardware-enabled guarantees.
Workshop Labs
Workshop Labs is a public benefit corporation building billions of personalized, privacy-preserving AI models with a mission to keep humans empowered as AI advances.
https://workshoplabs.ai/
World Economic Forum
The World Economic Forum is an international non-governmental organization that convenes global leaders from business, government, academia, and civil society to address major challenges including AI governance and emerging technology risks.
https://www.weforum.org/
Wytham Abbey
A historic manor house near Oxford acquired by Effective Ventures Foundation in 2022 as a dedicated conference and retreat venue for the AI safety and effective altruism communities, which operated for approximately two years before being sold in 2025.
https://www.wythamabbey.org/
xAI
Elon Musk's AI company, founded in 2023, focused on building maximally truth-seeking AI and understanding the nature of the universe. Creator of Grok, an AI chatbot integrated with X (formerly Twitter).
https://x.ai/
Yale University
Yale University is a private Ivy League research university in New Haven, Connecticut, home to several AI safety and governance research programs, including the Schmidt Program on AI and National Power, the Center for Algorithms, Data, and Market Design (CADMY), and the Digital Ethics Center.
https://www.yale.edu/