AI Safety Ideas
About
AI Safety Ideas is a collaborative research platform at aisafetyideas.com, created by Apart Research as a public tool for collecting and sharing AI safety research ideas. The platform launched in open alpha in October 2022, announced on the Effective Altruism Forum and LessWrong by Apart Research and Esben Kran.

The platform addresses two gaps in the AI safety research ecosystem: it provides a scalable, collaborative way to prioritize and work on specific safety agendas, and it offers newcomers a low-barrier entry point that sits between formal courses and intensive programs such as MLAB or SERI MATS. It draws inspiration from collaborative open research models like EleutherAI and CarperAI. AI Safety Ideas aggregates project ideas from public sources such as Alignment Forum posts alongside user-submitted ideas and testable hypotheses. Planned features include bounty systems that let funders directly fund specific research results, community vetting mechanisms, and tools for connecting researchers with shared interests. The platform is described as "a public Apart tool" and is maintained by the Apart Research team with support from volunteers and collaborators.

Apart Research, the organization behind the platform, is a nonprofit AI safety research institute founded by Esben Kran. Apart is based in Copenhagen, Denmark and operates globally, with hubs in London, San Francisco, and Paris. It focuses on accelerating AI safety research through hackathons, fellowships, and mentorship programs, and has engaged over 5,000 participants across 45 events, produced 22 peer-reviewed publications, and placed researchers at more than 20 organizations.
Theory of Change
AI Safety Ideas aims to reduce existential risk from AI by lowering the barrier to entry for AI safety research. By offering an accessible, crowdsourced repository of well-defined project ideas and testable hypotheses, the platform routes new technical talent into productive AI safety work. A planned bounty and hypothesis-testing system would let funders directly fund specific results, creating incentives for open evaluation of research agendas. The platform's collaborative structure is intended to scale the AI safety research community by providing an on-ramp for researchers who would otherwise lack structured entry points.
Details
- Start Date: -
- End Date: -
- Expected Duration: -
- Funding Raised to Date: -
- Last Updated: Apr 3, 2026, 1:18 AM UTC
- Created: Apr 3, 2026, 1:18 AM UTC