AI Clarity
About
AI Clarity is the scenario planning research program of Convergence Analysis, a nonprofit research organization and think tank dedicated to designing a safe and flourishing future for humanity in a world with transformative AI. AI Clarity is one of three programs at Convergence, alongside AI Governance and AI Awareness. The program was formally introduced in April 2024 with the publication of "AI Clarity: An Initial Research Agenda" on the EA Forum and LessWrong.

Its research method centers on scenario planning, an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty. The program combines two primary activities: exploring scenarios, by mapping possible AI development pathways and identifying key differentiating parameters, and evaluating strategies, by assessing how various safety interventions perform across different scenarios.

In 2024, AI Clarity published over 170 pages of research across 10 articles and reports. Key accomplishments include addressing gaps in foundational knowledge around AI scenario modeling, formalizing theories of victory for AI safety work, and analyzing consensus on timelines to AGI.

The program hosted Threshold 2030, a two-day conference held October 30-31, 2024, in Boston, Massachusetts. The event brought together 30 leading economists, AI policy experts, and professional forecasters from organizations including Google, OpenAI, DeepMind, MIT, Stanford, the UN, and the OECD to evaluate the economic impacts of frontier AI technologies by 2030. A 200-page conference report was published in February 2025.

AI Clarity also established the AI Scenarios Network, an informal study group of more than 30 researchers from across civil society organizations who meet regularly to share ideas and critique each other's scenario work. It is the first cross-organizational coalition of AI scenario researchers.

The AI Clarity team is led by Dr. Justin Bullock, a senior researcher who also edited the Oxford Handbook of AI Governance. Other team members include Corin Katzke (a Convergence fellow and founding member of the AI Clarity team, who also serves as lead writer for the AI Safety Newsletter at the Center for AI Safety), Zershaaneh Qureshi, and David Kristoffersson (CEO and co-founder of Convergence Analysis).

Convergence Analysis itself began in 2017 as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain, was incorporated as a 501(c)(3) nonprofit in 2018, and relaunched in early 2024 with a team of about 10 academics and professionals spanning ethics, technical AI alignment, AI governance, AI safety, and hardware research. The organization operates internationally across the US, UK, Canada, Portugal, and Estonia.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Mar 19, 2026, 6:22 PM UTC
- Created
- Mar 19, 2026, 6:22 PM UTC