Technology Strategy Roleplay
About
Technology Strategy Roleplay (TSR) is a charitable incorporated organisation (CIO) registered with the Charity Commission in England and Wales (charity number 1200928), incorporated on November 7, 2022. Its primary project is Intelligence Rising, a strategic role-playing game that simulates the global development of transformative AI technologies, helping decision-makers understand the risks, tensions, and trade-offs that emerge in the competitive environment of AI development.

The concept originated in July 2017, when Dr. Shahar Avin participated in a three-person AI futures roleplay exercise at the Future of Humanity Institute (FHI) at Oxford. Over the next two years, Shahar Avin, James Fox, and Ross Gruetzemacher iterated on that unstructured exercise at the Centre for the Study of Existential Risk (CSER) at the University of Cambridge. In September 2019, with a seed grant from the Long-Term Future Fund and a broader set of collaborators, a design sprint at FHI expanded the freeform version into the first tabletop version of Intelligence Rising. The charity was formally established in 2022 to oversee ongoing development and deployment.

In the game, participants embody characters such as elected officials and their AI advisors in major states, and CEOs and their executive teams at leading technology firms. Each round of actions represents one to two years in-game, and exercises frequently span ten to fourteen years. Adversarial interactions between teams with conflicting objectives are moderated by trained facilitators with backgrounds in AI strategy. Standard sessions last about four hours with 8-16 participants, though formats can scale to 80 participants for conferences.
TSR's trustees include Shahar Avin (Systemic Safety Fund Lead at the UK AI Security Institute), Jessica Bland (Deputy Director at CSER, Cambridge), and Peter Glenday (Director of Programmes and Research at the School of International Futures). The operational team includes a Chief Operating Officer, a Senior Game Designer, a Partnerships Lead, a Training Lead, a Training Associate, and an Operations Associate, alongside a network of facilitators drawn from AI safety, governance, and policy specialists.

Notable clients and participants have included the Oxford University AIMS CDT, Cambridge's Leverhulme Centre, ML Alignment and Theory Scholars (MATS), and Redwood Research, as well as various government teams, tech firms, and think tanks. The project has produced peer-reviewed research, including 'Exploring AI Futures Through Role Play' (AAAI/ACM Conference on AI, Ethics, and Society, 2020) and 'Strategic Insights from Simulation Gaming of AI Race Dynamics' (2024).

For the financial year ending April 2025, TSR reported total income of GBP 306,497 and total expenditure of GBP 220,105 to the UK Charity Commission; for the prior year ending April 2024, income was GBP 231,685 and expenditure was GBP 192,548. The charity has no trading subsidiaries, no trustees receive remuneration, and no employees earn over GBP 60,000. An 84% participant recommendation rate was reported, based on a University of Cambridge evaluation survey of 50 respondents across 14 games run between 2020 and 2023.
Theory of Change
TSR believes that decision-makers in government, industry, and civil society need visceral, experiential understanding of the dynamics that emerge from competitive AI development to make well-informed policy and safety decisions. By immersing participants in realistic strategic scenarios where they face the tensions between racing to develop AI and investing in safety, cooperation, and governance, Intelligence Rising builds intuitive understanding of how individual decisions can lead to collective risks. The structured wargaming format lets participants stress-test their assumptions about AI futures, experience the consequences of failing to invest in safety and cooperation, and develop stronger strategic foresight. This experiential learning is expected to translate into better-informed decisions when participants encounter analogous situations in their real-world roles, ultimately contributing to safer and more cooperative AI governance outcomes.
Details
- Start Date
- -
- End Date
- -
- Expected Duration
- -
- Funding Raised to Date
- -
- Last Updated
- Apr 3, 2026, 1:21 AM UTC
- Created
- Apr 3, 2026, 1:21 AM UTC