Advocacy Grants
About
The Nonlinear AI Safety Advocacy Grants program is a funding initiative operated by Nonlinear (nonlinear.org), an AI safety nonprofit founded in 2021 by Kat Woods (President) and Emerson Spartz. Nonlinear's overarching mission is to research and implement high-leverage interventions to reduce existential risk from artificial superintelligence. The advocacy grants program is one of several tools the organization uses alongside incubation, the Nonlinear Network, and the (now dormant) Nonlinear Support Fund.

The program targets individuals and groups who can build public awareness of AI risks, specifically extinction risk (x-risk) and suffering risk (s-risk), or who advocate for pausing the development of frontier AI systems until safety can be guaranteed. Nonlinear looks for grantees with proven reach (measured by views, shares, protest turnout, and media coverage), the ability to ship quickly and frequently, and cost-effectiveness relative to their track record. It focuses particularly on online outreach professionals (video creators, social media organizers) and protest organizers experienced in rapidly mobilizing large groups. The program is currently invitation-only: Nonlinear identifies and approaches potential grantees directly rather than running open application rounds; previously, applications were accepted approximately every four months.

Nonlinear itself has received approximately $599,000 in general support from the Survival and Flourishing Fund (with Jaan Tallinn as donor) and $250,000 from the now-defunct Future Fund. The organization is led by a small team and operates remotely.
Theory of Change
Nonlinear believes that transformative AI poses a genuine extinction-level threat and that public awareness and political pressure are critical levers for slowing or pausing development until safety can be guaranteed. By funding high-reach advocates, those who can generate viral content, organize protests, and mobilize public sentiment, the program aims to shift the political and cultural environment around AI development. The theory is that a larger, more vocal public movement can pressure governments and AI labs to adopt safety-first policies, buying time for alignment research and governance frameworks to mature.
Details
- Start Date: —
- End Date: —
- Expected Duration: —
- Funding Raised to Date: —
- Last Updated: Apr 3, 2026, 1:17 AM UTC
- Created: Apr 3, 2026, 1:17 AM UTC