Technical Alignment Impossibility Proofs
Technical Alignment Impossibility Proofs is a solo research project by Alexander Bistagne, a UCSC graduate and fellow of the Ronin Institute for Independent Scholarship, based in Los Angeles. The project aims to establish formal mathematical proofs that certain approaches to AI alignment are impossible in the theoretical-computer-science sense. Its primary published output is "Alignment is Hard: An Uncomputable Alignment Problem," which argues that testing alignment is coRE-hard under specific formal conditions. The project has also explored multi-agent alignment structures under the heading "Control By Committee." It received a $176,070 grant from the Survival and Flourishing Fund in the 2022-H2 round, channeled through the Ronin Institute as fiscal sponsor.
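To give the flavor of the claim, the block below is a hedged, textbook-style sketch of why a behavioral alignment property sits in coRE and why testing it is coRE-hard. It is not the paper's own construction or its formal conditions; the notation ($F$, $A_{P,w}$, $f$, $a$) is introduced here purely for illustration.

```latex
% Illustrative framing only (standard computability theory, not the
% paper's construction). Say an agent A is aligned iff it never emits
% a forbidden output drawn from a fixed set F:
\[
  \mathrm{Aligned}(A) \;\iff\; \forall x.\; A(x) \notin F
\]
% A universally quantified property of a computable function is a
% Pi_1 (coRE) condition: a counterexample is checkable, membership is
% not. For hardness, reduce from non-halting: given a machine P and
% input w, build the total, computable agent
\[
  A_{P,w}(x) \;=\;
  \begin{cases}
    f & \text{if } P \text{ halts on } w \text{ within } |x| \text{ steps},\\
    a & \text{otherwise,}
  \end{cases}
\]
% where f is a fixed forbidden output and a is a fixed safe one. Then
% Aligned(A_{P,w}) holds iff P never halts on w, so any decision
% procedure for Aligned would decide the halting problem.
```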
Funding Details
- Annual Budget: Not specified
- Monthly Burn Rate: Not specified
- Current Runway: Not specified
- Funding Goal: Not specified
- Funding Raised to Date: $176,070
- Fiscal Sponsor: Ronin Institute for Independent Scholarship Incorporated
Theory of Change
The project's theory of change is that formal impossibility results can redirect the field's efforts more productively. By proving that certain approaches to alignment testing are computationally intractable or undecidable, the research aims to show that aligning black-box AI agents may be fundamentally impossible. If so, researchers would have reason to shift away from post-hoc alignment verification toward alternatives such as AI architectures that are provably aligned by construction, as the sketch below illustrates.
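The reduction sketched earlier can be phrased as executable pseudocode. The following is a hypothetical illustration, not code from the project: `make_agent`, `is_aligned`, and `decides_halting` are invented names, and `is_aligned` is deliberately left as a stub, since the argument shows no total implementation of it can exist.

```python
# Hypothetical sketch of the black-box testing obstacle; none of these
# names (make_agent, is_aligned, decides_halting) come from the paper.

FORBIDDEN = "forbidden_action"  # an output the alignment criterion rules out

def make_agent(program, inp):
    """Wrap (program, inp) as an agent that emits the forbidden action
    exactly when the wrapped program halts on its input."""
    def agent(_observation):
        program(inp)        # simulate; loops forever iff `program` never halts
        return FORBIDDEN    # reached only if the simulation halted
    return agent

def is_aligned(agent):
    """Assumed total decider for 'agent never emits a forbidden action'.
    The reduction shows no such procedure can exist, so this is a stub."""
    raise NotImplementedError("uncomputable: deciding this solves halting")

def decides_halting(program, inp):
    """If `is_aligned` were computable, this would decide the halting
    problem, which is impossible."""
    return not is_aligned(make_agent(program, inp))

if __name__ == "__main__":
    def loops_forever(_):
        while True:
            pass

    # This agent is aligned (it never returns FORBIDDEN), but no
    # terminating tester can certify that without solving non-halting.
    agent = make_agent(loops_forever, None)
    print("constructing the agent is easy; deciding alignment is not")
```

The universal quantifier over inputs is what blocks black-box testing, which is why the theory of change points toward architectures that are aligned by construction rather than verified after the fact.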
Grants Received
$176,070 from Survival and Flourishing Fund (2022-H2 round), via fiscal sponsor Ronin Institute for Independent Scholarship.
Details
- Last Updated: Mar 19, 2026, 8:14 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC