Aligned AI is an Oxford, England-based applied alignment research company co-founded in late 2021 by CEO Rebecca Gorman and CTO Dr. Stuart Armstrong (formerly of the Future of Humanity Institute at Oxford). The company's core focus is value extrapolation — building AI systems that can correctly extend human values and intent to novel situations not seen during training. Their proprietary technologies include ACE (Algorithm for Concept Extrapolation), EquitAI (gender bias mitigation), ClassifAI, and faAIr (a tool for measuring gender bias in large language models). Aligned AI operates as a for-profit benefit corporation, aiming to develop commercially viable alignment tools that can be distributed broadly across the AI industry.
Funding Details
- Annual Budget
- -
- Monthly Burn Rate
- -
- Current Runway
- -
- Funding Goal
- -
- Funding Raised to Date
- $730,000
- Fiscal Sponsor
- -
Theory of Change
Aligned AI's theory of change centers on the premise that solving value extrapolation — the ability for AI to correctly extend human values to novel contexts not present in training data — is necessary and nearly sufficient for solving the alignment problem broadly. Their approach is explicitly aimed at scalability and distribution: rather than focusing solely on frontier AI labs (which may develop internal safety measures), Aligned AI develops alignment tools intended for widespread adoption across the AI industry, analogous to how CAD software improved engineering safety broadly. By making alignment solutions commercially viable and accessible to all AI developers, the company aims to raise the safety baseline for AI systems deployed in high-stakes scenarios such as autonomous vehicles, robotics, and content moderation.
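The failure mode value extrapolation targets can be shown with a toy sketch. This is a hypothetical illustration of the underlying concept-extrapolation problem, not Aligned AI's actual ACE algorithm: two features are perfectly correlated in the training data, so two classifiers that latch onto different features are indistinguishable during training but diverge on novel inputs. Surfacing that divergence, rather than silently committing to one extrapolation, is the core idea.

```python
# Toy sketch of concept extrapolation (hypothetical example, not ACE).
# Each sample is ((intended_feature, spurious_feature), label); the two
# features agree on every training point, so either one explains the labels.

train = [((1, 1), 1), ((0, 0), 0), ((1, 1), 1), ((0, 0), 0)]

h_intended = lambda x: x[0]  # hypothesis keyed to the intended concept
h_spurious = lambda x: x[1]  # hypothesis keyed to the spurious feature

# Both hypotheses fit the training data perfectly -- training alone
# cannot distinguish them.
assert all(h_intended(x) == y and h_spurious(x) == y for x, y in train)

# In deployment the features decorrelate; the hypotheses now disagree.
novel = [(1, 0), (0, 1)]
for x in novel:
    a, b = h_intended(x), h_spurious(x)
    if a != b:
        print(f"input {x}: ambiguous ({a} vs {b}) -> defer to human feedback")
    else:
        print(f"input {x}: confident prediction {a}")
```

On both novel inputs the hypotheses disagree, so a system doing naive extrapolation would pick one arbitrarily; a concept-extrapolating system instead detects the ambiguity and defers.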
Grants Received
- From the Long-Term Future Fund
Projects
No linked projects.
People
No linked people.
Discussion
No comments yet.
Details
- Last Updated
- Apr 2, 2026, 10:10 PM UTC
- Created
- Mar 19, 2026, 10:30 PM UTC
