Aether is a small independent AI safety research group dedicated to conducting impactful research aimed at ensuring the responsible development and deployment of AI technologies. The organization's core work centers on chain-of-thought monitoring, hidden reasoning in LLMs, and the safety of foundation model agents. Founded in 2024, Aether operates as a lean, full-time, in-person team currently based at Trajectory Labs in Toronto, Canada, and is advised by researchers from institutions including Apollo Research, Astera Institute, Google DeepMind, and the University of Toronto.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
Aether believes that LLM agents represent a near-term, high-stakes safety challenge that is underserved relative to its importance. By producing rigorous technical research on how to monitor, align, and control LLM agents — particularly by improving chain-of-thought monitorability and understanding hidden reasoning — Aether aims to generate insights that can be adopted by frontier AI labs, governments, and the broader AI safety field. The goal is to make advanced AI systems more legible and controllable, reducing the risk that autonomous AI agents pursue goals misaligned with human values.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Mar 20, 2026, 3:26 AM UTC
- Created: Mar 19, 2026, 10:30 PM UTC