Coordinal Research (formerly Lyra Research, merged with Vectis AI) is an early-stage organization focused on accelerating research on safe and aligned AI systems. Founded by Ronak Mehta and Jacques Thibodeau through the Catalyze Impact AI safety incubator, the organization pursues two complementary approaches: building tools that speed up human researchers' progress on alignment problems, and developing automated research systems that can assist alignment work today. Their core platform accepts research plans or tasks and autonomously conducts background research, implements software, evaluates experimental results, and writes reports. They have also curated over 400 open questions in AI safety for use with their automated scaffold.
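As a rough illustration of the plan-to-report loop described above, a minimal sketch in Python might look like the following; every stage name, type, and the example task are hypothetical placeholders for exposition, not Coordinal's actual interface or implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: stage names and data shapes are assumptions,
# not Coordinal's real pipeline.

@dataclass
class ResearchTask:
    plan: str
    notes: list = field(default_factory=list)

def background_research(task: ResearchTask) -> ResearchTask:
    # Placeholder: gather prior work relevant to the plan.
    task.notes.append(f"Collected references for: {task.plan}")
    return task

def implement(task: ResearchTask) -> str:
    # Placeholder: produce experiment code from the plan and notes.
    return f"# experiment code for: {task.plan}"

def evaluate(code: str) -> dict:
    # Placeholder: run the experiment and summarize results.
    return {"code": code, "metrics": {"example_metric": 0.0}}

def write_report(task: ResearchTask, results: dict) -> str:
    # Placeholder: turn notes and results into a short report.
    lines = ["# Report", f"Plan: {task.plan}", *task.notes, str(results["metrics"])]
    return "\n".join(lines)

def run_pipeline(plan: str) -> str:
    task = background_research(ResearchTask(plan=plan))
    results = evaluate(implement(task))
    return write_report(task, results)

if __name__ == "__main__":
    # Hypothetical example task.
    print(run_pipeline("Probe a toy model for reward-hacking behavior"))
```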
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: $110,000
- Fiscal Sponsor: -
Theory of Change
Coordinal believes the primary bottleneck to progress in AI safety is not a shortage of intelligence or ideas, but the inability to distinguish high-quality research from noise at scale (what they call a lack of "taste and verification"). By building automated research scaffolds and "taste accelerator" tools, they aim to dramatically increase the throughput and quality of alignment research output. If AI safety research can be accelerated and automated, the gap between AI capabilities research and safety research can be closed, increasing the probability that advanced AI systems are safe and beneficial. Their causal chain runs: better automation tools -> more alignment experiments conducted per researcher-hour -> faster progress on key open problems -> safer AI systems at deployment time.
Grants Received
No grants recorded.
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:52 PM UTC
- Created: Mar 19, 2026, 10:30 PM UTC