Compassion in Machine Learning
Compassion in Machine Learning (CaML) is an AI safety research organization focused on aligning artificial intelligence with the well-being of all sentient beings. Its primary approach uses synthetic document finetuning to embed compassionate values early in training, producing behavioral shifts that persist through subsequent fine-tuning. CaML also develops benchmarks to measure AI compassion, including the Animal Harm Bench, MORU (Moral Reasoning Under Uncertainty), and the CompassionBench leaderboard, and researches moral open-mindedness to help AI systems embrace ethical uncertainty while prioritizing sentient welfare.
Funding Details
- Annual Budget: $159,000
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: $159,000
- Funding Raised to Date: $128,000
- Fiscal Sponsor: Players Philanthropy Fund, Inc.
Theory of Change
CaML believes that the values embedded in AI systems during pretraining are critical and persistent, surviving subsequent fine-tuning stages. By generating high-quality synthetic pretraining data that encodes compassion toward all sentient beings, it aims to shift the baseline moral orientation of future AI models before capabilities outrun alignment efforts. Its benchmarks (Animal Harm Bench, MORU, CompassionBench) create measurable standards that frontier labs can adopt to evaluate and improve how their models treat sentient welfare considerations. By targeting the pretraining stage rather than relying on post-hoc alignment, and by encouraging moral humility and open-mindedness rather than rigid value lock-in, CaML aims to produce AI systems that are robustly compassionate in ways that scale to superintelligence.
Grants Received
- From Survival and Flourishing Fund
Details
- Last Updated: Apr 2, 2026, 9:48 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC