Aysja Johnson
Bio
Aysja Johnson is an AI safety researcher and policy analyst focused on AI lab scaling policies and responsible scaling frameworks. Her background is in cognitive science: she completed undergraduate studies in Mathematics at UC Berkeley and graduate work in NYU's Computation and Cognition Lab under Todd Gureckis, where she studied human sense-making, open-ended reasoning, and human-machine intelligence.

In 2022 she was hired as a Research Analyst at AI Impacts, selected from over 250 applicants, where she contributed research on comparative cognition and technology adoption patterns relevant to AI risk. In 2023 she was a PIBBSS Summer Fellow, working on a project titled 'Towards a Science of Abstraction', which explored why natural abstractions are favored by agents and what this implies for AI alignment.

She received a Long-Term Future Fund stipend for 1.5 years to conduct a thorough investigation and analysis of AI lab scaling policies, and has published critical analyses on LessWrong arguing that current responsible scaling policies lack rigor, fail to specify measurable evidence thresholds, and rely on behavioral evaluations that are insufficient on their own for safety assurance. She is active on LessWrong under the handle 'aysja' and has co-authored posts on AI lab governance topics, including OpenAI's non-disparagement practices.
Links
- Personal Website: https://medium.com/@aysjajohnson
- Twitter / X
- LessWrong: aysja
Grants
from Long-Term Future Fund
Details
- Last Updated: Mar 22, 2026, 2:39 PM UTC
- Created: Mar 20, 2026, 2:48 AM UTC