Scott Viteri
Bio
Scott Viteri is a CS PhD candidate at Stanford University's Center for Automated Reasoning, admitted in Autumn 2019 and advised by Prof. Clark Barrett. He holds a B.S. in Computer Science and Electrical Engineering from MIT (2018), and before starting his PhD he worked on interactive theorem proving at CMU with Simon DeDeo, publishing research on abduction in mathematics in the journal Cognition. His research focus has evolved from formal verification and programming languages to AI alignment, driven by his view that advanced AI poses a substantial existential risk. His core work involves training language models to produce causally grounded chain-of-thought reasoning via reinforcement learning, as demonstrated in his 2024 paper "Markovian Transformers for Informative Language Modeling" (arXiv 2404.18988), which achieved large gains on QA benchmarks. He has also received a grant from the Long-Term Future Fund to research a novel method for training prosociality into large language models, and Open Philanthropy recommended a grant of $153,820 to Stanford University to support his and Barrett's AI alignment research.
Links
- Personal Website: https://scottviteri.com/
- Twitter / X
- LessWrong: scottviteri
Grants
- Long-Term Future Fund
Details
- Last Updated: Mar 23, 2026, 1:06 AM UTC
- Created: Mar 20, 2026, 2:58 AM UTC