TamperSec develops secure physical enclosures that protect AI hardware against nanometer-scale physical tampering. Its product is a retrofittable box for existing AI servers that detects hardware modifications at the nanometer scale and automatically deletes a secret attestation key upon tampering, preventing adversaries from extracting confidential data, stealing AI model weights, or disabling on-chip governance mechanisms. The company operates at the intersection of hardware security and AI governance; its technology is closely related to Flexible Hardware-Enabled Guarantee (FlexHEG) mechanisms that could underpin international AI safety treaties and compliance verification.
Funding Details
- Annual Budget: —
- Monthly Burn Rate: —
- Current Runway: —
- Funding Goal: $1,500,000
- Funding Raised to Date: $461,000
- Fiscal Sponsor: —
Theory of Change
TamperSec believes that effective international AI governance requires trustworthy hardware-level verification mechanisms. Without physical tamper protection, adversaries can manipulate AI chips at the nanometer scale to steal model weights, extract confidential data, or disable governance mechanisms, undermining any software-based compliance framework. By developing secure, retrofittable enclosures that detect physical tampering and destroy attestation keys upon breach, TamperSec aims to provide the hardware foundation for FlexHEG mechanisms. These mechanisms could enable privacy-preserving verification of compliance with international AI safety agreements, making it possible to enforce compute limits, conduct model evaluations, and verify safety-protocol adherence at the hardware level. The causal chain runs from tamper-proof hardware, to trustworthy compliance verification, to enforceable international AI governance treaties, ultimately reducing the risk of uncontrolled AI development.
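The core mechanism described above, an enclosure that answers attestation challenges only while its key survives and zeroizes that key the moment tampering is sensed, can be illustrated with a minimal conceptual sketch. This is not TamperSec's implementation; the class, method names, and the use of HMAC-SHA256 in place of real hardware attestation are all illustrative assumptions.

```python
import hashlib
import hmac
import secrets


class TamperResponsiveEnclosure:
    """Toy model of a tamper-responsive enclosure (illustrative only).

    Holds a secret attestation key and destroys it when a tamper
    event is reported, so later attestation attempts must fail.
    """

    def __init__(self):
        self._key = secrets.token_bytes(32)  # secret attestation key
        self._tampered = False

    def attest(self, challenge: bytes):
        """Answer a verifier's challenge by signing it with the key.

        Returns an HMAC tag, or None once the key has been destroyed.
        """
        if self._tampered:
            return None
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def on_tamper_detected(self):
        """Simulated sensor interrupt: overwrite the key before it can be read."""
        self._key = b"\x00" * 32
        self._tampered = True


# Before tampering, the enclosure can prove possession of its key;
# afterwards, attestation fails and the verifier knows the box was opened.
enclosure = TamperResponsiveEnclosure()
challenge = b"verifier-nonce"
tag_before = enclosure.attest(challenge)  # 32-byte HMAC tag
enclosure.on_tamper_detected()
tag_after = enclosure.attest(challenge)   # None: key destroyed
```

A remote verifier that receives no valid tag for its challenge can then treat the hardware as compromised, which is what makes the scheme useful for compliance verification rather than just theft prevention.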
Grants Received
- From Survival and Flourishing Fund
Projects
No linked projects.
People
No linked people.
Details
- Last Updated: Apr 2, 2026, 9:55 PM UTC
- Created: Mar 18, 2026, 11:18 PM UTC