INESIA (Institut national pour l'évaluation et la sécurité de l'intelligence artificielle) was inaugurated on January 31, 2025, by the French government to give France sovereign capacity to evaluate advanced AI systems. Rather than a new legal entity, INESIA is a coordination structure jointly led by the General Secretariat for Defence and National Security (SGDSN) and the Directorate General for Enterprise (DGE), federating the expertise of four established institutions: ANSSI (cybersecurity), Inria (digital research), LNE (metrology and testing), and PEReN (digital regulation expertise). Its work spans three main areas: supporting the implementation of AI regulation, analyzing systemic national security risks from advanced AI, and evaluating the performance and reliability of AI models. INESIA participates in the international network of AI Safety Institutes alongside counterparts from the US, UK, Canada, South Korea, Japan, Kenya, Singapore, and the EU AI Office.
Funding Details
- Annual Budget: -
- Monthly Burn Rate: -
- Current Runway: -
- Funding Goal: -
- Funding Raised to Date: -
- Fiscal Sponsor: -
Theory of Change
INESIA's theory of change holds that advanced AI systems pose systemic risks — to national security, to individual citizens, and to societal trust — that can only be managed through rigorous, independent technical evaluation and strong regulatory frameworks. By federating France's leading technical institutions (ANSSI, Inria, LNE, PEReN), INESIA develops the sovereign capacity to assess whether AI models are safe, reliable, and compliant with regulation. This evaluation capacity then informs both domestic regulatory authorities implementing the EU AI Act and France's contributions to international AI governance through the AI Safety Institutes network. Internationally, INESIA strengthens norms and practices for AI safety evaluation by participating in joint testing exercises and sharing methods with allied institutes. The causal chain runs from technical evaluation expertise to evidence-based regulation to reduced systemic risk from advanced AI systems.
Grants Received – no grants recorded
Projects – no linked projects
People – no linked people
Details
- Last Updated: Apr 7, 2026, 8:14 PM UTC
- Created: Apr 7, 2026, 6:28 PM UTC