Economy & Markets
6 min read
New European AI Security Standard ETSI EN 304 223 Launched
Help Net Security
January 19, 2026•3 days ago

AI-Generated SummaryAuto-generated
The European Telecommunications Standards Institute (ETSI) has released a new standard, ETSI EN 304 223, establishing baseline cybersecurity requirements for AI models and systems. It addresses AI-specific risks like data poisoning and prompt injection across the entire AI lifecycle, from design to end-of-life. The standard aims to provide a shared security baseline for vendors, integrators, and operators.
The European Telecommunications Standards Institute (ETSI) has released a new European Standard that addresses a growing concern for security teams working with AI. The standard, ETSI EN 304 223, sets baseline cybersecurity requirements for AI models and systems intended for real-world use.
Addressing security risks specific to AI
ETSI EN 304 223 treats AI as a distinct category of technology from a security perspective. AI systems introduce risks tied to their data pipelines, model behavior, and operational environments. These include data poisoning, model obfuscation, indirect prompt injection, and weaknesses linked to complex training and deployment practices.
ETSI EN 304 223 brings established cybersecurity practices together with measures designed for these AI-specific risks. The result is a structured set of requirements that security teams can apply to AI models and systems across their operational lifespan.
Lifecycle-based requirements
ETSI EN 304 223 defines 13 principles and requirements across five phases of the AI lifecycle:
Secure design
Secure development
Secure deployment
Secure maintenance
Secure end of life
Each phase aligns with internationally recognized AI lifecycle models. References to related standards and publications appear at the start of each principle to support consistent implementation and alignment with existing guidance across the AI ecosystem.
Relevance across the AI supply chain
The scope of the standard covers AI systems that rely on deep neural networks, including generative AI. It targets systems intended for deployment in operational environments. Vendors, system integrators, and operators can use the standard as a shared baseline for AI security practices.
Development of ETSI EN 304 223 reflects input from international organizations, government bodies, and experts from the cybersecurity and AI communities. This collaborative approach supports applicability across multiple industries and deployment contexts.
Rate this article
Login to rate this article
Comments
Please login to comment
No comments yet. Be the first to comment!
