UN Agencies Demand Urgent Child-Focused AI Governance
The Eastleigh Voice
January 20, 2026

AI-Generated Summary
UN agencies urge states to adopt child-centered AI governance frameworks. They highlight that AI technologies are outpacing protections for children's rights and safety. The statement calls for policies to criminalize AI-related child abuse and exploitation. It emphasizes integrating child rights into AI strategies and establishing accountability mechanisms for AI system violations.
As artificial intelligence (AI) becomes increasingly embedded in education, communication and online platforms, the International Telecommunication Union (ITU) and its UN partners are urging states to urgently adopt child-centred AI governance frameworks. They warn that rapid technological advances are outpacing existing protections for children’s rights, safety and privacy.
In a joint statement, the agencies noted that most AI-supported tools and applications — along with their underlying models, techniques and systems — are not designed with children’s well-being in mind. This, they said, underscores the need to streamline existing technology regulations to better align them with the protection of children’s rights.
The statement urged states to promote child-rights-based AI governance through policies, programmes and legislation that respect, protect and promote children’s rights. It also called on governments to criminalise, investigate and prosecute all forms of online child sexual abuse and exploitation involving AI systems, including AI-generated abuse material, child labour and sexual exploitation, as well as online grooming.
Protection from harmful content
Measures to protect children from harmful content, the agencies stressed, must comply with international human rights law, respect freedom of expression, and be appropriate to children’s evolving capacities. At the same time, states were urged to protect children from violations committed by third parties, including businesses operating within their jurisdictions.
“States can consider requiring business enterprises, particularly for AI-driven platforms, including social media, educational technologies, video streaming and gaming, to adopt age assurance mechanisms, consistent with data protection and safeguarding requirements, where such mechanisms are necessary and proportionate to ensure children are protected from online harms related to AI,” the statement says.
The agencies also called for the establishment of accountability mechanisms for violations of children’s rights caused by AI systems at any stage of their lifecycle. This includes providing child-friendly mechanisms for children, parents and caregivers to report concerns, as well as ensuring responsibility for addressing reported issues.
Addressing UN bodies and other international organisations, the statement called for the rights of the child to be included in an explicit, systematic and sustained manner in all internal and external policies, strategies, plans and approaches related to AI. It further encouraged civil society organisations to actively participate in oversight and accountability processes, including through advisory bodies, AI ethics committees and regulatory consultations, to advocate for child-rights-based AI governance.
Forms of abuse
The statement outlined forms of abuse that can occur through or with the support of AI systems, tools and platforms, including physical, sexual and mental violence; gender-based violence; cyberbullying; exposure to harmful content; exploitation; and AI-generated content that propagates hate speech, incites violence or promotes child labour. Other risks include child trafficking, recruitment and use of children, and killing and maiming in armed conflict situations.
Harmful content may include deepfakes and other AI-generated deceptive media, hate speech, graphic violent material, child sexual abuse content, forced child begging, misinformation or disinformation targeting children, and content promoting self-harm, eating disorders, drug use or other harmful substances, gambling, or other algorithmically amplified harmful narratives.
The agencies noted the need for training and capacity building tailored to all stakeholders involved in the design, development, deployment and governance of AI. This includes AI literacy programmes for children, teachers, parents and caregivers, as well as training for policymakers and governments on AI frameworks, data protection methods and child rights impact assessments.
“In all actions or decisions that concern the child and that involve the design, development, deployment or governance of AI in both the public and private sphere, the best interests of every child must be assessed, determined and taken into account by the state as a primary consideration. In situations where rights of the child seemingly compete, States should follow due process to assess and determine what is in the child's best interests,” the statement urges.
