Italy passes new AI regulations covering privacy protection, oversight, and children's access

Italy has positioned itself as a European leader in artificial intelligence regulation by adopting a comprehensive national AI law, reinforcing the EU-wide AI Act with stricter national measures on privacy, human oversight, and protections for minors.

Key Provisions of Italy’s AI Law

  • Strict Privacy Safeguards: The law requires all AI systems, especially those processing health or biometric data, to implement robust data protection mechanisms. Companies must conduct detailed risk assessments and ensure human oversight at every stage: development, training, testing, and deployment. Italy’s data protection authority, the Garante, has already signaled that it will strictly enforce these rules, particularly for systems not classified as medical devices, since those fall outside the existing medical regulatory regime[1].
  • Child Access Restrictions: The law bans or severely restricts minors’ access to certain AI applications, including social chatbots like Replika and generative AI tools such as ChatGPT, unless strict age-verification and parental-consent mechanisms are in place (a minimal sketch of such a gate follows this list). This addresses growing concerns over AI’s impact on children’s privacy and mental health[4].
  • Mandatory Human Oversight: For high-risk AI applications, such as those used in healthcare, employment, or public administration, the law requires continuous human supervision. AI systems cannot make final decisions affecting individuals without meaningful human review, ensuring accountability and reducing the risk of bias or error; the second sketch after this list illustrates the pattern[1].
  • National Hosting Requirement: Public-sector AI systems must be hosted on servers within Italian territory, a move aimed at strengthening data sovereignty and security. This requirement could significantly impact procurement processes and vendor selection for government contracts[3].
  • Transparency and Accountability: Providers of general-purpose AI models must publish detailed documentation on training data, copyright policies, and incident reporting. The law also introduces “AI literacy” obligations for organizations deploying these technologies[3].
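
The statute does not prescribe how age gates must be built. As a minimal illustration only, the Python sketch below models a deny-by-default access check: unverified users are refused, verified minors need recorded parental consent, and verified adults pass. The User type, the may_access_chatbot function, and the 18-year threshold are assumptions made for the example, not terms drawn from the law.

```python
from dataclasses import dataclass

@dataclass
class User:
    verified_age: int | None  # age from an external verification provider; None if unverified
    parental_consent: bool    # recorded consent from a parent or guardian

ADULT_AGE = 18  # assumed threshold for this sketch; the statute may define it differently

def may_access_chatbot(user: User) -> bool:
    """Deny-by-default gate for a conversational AI service."""
    if user.verified_age is None:
        return False                  # no verified age: deny outright
    if user.verified_age < ADULT_AGE:
        return user.parental_consent  # minors need parental consent on file
    return True                       # verified adults pass

print(may_access_chatbot(User(verified_age=15, parental_consent=False)))  # False
print(may_access_chatbot(User(verified_age=15, parental_consent=True)))   # True
print(may_access_chatbot(User(verified_age=None, parental_consent=True))) # False
```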
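
The human-oversight duty can likewise be pictured as an architectural pattern in which the model’s output is only ever advisory. In the hypothetical sketch below, a model proposes a decision but the record is finalized only with a named human reviewer’s choice; the field and function names are invented for illustration and omit the audit and appeal machinery a real deployment would need.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    REJECT = "reject"

@dataclass
class Recommendation:
    subject_id: str      # person the decision affects
    proposed: Decision   # what the model suggests
    rationale: str       # model's stated reason

def finalize(rec: Recommendation, reviewer_decision: Decision, reviewer_id: str) -> dict:
    """Record a final decision: the model only proposes; a named human decides."""
    return {
        "subject": rec.subject_id,
        "model_proposed": rec.proposed.value,
        "final": reviewer_decision.value,                # the human choice prevails
        "reviewed_by": reviewer_id,                      # accountability: who signed off
        "overridden": reviewer_decision != rec.proposed,
    }

# Example: a screening model recommends rejection, but the reviewer approves.
rec = Recommendation("applicant-42", Decision.REJECT, "low screening score")
print(finalize(rec, Decision.APPROVE, reviewer_id="hr-officer-7"))
```

Logging the overridden flag explicitly makes it possible to audit how often reviewers merely rubber-stamp the model, a common test of whether oversight is meaningful rather than nominal.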

Enforcement and Compliance Deadlines

Italy’s new AI statute supplements the EU AI Act, which entered into force in August 2024. Prohibitions on “unacceptable” AI uses and AI-literacy duties are already active, and governance and national-authority obligations apply from August 2025. Most high-risk requirements will take effect in 2026–2027, giving businesses time to adapt[3].

The Italian Data Protection Authority (Garante) has demonstrated it will actively enforce these rules, as seen in its recent fines for inadequate anonymization of personal data in AI-powered surveillance systems[2]. Companies and public bodies are advised to conduct thorough data protection impact assessments (DPIAs), ensure genuine anonymization where claimed, and clearly separate the roles of data protection officers (DPOs) from operational implementation[2].

Impact on Businesses and Public Sector

The law’s stringent requirements, especially around data localization, human oversight, and child protection, have immediate implications for tech companies, healthcare providers, and public administrations operating in Italy. Firms must now map their AI systems against the new risk categories, document oversight mechanisms, and stay abreast of evolving national guidance; a simple inventory like the sketch below is one way to start.
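
One way to begin that mapping exercise is a lightweight internal inventory. The sketch below is illustrative rather than authoritative: the risk tiers loosely mirror the EU AI Act’s categories, the record fields are assumptions, and any real gap analysis would need legal review.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers loosely mirroring the EU AI Act's categories.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    tier: RiskTier
    hosted_in_italy: bool         # relevant to the public-sector hosting requirement
    oversight_mechanism: str      # documented human-review process; empty if none
    dpia_completed: bool = False  # data protection impact assessment done?

def compliance_gaps(record: AISystemRecord) -> list[str]:
    """Flag obvious gaps; this is a triage aid, not a legal assessment."""
    gaps = []
    if record.tier is RiskTier.HIGH and not record.dpia_completed:
        gaps.append("high-risk system missing DPIA")
    if record.tier is RiskTier.HIGH and not record.oversight_mechanism:
        gaps.append("high-risk system missing documented human oversight")
    return gaps

inventory = [
    AISystemRecord("triage-assistant", "hospital patient triage", RiskTier.HIGH,
                   hosted_in_italy=True, oversight_mechanism="clinician sign-off"),
]
for rec in inventory:
    print(rec.name, compliance_gaps(rec) or "no obvious gaps")
```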

Legal professionals are reminded that AI is permissible in Italy, but only within a framework of documented risk controls, compliance with both EU and national rules, and close monitoring of Garante guidance[3].

Looking Ahead

Italy’s approach sets a benchmark for AI regulation in Europe, blending EU-wide standards with national priorities on privacy, security, and the protection of vulnerable groups. As other EU member states refine their own AI laws, Italy’s model—emphasizing strict oversight, transparency, and localized control—may influence broader European policy.
