Children’s Rights in the Digital Age: Navigating AI, Privacy, and Safety

Dr. Desara Dushi


In today’s hyper-connected world, children are increasingly immersed in digital environments that weren’t designed with their specific needs in mind. As generative AI systems like ChatGPT become more prevalent, we face unprecedented challenges in balancing the educational benefits of technology with the imperative to protect children’s rights, privacy, and well-being. This post explores the complex landscape of children’s digital interactions, the legal frameworks attempting to protect them, and the path forward for responsible technology development.

Today’s Digital Reality for Children

Children and teenagers have become prolific internet users, even though most online environments were not built with their needs in mind. A 2024 UK survey found that 77.1% of teenagers (ages 13-18) had already used generative AI systems, an adoption rate roughly twice that of adults. These figures highlight how quickly young people embrace new technologies.

Age restrictions, such as those introduced under Australia’s social media rules, are frequently circumvented by younger users. Meanwhile, children regularly engage with AI embedded in various applications, toys, and learning software, often without full awareness of the data collection practices involved.

The Dual Nature of AI Interaction

Benefits for Children’s Learning

Generative AI systems such as ChatGPT offer several potential benefits for children:

  • Enhanced Learning: These systems can foster curiosity, experimentation, and creative problem-solving
  • Skill Development: AI tools can help enhance vocabulary and writing skills
  • Accessibility: They support students with disabilities through personalized assistance
  • Educational Support: AI can function as a learning tutor for various subjects
  • Digital Literacy: Interaction with AI helps children develop critical thinking skills and AI literacy

Potential Risks and Concerns

However, these same technologies present significant risks:

  • Accuracy Issues: AI can provide incorrect information with potentially serious consequences
  • Synthetic Reality: Children are vulnerable to AI-generated content, including deepfakes and harmful ads
  • Exploitation Enablement: AI technologies can be misused for cyberbullying, online grooming, sexual harassment, and extortion
  • Child Sexual Abuse Material: According to Europol, AI-generated child sexual abuse material is projected to be the fastest-growing threat for 2025
  • Cognitive and Mental Health Impacts: There’s a risk of underdeveloped reasoning skills and potential negative impacts on writing, critical thinking, and creativity
  • Psychological Manipulation: MIT research shows children often attribute real feelings to AI agents, making them susceptible to manipulation

Children’s Rights in the Digital Environment

Protecting children in digital spaces requires a comprehensive rights-based approach that goes beyond basic data protection. Key principles include:

  • Best Interests Principle: This prioritizes children’s wellbeing in all design decisions and balances their full range of rights
  • Equal Importance of Rights: We must ensure protection of personal data, protection from exploitation, access to information, and respect for children’s views

Children’s digital rights encompass:

  • Right to Development: Guaranteed under Article 6 of the UNCRC
  • Freedom of Expression: Essential for development but must align with children’s best interests
  • Right to Access Information: In age-appropriate formats
  • Right to Protection from Harm: Including from sexual abuse and exploitation
  • Dual Role Recognition: Children are both consumers and creators of information
  • Algorithmic Impact Awareness: Protection from narrowed learning paths and predetermined futures

The Legal Framework

In the EU, three major regulatory frameworks currently attempt to protect children in digital environments:

The General Data Protection Regulation (GDPR)

The GDPR places significant emphasis on protecting children’s data, recognizing their inherent vulnerability in the digital ecosystem. Key provisions include:

  • Recognition that AI systems, including generative AI, are typically trained on broad datasets that include personal data
  • Prohibition of using web-scraped datasets for AI training without a specific legal basis
  • Requirements for clear age verification procedures
  • Special protection for children, particularly regarding the use of personal data for marketing or profiling
  • Age-based consent regime requiring parental authorization for information society services offered to children under 16, a threshold individual member states may lower to no less than 13 (see the sketch at the end of this subsection)
  • Requirements for transparent, intelligible, and accessible information presented in clear language
  • Prohibition of misleading interfaces and dark patterns
  • Right to erasure of personal data collected during childhood
  • Privacy-protective defaults rather than relying on user action

The 2021 Resolution on Children’s Digital Rights from Data Protection Authorities emphasizes that service providers should integrate children’s best interests into service design, implement age-appropriate privacy by default, use proportionate verification mechanisms, and refrain from profiling children for commercial purposes.
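
To make the age-of-consent rule concrete, the following is a minimal sketch of how a service might gate sign-ups under Article 8 GDPR. The per-country thresholds shown are illustrative examples of how member states have used the derogation, and all function names are hypothetical rather than part of any official API.

```python
# Hypothetical sketch of GDPR Article 8 consent gating.
# The default age of consent is 16; member states may lower it
# to no less than 13. Thresholds below are illustrative examples
# and should be verified against current national law.

DEFAULT_CONSENT_AGE = 16

MEMBER_STATE_CONSENT_AGE = {
    "DE": 16,  # Germany keeps the default
    "FR": 15,
    "ES": 14,
    "BE": 13,
}

def requires_parental_consent(age: int, country_code: str) -> bool:
    """Return True if an information society service needs parental
    authorization before processing this child's personal data."""
    threshold = MEMBER_STATE_CONSENT_AGE.get(country_code, DEFAULT_CONSENT_AGE)
    return age < threshold

def start_signup(age: int, country_code: str) -> str:
    if requires_parental_consent(age, country_code):
        # GDPR also requires "reasonable efforts" to verify that consent
        # is given by the holder of parental responsibility -- a hard
        # problem deliberately left out of this sketch.
        return "request_parental_authorization"
    return "proceed_with_own_consent"

print(start_signup(14, "FR"))  # -> request_parental_authorization
print(start_signup(14, "ES"))  # -> proceed_with_own_consent
```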

The AI Act (AIA)

The AI Act establishes:

  • Strict oversight and safety measures at both developer and deployer levels for generative AI systems that could create harmful content
  • Requirements to watermark deepfakes and other AI-generated materials
  • Disclosure obligations regarding artificial origin of deepfake outputs
  • Transparency requirements about content used for training general-purpose AI models
  • Requirement to inform children when they are interacting with AI
  • Prohibition of AI systems that exploit vulnerabilities of individuals or groups due to their age (Article 5(1)(b))

While labeling and watermarking requirements provide tools for detection, they don’t inherently prevent misuse of AI systems to create explicit deepfake content involving minors.
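
As a rough illustration of what such labeling might look like in practice, the sketch below attaches a machine-readable provenance record to a generated file. The field names are invented for this example; the AI Act mandates marking and disclosure but does not prescribe a schema (industry standards such as C2PA aim to fill that role).

```python
import hashlib
import json
from datetime import datetime, timezone

def label_generated_content(content: bytes, model_id: str) -> dict:
    """Attach a hypothetical provenance record to AI-generated media.

    The record discloses artificial origin in machine-readable form;
    the schema here is illustrative, not one prescribed by the AI Act.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,  # disclosure of artificial origin
        "generator": model_id,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

record = label_generated_content(b"<image bytes>", "example-image-model-v1")
print(json.dumps(record, indent=2))
```

As the text above notes, a label of this kind supports detection after the fact; it does nothing by itself to stop a system being misused to generate the content in the first place.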

The Digital Services Act (DSA)

The DSA regulates online intermediaries and platforms including marketplaces, social networks, content-sharing platforms, app stores, and travel/accommodation platforms with the goal of preventing illegal and harmful activities online and stopping the spread of disinformation.

Under Article 28 (Online Protection of Minors), the DSA requires:

  1. Providers of platforms accessible to minors must implement appropriate and proportionate measures to ensure a high level of privacy, safety, and security
  2. Prohibition against presenting advertisements based on profiling using personal data when the service recipient is known to be a minor (see the sketch after this list)
  3. No obligation for providers to process additional personal data solely to assess whether a user is a minor
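
A minimal sketch of how an ad-serving path might honour the profiling prohibition is shown below. The determination of who counts as a “known” minor relies only on signals the platform already holds, consistent with point 3 above; all class and function names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    known_minor: bool  # derived only from data the platform already holds

@dataclass
class Ad:
    ad_id: str
    contextual_only: bool  # targeted by page context, not by user profile

def select_ads(user: User, inventory: list[Ad]) -> list[Ad]:
    """Hypothetical ad selection honouring DSA Article 28(2):
    no profiling-based ads when the recipient is known to be a minor."""
    if user.known_minor:
        # Fall back to contextual, non-personalized ads only.
        return [ad for ad in inventory if ad.contextual_only]
    # Adults may receive profile-ranked ads (ranking omitted here).
    return inventory

ads = [Ad("a1", contextual_only=True), Ad("a2", contextual_only=False)]
print([ad.ad_id for ad in select_ads(User("u1", known_minor=True), ads)])
# -> ['a1']
```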

For Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), additional requirements include:

  • Annual risk assessments regarding potential online risks for children and young people
  • Implementation of mitigation measures including parental controls, age verification, and tools for reporting abuse
  • Non-compliance penalties of up to 6% of global annual turnover
  • Default special privacy and security settings (one plausible shape is sketched after this list)
  • Child-friendly Terms & Conditions written to be understandable by minors
  • Prohibition of dark patterns and deceptive design practices
  • Rapid identification, reporting and removal procedures for illegal content, with priority for harmful content affecting minors
  • Development of an EU Code of Conduct for age-appropriate design
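
To illustrate what “default special privacy and security settings” could look like, the sketch below applies minor-protective defaults at account creation. The specific settings are assumptions chosen for illustration; the DSA requires appropriate and proportionate measures but does not enumerate a list like this.

```python
# Hypothetical minor-protective defaults, applied at account creation.
# The exact settings are illustrative assumptions, not a statutory list.

MINOR_DEFAULTS = {
    "profile_visibility": "private",
    "direct_messages": "contacts_only",
    "personalized_ads": False,       # consistent with DSA Article 28(2)
    "geolocation_sharing": False,
    "recommendations_profiling": False,
    "parental_controls_offered": True,
}

ADULT_DEFAULTS = {
    "profile_visibility": "public",
    "direct_messages": "anyone",
    "personalized_ads": True,
    "geolocation_sharing": False,
    "recommendations_profiling": True,
    "parental_controls_offered": False,
}

def default_settings(is_minor: bool) -> dict:
    """Privacy-protective defaults rather than relying on user action."""
    return dict(MINOR_DEFAULTS if is_minor else ADULT_DEFAULTS)

print(default_settings(is_minor=True)["personalized_ads"])  # -> False
```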

Balancing Privacy and Safety

The three regulatory frameworks outlined above each approach children’s digital rights with a different emphasis, creating a complementary but sometimes overlapping legal landscape:

Privacy vs. Safety Balance

Primarily focused on data privacy and protection, the GDPR approaches child safety through the lens of data minimization and consent. Its emphasis on parental consent for data processing (for children under the applicable 13-16 age threshold) prioritizes privacy but may create barriers to accessing beneficial online services. The framework’s strength lies in preventing misuse of children’s data, while it offers less direct protection against exposure to harmful content.

The AI Act centers on preventing harm from AI systems themselves, focusing on transparency and preventing exploitation. Unlike GDPR’s privacy-first approach, the AI Act targets safety through prohibitions on systems that exploit children’s vulnerabilities and requirements for content labeling. This provides stronger protections against manipulation while addressing data privacy only tangentially.

The DSA, on the other hand, takes the most comprehensive approach to safety through platform responsibility, emphasizing risk mitigation and content moderation. The DSA’s risk assessment requirements and measures against harmful content provide direct protection mechanisms beyond privacy considerations, addressing the lived experience of children online rather than just their data.

Strengths and Gaps

Framework | Key Strengths | Notable Gaps
GDPR | Robust protection against data exploitation | Limited tools for addressing harmful content exposure (not necessarily a gap given the scope of the regulation)
AI Act | Protection against manipulative AI systems | Limited mechanisms for individual remedy
DSA | Practical safety measures and platform accountability | Potential overreliance on platform self-assessment

Table 1: Key strengths and gaps of the analysed legal frameworks.

Together, these legal frameworks create a complementary regulatory approach, each addressing different aspects of digital governance. The gaps across these frameworks suggest that child safety online requires additional targeted protections beyond what these general regulatory approaches currently provide. An integrated, child-specific regulatory approach might better address the unique vulnerabilities children face in digital environments.

Regulatory Interaction and Overlap

These frameworks don’t operate in isolation but form an interconnected regulatory ecosystem:

  1. Cumulative Compliance Requirements: Companies operating in the EU must simultaneously comply with all three regulations, creating a layered protection system; a combined checklist is sketched after this list. For example, an AI-powered social platform must:
    • Follow GDPR’s age verification and consent requirements
    • Implement AI Act transparency measures
    • Conduct DSA risk assessments and provide safety tools
  2. Definitional Challenges: The frameworks sometimes define key terms differently. “Minor” is consistently anyone under 18, but “harmful content” has varying definitions across the DSA and AI Act, creating potential compliance confusion.
  3. Enforcement Complementarity:
    • GDPR enforcement through Data Protection Authorities
    • AI Act through national supervisory authorities
    • DSA through Digital Services Coordinators

    This creates multiple regulatory oversight pathways but risks inconsistent enforcement.

  4. Jurisdictional Extension: All three frameworks have extraterritorial application, affecting services provided to EU residents regardless of the provider’s location, creating a global regulatory impact.
  5. Implementation Timeline Differences: The staggered implementation of these regulations (GDPR already active, DSA partially implemented, AI Act still being phased in) creates a dynamic regulatory environment requiring continuous adaptation.
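
One way to picture this layered compliance burden is as a single checklist drawn from all three frameworks. The sketch below paraphrases a handful of the obligations discussed in this post; it is an illustration, not an exhaustive or authoritative compliance tool.

```python
# Hypothetical compliance checklist for an AI-powered social platform
# operating in the EU. Obligations are paraphrased from the discussion
# above; this is an illustration, not legal advice.

LAYERED_OBLIGATIONS = {
    "GDPR": [
        "age-based consent gating with parental authorization",
        "privacy-protective defaults",
        "right-to-erasure workflow for data collected in childhood",
    ],
    "AI Act": [
        "disclose that users are interacting with an AI system",
        "machine-readable marking of AI-generated content",
        "no systems exploiting age-related vulnerabilities",
    ],
    "DSA": [
        "annual risk assessment covering minors",
        "no profiling-based ads for known minors",
        "reporting and removal pipeline for illegal content",
    ],
}

def compliance_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """Return the obligations not yet covered, grouped by framework."""
    return {
        framework: [ob for ob in obligations if ob not in implemented]
        for framework, obligations in LAYERED_OBLIGATIONS.items()
    }

done = {"privacy-protective defaults", "annual risk assessment covering minors"}
print(compliance_gaps(done)["GDPR"])
```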

Practical Implications and Challenges

The interaction of these frameworks creates both strengths and challenges:

  • Comprehensive Protection: Together, they address the full spectrum of children’s digital rights from data collection to content exposure
  • Regulatory Burden: Compliance costs may disadvantage smaller providers, potentially limiting innovative child-friendly services
  • Age Verification Paradox: Stricter verification to protect children may require more invasive data collection, creating privacy-safety tensions (a data-minimizing alternative is sketched after this list)
  • Design Impact: Privacy-by-default requirements combined with safety features may fundamentally alter user experience design
  • Global Standards Effect: The combined regulatory weight is establishing de facto global standards for children’s digital protection
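
One way to ease the age verification paradox is to verify an age claim without collecting identity data: the platform receives only a signed yes/no attestation (for example “over 16”) from a trusted verifier and stores nothing else. The sketch below is a deliberately simplified illustration of that pattern using an HMAC-signed claim; real deployments would rely on proper attribute-attestation schemes, such as those being explored for the EU Digital Identity Wallet.

```python
import hashlib
import hmac

# Simplified illustration of a data-minimizing age check: a trusted
# verifier issues a signed boolean claim ("over_16") and the platform
# checks the signature without ever seeing a birth date or identity.
# Real systems would use attribute attestations, not a shared secret.

VERIFIER_KEY = b"demo-shared-secret"  # stand-in for real key material

def issue_attestation(claim: str) -> tuple[str, str]:
    """Run by the trusted age verifier; returns (claim, signature)."""
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim, sig

def platform_accepts(claim: str, sig: str) -> bool:
    """Run by the platform: verify the claim, store no personal data."""
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and claim == "over_16"

claim, sig = issue_attestation("over_16")
print(platform_accepts(claim, sig))  # -> True
```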

The complex interplay between these frameworks reflects the multifaceted challenge of protecting children online. While they create the world’s most comprehensive protection system for children’s digital rights, their effectiveness will ultimately depend on consistent interpretation, coordinated enforcement, and technological implementation that balances children’s right to protection with their rights to participation and development in digital spaces.

Looking Forward: Balancing Innovation and Protection

As we navigate this evolving landscape, several key considerations emerge:

  1. Child-Centered Design: Technology must be developed with children’s specific needs and vulnerabilities in mind, implementing the “best interests” principle from the earliest stages of development
  2. Rights-Based Approach: A comprehensive framework balancing protection, participation, and provision rights is essential
  3. Regulatory Harmonization: Consistency across different legal frameworks will provide clearer guidance for developers and platforms
  4. Education and Digital Literacy: Children, parents, and educators need tools to navigate these complex systems safely
  5. Industry Responsibility: Technology providers must prioritize children’s wellbeing over commercial interests

The relationship between children and digital technologies represents one of the most significant challenges of our time. By centering children’s rights, implementing robust protections, and fostering responsible innovation, we can create digital environments where young people can learn, create, and grow safely.

The future of digital childhood depends on our collective commitment to protecting the most vulnerable users while empowering them to become confident digital citizens in an increasingly AI-mediated world.