Citation
  • Irwin, J., Dharamshi, A., & Zon, N. (2021). Children’s Privacy in the Age of Artificial Intelligence. Canadian Standards Association, Toronto, ON.

Executive Summary

Artificial intelligence (AI) is playing a growing role in children’s lives, fundamentally reshaping their everyday experiences and places – from their homes, to their schools, to other public services and spaces. While the application of AI has rapidly expanded, the tools to address the challenges AI can pose to children’s privacy have not kept pace.

Instead, children are navigating the age of AI with little consideration of their best interests from developers and policymakers alike. But children are deeply affected by AI: they interact, both directly and indirectly, with AI-enabled technologies, including those designed for adults. Children are also distinctly affected by AI: they have specific privacy rights, needs, and circumstances that are impacted by this technology.

There is an immediate need for policymakers to address this gap and develop a child-specific approach to privacy in the context of AI. If left unaddressed, this oversight will have profound implications for present and future generations of children.

This report seeks to advance the understanding and protection of children’s privacy by focusing on three main areas of risk from AI:

  • Data risks: AI requires data to learn and improve, incentivizing the mass collection of data. The sheer magnitude and scope of the data collected about children today are unprecedented. Children’s data captured and processed by AI systems may include sensitive information. This data can be shared or sold to third parties and may follow children over the course of their lives.
  • Function risks: AI applications often use data in ways that infringe on children’s privacy and autonomy. AI functions like surveillance, profiling, decision-making, and inference are already commonly used in children’s lives, generally in “low-stakes” applications like targeted online ads. However, AI functions are increasingly being deployed in “high-stakes” applications like university admissions, child protective services, and biometric monitoring.
  • Oversight risks: AI can produce unfair, incorrect, or discriminatory outcomes for children using their personal information. The complexity of AI can prevent humans from easily understanding or contesting how these algorithmic decisions are made. A lack of formalized governance or common standards for AI means that those who create, deploy, or profit from AI systems are currently subject to minimal transparency and accountability requirements.

Used responsibly, AI technology has remarkable potential to improve the lives of children. However, without effective interventions, the risks to children’s privacy from AI may have profound negative impacts on children’s present and future lives.

To address the challenges AI poses to children’s privacy, this report identifies a number of recommended interventions. Some of these actions are cross-cutting and involve commitments to meaningfully and systematically include children in the development of AI and privacy policies that affect their lives. Others target specific stages across the lifecycle of AI – from integrating children’s privacy considerations before technologies are deployed, to increasing children’s capacity to make informed privacy decisions, to ensuring they have mechanisms to pursue redress for any harms.

While these recommendations cannot eliminate all the potential risks to children, they represent important steps in promoting their privacy in the age of AI.

Recommendations

1. Cross-cutting actions

  • Consider children as a distinct and vulnerable population
  • Involve children in privacy and AI policy development

2. Interventions across the lifecycle of AI

a. Before deployment

  • Mandate and operationalize children’s privacy by design
  • Require children’s privacy impact assessments

b. During adoption

  • Develop educational resources for children, teachers, and parents
  • Require child-friendly notices and terms of service
  • Encourage certification and labelling
  • Provide dynamic and granular consent options

c. After use

  • Mandate organizational oversight mechanisms
  • Fund independent oversight institutions
  • Introduce strict penalties for privacy violations

Summary for Policymakers

Download the Summary for Policymakers

Contact Standards Research


Are you interested in learning more about our research? Email us at [email protected].

Join the Research Community


Join the CSA Community today to stay informed about research that is critical to standards development.

When you join the CSA Community, you’ll gain access to the CSA Group Research Space. You can learn about current studies, get the latest information on upcoming programs, ask questions, and get involved in future research.