Children’s Privacy in the Age of Artificial Intelligence
- Irwin, J., Dharamshi, A., & Zon, N. (2021). Children’s Privacy in the Age of Artificial Intelligence. Canadian Standards Association, Toronto, ON.
Artificial intelligence (AI) is playing a growing role in children’s lives, fundamentally reshaping their everyday experiences and places – from their homes, to their schools, to other public services and spaces. While the application of AI has rapidly expanded, the tools to address the challenges AI can pose to children’s privacy have not kept pace.
Instead, children are navigating the age of AI with little consideration for their best interests from developers and policymakers alike. But children are deeply affected by AI; they both directly and indirectly interact with AI-enabled technologies, including those designed for adults. Children are also distinctly affected by AI; they have specific privacy rights, needs, and circumstances that are impacted by this technology.
There is an immediate need for policymakers to address this gap and develop a child-specific approach to privacy in the context of AI. If left unaddressed, this oversight will have profound implications for present and future generations of children.
This report seeks to advance understanding and protections of children’s privacy by focusing on three main areas of risk from AI:
- Data risks: AI requires data to learn and improve, incentivizing the mass collection of data. The sheer magnitude and scope of data collected about children today is unprecedented. Children’s data captured and processed by AI systems may include sensitive information. This data can be shared or sold to third parties and may follow children over the course of their lives.
- Function risks: AI applications often use data in ways that infringe on children’s privacy and autonomy. AI functions like surveillance, profiling, decision-making, and inference are already commonly used in children’s lives, generally in “low-stakes” applications like targeted online ads. However, AI functions are increasingly being deployed in “high-stakes” applications like university admissions, child protective services, and biometric monitoring.
- Oversight risks: AI can produce unfair, incorrect, or discriminatory outcomes for children using their personal information. The complexity of AI can prevent humans from easily understanding or contesting how these algorithmic decisions are made. A lack of formalized governance or common standards for AI means that those who create, deploy, or profit from AI systems are currently subject to minimal transparency and accountability requirements.
Used responsibly, AI technology has remarkable potential to improve the lives of children. However, without effective interventions, the risks to children’s privacy from AI may have profound negative impacts on children’s present and future lives.
To address the challenges AI poses to children’s privacy, this report identifies a number of recommended interventions. Some of these actions are cross-cutting and involve commitments to meaningfully and systematically include children in the development of AI and privacy policies that affect their lives. Others are targeted at specific stages across the lifecycle of AI – from integrating children’s privacy considerations before technologies are deployed, to increasing their capacity to make informed privacy decisions, to ensuring they have mechanisms to pursue redress for any harms.
While these recommendations cannot eliminate all the potential risks to children, they represent important steps in promoting their privacy in the age of AI.
1. Cross-cutting actions
- Consider children as a distinct and vulnerable population
- Involve children in privacy and AI policy development
2. Interventions across the lifecycle of AI
a. Before deployment
- Mandate and operationalize children’s privacy by design
- Require children’s privacy impact assessments
b. During adoption
- Develop educational resources for children, teachers, and parents
- Require child-friendly notices and terms of service
- Encourage certification and labelling
- Provide dynamic and granular consent options
c. After use
- Mandate organizational oversight mechanisms
- Fund independent oversight institutions
- Introduce strict penalties for privacy violations
Summary for Policymakers
© 2021 Canadian Standards Association. All Rights Reserved.
- Jasmine Irwin, Springboard Policy
- Alannah Dharamshi, Springboard Policy
- Noah Zon, Springboard Policy
Project Advisory Panel
- Brent Barron, Canadian Institute for Advanced Research
- Cara Yarzab, Prodigy Game
- Carol Todd, Amanda Todd Legacy Society
- Fardous Hosseiny, Centre of Excellence on PTSD
- Gareth Jones, Canada Safety Council
- Matthew Johnson, MediaSmarts
- Nimmi Kanji, TELUS Social Purpose Programs
- Uyen Ta, Mental Health Commission of Canada
- Valerie Steeves, University of Ottawa
- Wendy Craig, PREVNet
- Hélène Vaillancourt, CSA Group
- Nicki Islic, CSA Group (Project Manager)
This CSA Group research report was prepared with financial support from the Office of the Privacy Commissioner of Canada’s (OPC) Contributions Program.
This work has been produced by Springboard Policy and is owned by Canadian Standards Association. It is designed to provide general information regarding the subject matter covered. The views expressed in this publication are those of the authors and research participants. Springboard Policy and Canadian Standards Association are not responsible for any loss or damage which might occur as a result of your reliance on or use of the content in this publication.