Guest Blog

AI Shapes Childhood. It’s Time Children Shape AI Too

Children deserve AI that listens before it learns, protects before it predicts, and keeps their data local, minimal, and erasable by default.

Aug 28th, 2025
Shivansh Shalabh

During the lockdown, I watched my classmates sit in online classes with cameras off. Teachers would call on them for attendance or questions and silence followed, as if no one was there. That silence made me wonder: What if an AI-powered platform could actually verify whether students were present, even when their cameras were off?

That question led me to build Attentive, an AI-powered tool that regularly takes snapshots of students and runs facial recognition to automate school attendance.

But very quickly, I ran into a bigger question: Where should the data go?

AI isn’t magic. It runs on data. And when that data involves children’s biometrics, the stakes couldn’t be higher. Around the same time, I read about a massive data breach at a big tech company that exposed millions of customer records. I couldn’t stop thinking: What if those records weren’t just logins or credit cards? What if they contained even more sensitive data?

That thought shaped my entire design. I decided Attentive would process everything locally, on the student’s own device, without ever transmitting a single frame of video to the cloud. No servers storing faces. No networks moving sensitive data. Just local AI, built with privacy in mind.
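Attentive's actual code isn't reproduced here, but the shape of that design can be sketched in a few lines of Python (all names are hypothetical): the frame stays in memory, recognition runs on-device, and only a minimal present/absent record is ever written, locally.

```python
import datetime
import json

def recognize_locally(frame_bytes: bytes) -> bool:
    """Stand-in for an on-device face-recognition model.

    In a real system this would run a small local model; here it is a
    placeholder that 'recognizes' any non-empty frame.
    """
    return len(frame_bytes) > 0

def mark_attendance(frame_bytes: bytes, log_path: str) -> dict:
    # Recognition happens entirely in memory: the frame never leaves
    # this function, and nothing is sent over a network.
    present = recognize_locally(frame_bytes)

    # Keep only the minimal record -- a flag and a timestamp.
    record = {
        "present": present,
        "checked_at": datetime.datetime.now().isoformat(timespec="seconds"),
    }

    # Append the minimal record to a local file; the raw frame is
    # simply dropped when the function returns.
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

The point of the sketch is what it *doesn't* do: no upload call, no server, no stored image, just the smallest record the feature actually needs.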

And that’s when it clicked for me: The real issue isn’t just what AI can do, but what data it consumes and how responsibly we handle it. A clever algorithm with careless data practices is a danger, not an innovation.

As a 20-year-old developer, I love pushing the boundaries of what AI can build. But when it comes to children, speed and accuracy aren’t enough. The real measure is empathy, and in a world rapidly datafying childhood, the real responsibility starts with data.

When AI Meets Childhood, Data Decides the Story

Children today don’t just use tech—they live in it. From voice assistants in the home to gamified learning platforms at school, data is constantly collected from and about them. A 2021 UNICEF report found that children’s data is often gathered long before they can meaningfully understand what that even means.

The promise of this data is huge. It can personalize education, make healthcare more inclusive, inform better public services, and even help us respond to climate change. But the risks are huge, too. It’s not just about leaks: many AI systems simply don’t perform equally well for everyone, and that gap in performance can reinforce disadvantage instead of reducing it.

When I built Attentive, my AI-powered app to automate attendance using facial recognition, I hit that reality fast. The model I used, tiny by necessity, simply didn’t perform equally for all students. In dim light, students with darker skin tones were frequently misrecognized. Every home presented a different lighting puzzle. Even the “smartest” model couldn’t keep up. These weren’t one-off glitches. They pointed to something deeper: AI doesn’t benefit everyone equally, and even its failures are uneven.

Research backs this up. A landmark 2018 study by Joy Buolamwini of the MIT Media Lab and Timnit Gebru found that three widely used commercial facial-analysis systems, in this instance used for gender classification, had near-perfect accuracy (error rates under 1%) for men with lighter skin tones. But for darker-skinned women, the error rates jumped to 20–34%, and for the darkest-skinned women, up to 46%. In other words, in some cases, the AI’s guess was no better than random chance.

The problem wasn’t the algorithm alone. It was the data it learned from. According to the same study, Labeled Faces in the Wild (LFW), a dataset consisting of celebrity images, is the most commonly used benchmark for face recognition, but these images were estimated to be more than 77% male and over 83% white. If our training data fail to represent racial groups, the systems we build on top of them will likely be skewed, too. Even worse, if our “gold standards” fail to represent racial groups, we are unable to detect skewed AI systems during testing and validation. And this is not the only example. A 2025 study found that common face-detection methods failed only 0.28% of the time for the lightest-skinned participants, but up to 24.34% for the darkest-skinned individuals—an almost hundred-fold gap in accuracy.
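The kind of disaggregated evaluation that research performed can be sketched in a few lines: instead of reporting one aggregate accuracy number, compute the error rate separately for every subgroup. The numbers below are invented purely for illustration.

```python
from collections import defaultdict

def error_rates_by_group(results):
    """Compute the error rate separately for each subgroup.

    `results` is a list of (group, correct) pairs, where `group` is any
    subgroup label (e.g. skin tone x gender) and `correct` says whether
    the system got that example right.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in results:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Invented illustration: an aggregate accuracy of ~84.5% can hide a
# thirty-fold gap between subgroups (1% vs. 30% error).
results = (
    [("lighter-skinned men", True)] * 99
    + [("lighter-skinned men", False)] * 1
    + [("darker-skinned women", True)] * 70
    + [("darker-skinned women", False)] * 30
)
rates = error_rates_by_group(results)
```

A single headline metric averages these groups together; breaking the numbers out is what makes the skew visible at all.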

This is why we can’t treat bias in AI as a minor bug. It’s structural. And when the subjects are children, whose identities, opportunities, and futures are increasingly mediated by data, the stakes are far too high to ignore.

One attendance app misreading a student’s presence or attention may seem small, but multiplied across thousands of apps and millions of children, the consequences could be enormous. The cumulative effect of these misjudgments, and of data pipelines that privilege lighter skin, well-lit homes, and tidy datasets, is a system that boxes children in before they even grow.

From “Cool Features” to Critical Reflection

When I first started building tech, I saw speed and automation as wins. Faster, smarter, cleaner—right?

But working on tools like an AI-based attendance tracker opened my eyes to something bigger. Parents were concerned their children were being “watched” instead of “supported.” Educators worried about how biases might seep into prediction models. Children felt their every move was being tracked.

What I learned: Technology isn’t neutral. The impact of data doesn’t start when it’s analyzed. It starts the moment a developer decides what data to collect and how to frame it.

To explore this firsthand, I ran a small experiment on my university campus: I asked my fellow students, “How does AI make you feel?” The responses were mixed: curious, hopeful, anxious, sometimes overwhelmed. One student put it perfectly: They loved how ChatGPT helped them brainstorm ideas, but worried that the AI might be shaping their thinking in ways they didn’t fully realize. It highlighted that AI isn’t just a tool. It’s an active participant in how we process information, make decisions, and shape our perceptions and ideas.

These feelings don’t stop on campus. They extend to a larger scale: When policies are made without our involvement, we feel our voices go unheard. Many young people feel like “digital guinea pigs”—tested on but rarely consulted. 

Yet AI is already reshaping the lives of youth in profound ways. From chatbots that guide our homework, to recommendation systems that shape what we see and believe, to tools in classrooms that track attention or flag behavior, these technologies are not neutral. They influence how we learn, how we’re judged, and even how opportunities open or close for us. And still, we have little say in deciding the rules, values, or safeguards built into them.

The stakes are high. AI systems, if left unchecked, can make structural misjudgments on a massive scale. Every dataset, every metric, every design choice has consequences, multiplying across classrooms, schools, and millions of children’s experiences. AI may be smart, but it is not inherently fair. We know AI is getting smarter. Are our governance models getting smarter with it?

How We Can (and Must) Design Better Systems, Together

I was fortunate to get the chance to participate: Generation Unlimited, a multi-stakeholder partnership platform anchored in UNICEF to support youth livelihoods, has given me that space. Since 2022, I’ve joined events ranging from the global launch of a report on preparing young people for work in an AI-powered world, to a youth–executive dialogue on how Gen Z will shape the future workplace. Each time, I was struck by the diversity of young people in the room and the distinct perspectives they brought. Being part of these conversations has shown me the power of participatory governance. When young people are included, not as tokens, but as co-creators, we can push for tech that respects our evolving capacities and unique contexts. Because children aren’t a monolith. What works for a six-year-old using a voice assistant is different from what a 16-year-old needs from their edtech dashboard.

That’s why we need a collective shift in how we think about data and young people, not just as users of technology but as stakeholders in how that technology is built and governed. Designing systems that are truly inclusive and empowering means involving youth voices from the start and holding institutions accountable for their role in shaping digital experiences. If we want data to work for us, not against us, these are some of the first steps we need to take:

  • Protect children’s data with the highest safeguards. Data leaks are not uncommon. Governments must require systems that minimize data collection, process locally when possible, and delete sensitive records by default.

  • Treat bias as a structural risk, not a glitch. Tech companies producing products for children and young people should be required to test and report how their systems perform across different groups—by age, gender, skin tone, and environment. A model that works in a well-lit classroom in a developed country but fails in a rural school elsewhere is not ready for deployment.

  • Shift from surveillance to support. Educators and schools should demand AI tools that are designed to help children learn, not to constantly monitor them. Children deserve dashboards that encourage growth and experimentation, not systems that box them in or penalize them for differences.

  • Put young people at the table. Donors, investors, and policymakers should fund and support platforms that are co-designed with youth. Not as tokens, but as partners. Because no one understands the trade-offs of growing up digital better than the generation living it right now. Effective AI governance must consider the full spectrum of youth: children, adolescents, and young adults. Systems must be designed to protect and empower them at every stage of development.
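The “delete sensitive records by default” idea from the first point above can be made concrete with an expiry rule: every record carries a time-to-live when it is written, and anything past its expiry is purged automatically rather than kept until someone remembers to clean up. A minimal sketch, with all names hypothetical:

```python
import time

DEFAULT_TTL_SECONDS = 30 * 24 * 60 * 60  # keep records at most 30 days

def write_record(store, key, value, ttl=DEFAULT_TTL_SECONDS, now=None):
    """Store a value together with an expiry time, so deletion is the
    default behavior rather than an afterthought."""
    now = time.time() if now is None else now
    store[key] = {"value": value, "expires_at": now + ttl}

def purge_expired(store, now=None):
    """Drop every record past its expiry; returns how many were removed."""
    now = time.time() if now is None else now
    expired = [k for k, rec in store.items() if rec["expires_at"] <= now]
    for k in expired:
        del store[k]
    return len(expired)
```

The design choice that matters is where the decision sits: retention is bounded at write time, so forgetting requires no one’s permission and no one’s memory.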

If you’re building, funding, or regulating technology that affects children, there are a few lessons I’ve taken from years of developing AI tools, and from learning to slow down:

  • Collect only what you truly need. More data doesn’t automatically mean better outcomes.

  • Design for growth, not labels. A “low engagement” score at age nine shouldn’t follow a child into high school.

  • Let children, and their caregivers, have a say. Not just in the interface, but in policies and governance.

  • Don’t just consult youth—share the power. From shaping product roadmaps to sitting on governance boards, young people need real seats at the table, not symbolic invitations.

  • Turn principles into practice. Use resources like the RD4C (Responsible Data for Children) tools, developed by UNICEF and NYU’s GovLab, to design child-centered systems. Whether it’s a classroom chatbot or a national health data platform, this framework helps bring responsibility to life.

We need to ensure children grow up not just subject to data systems, but able to act with and through them. That means teaching digital literacy alongside algebra, listening when a system feels “creepy” or “unfair,” and embedding youth voices in every stage of technology design and data governance.

Childhood is meant to be a time to explore, experiment, and make mistakes. But in a world where every action can be tracked, tagged, and tokenized, that freedom is at risk. To the adults reading this: Don’t underestimate what young people notice. We see the systems. We feel the trade-offs. And we care deeply about how technology shapes our minds, our relationships, and our futures.

Technology should amplify a child’s potential, not define it. Let’s build data systems that listen before they learn, protect before they predict, and center children not as passive subjects, but as active shapers of their digital lives.


About the Author

Shivansh Shalabh is a member of Generation Unlimited's Youth Advisory Board.