
Empowering Youth With Child-Aligned AI

Posted on 20 March 2024 by John-Matthew Conely


Guest blog by John-Matthew Conely, written as part of his internship at the Responsible Data for Children initiative. In future work, RD4C will delve deeper into AI and how it relates to the RD4C Principles.

Children are growing up in a world where artificial intelligence (AI) is increasingly part of their daily lives: they interact with AI-powered technologies when they use social media, smart toys, and smart devices, or watch online videos; their schooling is affected by automated decision making in educational assessment; and their personal development is likely influenced by the algorithms that permeate our digital environment.

AI (defined by the OECD as machine-based systems that transform inputs into outputs, such as content or predictions, which influence the physical or virtual environment) can be highly positive for children, creating opportunities to flourish and grow sustainably while supercharging education and medical care. On the other hand, it also has the potential to cause harm. Social media algorithms have increasingly been shown to damage children's mental health, while AI assistants can impair children's emotional and cognitive development. AI-powered Internet of Things (IoT) devices, even toys, can violate children's privacy by harvesting their data without consent. Generative AI has the potential to flood the internet with harmful content, including child-abuse material, further endangering children's safety.

A growing recognition of AI's threats and opportunities has led to greater public-sector commitments to governing AI ethically, but such initiatives are still in their early stages and place minimal focus on children. For AI to truly benefit children, AI systems must be designed, developed, and deployed with the well-being of children explicitly in mind. That is to say, the outputs and applications of AI must be aligned with values and norms that recognize the preferences of children and their caregivers, and that respect children's rights to privacy, safety, and freedom of thought. This process of determining the values an AI system should serve is known as AI alignment.


What is AI Alignment?

The AI alignment problem is concerned with ensuring that AI delivers output in a manner consistent with human goals, values, and preferences.

To better illustrate what is meant here by "alignment", consider designing a hypothetical AI chatbot. In developing this chatbot, you (the designer) would like to make ethical decisions about the kinds of responses the chatbot can provide to its users. For example, you might wish to restrict the chatbot from giving discriminatory or offensive answers. How would you determine which answers are appropriate or inappropriate for users? Would you decide unilaterally, believing that you already know what is best for them? Would you gather input on users' preferences, in order to reflect their needs more accurately? And if your chatbot serves users in a variety of regional and cultural contexts, would you adapt the chatbot's responses to particular contexts?
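To make those design choices concrete, here is a minimal, hypothetical sketch of how they might be encoded. Every name and topic list below is an assumption invented for illustration; real moderation systems rely on learned classifiers and policy models rather than keyword sets:

```python
# Hypothetical sketch: a designer's value choices encoded as configuration.
# Not a real moderation system; names and topics are invented.

# Whoever writes this set decides, unilaterally, what counts as
# "inappropriate" for every user.
BLOCKED_TOPICS = {"slurs", "harassment", "doxxing"}

# Per-region additions illustrate adapting responses to particular
# regional and cultural contexts.
REGIONAL_OVERRIDES = {
    "region-a": {"gambling"},
}

def is_appropriate(response_topics: set, region: str = "") -> bool:
    """Return True if a candidate response touches no blocked topic."""
    blocked = BLOCKED_TOPICS | REGIONAL_OVERRIDES.get(region, set())
    return not (response_topics & blocked)

print(is_appropriate({"homework help"}))         # True
print(is_appropriate({"gambling"}, "region-a"))  # False
```

Every entry in those sets is an alignment decision. The question the rest of this post takes up is who gets to make such decisions, and whether the people affected, including children, were ever consulted.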

Ultimately, every AI system embodies design choices that reflect specific decisions about preferences and values, yet the designers themselves may not be in a position to decide unilaterally which values are most appropriate. To properly align these systems with human preferences, participation is needed from the people and communities that the AI system affects. For this reason, participatory methods need to form the basis of human-aligned AI.


How to Facilitate Child-Aligned AI?

AI systems are data-driven: they rely on data to produce their output, and their usefulness and power depend on the (responsibly handled) data on which they are trained. The Responsible Data for Children initiative was formed out of a desire to serve the rights of children in exactly such data-oriented contexts. Its principles are flexible by design and applicable across the broad variety of contexts in which data is used. To see how our principles and tools apply to AI, consider the sample principle below.

Participatory / People-Centric:

For AI systems to be aligned with and prioritize children's needs and preferences, we need to bolster data literacy among children, so that children themselves can better voice their concerns and recommendations, and to foster participation mechanisms that allow children and young people to engage actively. We should promote measures to educate children, parents, and related stakeholders about AI, the nature of consent, children's rights and protections, and the risks children face. These measures ought to be as inclusive as possible, giving voice to children from communities around the world, including the Global South.

Incorporating these participatory, democratic approaches into the alignment of even the largest-scale AI is not a pipe dream. One private company, Anthropic, is currently developing a process to align large language models by incorporating public input on high-level ethical principles. Notably, this process resulted in a less biased model, with no appreciable degradation in math or language performance and no shift in perceived political stance. If properly leveraged, such approaches could allow all members of society, even the most vulnerable, to have a say in determining how technology will affect their future.
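As a rough, hypothetical illustration of what aggregating public input on high-level principles might look like, consider the sketch below. This is not Anthropic's actual method; the votes, threshold, and function names are invented for the example:

```python
from collections import Counter

# Invented crowd input: (principle, approved?) pairs from participants.
votes = [
    ("Explain answers in language a child can understand", True),
    ("Explain answers in language a child can understand", True),
    ("Never collect personal data from minors", True),
    ("Never collect personal data from minors", True),
    ("Always answer every question, whatever it is", False),
]

def consensus_principles(votes, min_support=2):
    """Keep principles whose approval count meets a support threshold."""
    approvals = Counter(p for p, approved in votes if approved)
    return sorted(p for p, n in approvals.items() if n >= min_support)

# The surviving principles could then seed the high-level set of rules
# used to steer a model's behaviour during fine-tuning.
print(consensus_principles(votes))
```

A real process would, of course, need far more careful sampling, deliberation, and safeguards than a vote count, but the basic idea, turning many individual preferences into a shared set of guiding principles, is the same.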

Such approaches should be harnessed specifically to incorporate the values and goals of children into AI. Various vehicles for engaging youth on emerging policy have already been developed, such as the Youth Solutions Lab's participatory workshops, which gather sentiments, insights, and preferences from youth on pressing policy issues. Crowdsourced input from youth on specific moral preferences for artificial agents also shows promise (see MIT Media Lab's Moral Machine project). These participatory approaches could open a path to creating direct linkages between the high-level principles informing AI model development and the preferences of the children and youth who will eventually interact with those models. In tandem with these methods, boosting data literacy and communicating AI concepts to the public, especially youth, in a relatable, concrete manner will aid effective discussion and participation.


Join the Responsible Data for Children Efforts

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges affecting children around the world. We need thought partners to dedicate more substantial research to the intersection of AI alignment and children's rights, empowerment, and self-determination. If you would like to collaborate, please reach out to [email protected].

Image by Jamie Street (Unsplash), licensed under CC0.
