The RD4C Principles
Principles to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence.
Read about our principles at the Principles Page.
From our blog
New developments from RD4C.
Guest Blog
Empowering Youth With Child-Aligned AIGuest blog by John-Matthew Conely (as part of his internship at the Responsible Data for Children Initiative); RD4C will delve deeper into AI and how it relates to the RD4C Principles. Children are growing up in a world where artificial intelligence (AI) is increasingly a part of their daily life: they interact with AI-powered technologies when they use social media, smart toys, smart devices, or watch online videos; their lives are affected by automated decision making for educational assessment; their personal development is likely influenced by the algorithms that permeate our digital environment. AI - defined by the OECD as machine-based systems that transform inputs into outputs, such as content or predictions, that influence the physical or virtual environment - can be highly positive for children, creating opportunities to sustainably flourish and grow while supercharging education and medical care. On the other hand, it also has the potential to cause harm to children. Social media algorithms have increasingly been shown to negatively affect children’s mental health, while AI assistants can impair children’s emotional and cognitive development. AI-powered IoT (Internet of Things) devices, even toys, can violate children’s privacy by harvesting their data without their consent. Generative AI has the potential to flood the internet with harmful content including child-abuse material, further endangering children’s safety. A growing recognition of AI threats and opportunities has led to greater public sector commitments to governing AI ethically, but such initiatives are still in early stages and place minimal focus on children. For AI to truly be of benefit to children, AI systems must be designed, developed, and deployed with the well-being of children explicitly in mind. 
That is to say, the outputs and applications of AI must be aligned with values and norms that recognize the preferences of children and their caregivers, as well as children's rights to privacy, safety, and freedom of thought. This process of determining AI values is also known as AI alignment.

What is AI Alignment?

The AI alignment problem is concerned with ensuring that AI delivers output in a manner consistent with human goals, values, and preferences. To illustrate what is meant here by "alignment," consider designing a hypothetical AI chatbot. In developing this chatbot, you (the designer) would like to make ethical decisions about the kinds of responses the chatbot can provide to its users. For example, you might wish to restrict the chatbot from giving discriminatory or offensive answers. How would you determine which answers are appropriate or inappropriate for users? Would you decide unilaterally, believing that you already know what is best for them? Would you gather input on users' preferences in order to more accurately reflect their needs? If your chatbot serves users in a variety of regional and cultural contexts, would you adapt its responses to particular contexts?

Ultimately, every AI system embodies design choices that reflect specific decisions about preferences and values, yet the designers themselves may not be in a position to decide unilaterally which values are most appropriate. To properly align these systems with human preferences, participation is needed from the people and communities the AI system affects. For this reason, participatory methods need to form the basis of a human-aligned AI.

How to Facilitate Child-Aligned AI?

AI-based systems rely on data to produce their valuable output, which is often itself more data. As a data-driven technology, AI's usefulness and power are therefore reliant on the (responsible) data upon which it is trained.
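To make the design choice concrete, here is a minimal, hypothetical sketch (not an RD4C tool, and far simpler than how production systems actually work) of the participatory idea: instead of the designer deciding unilaterally, community-surveyed votes from children, caregivers, and educators are aggregated into a simple response policy for the chatbot.

```python
# Toy illustration of participatory alignment: aggregate surveyed
# preferences into an allow/block policy for chatbot response categories.
# All category names and votes below are invented for the example.
from collections import Counter


def aggregate_preferences(votes):
    """Aggregate survey ballots ('allow'/'block') per response category.

    A category is blocked when a majority of respondents vote 'block';
    otherwise it is allowed.
    """
    policy = {}
    for category, ballots in votes.items():
        tally = Counter(ballots)
        policy[category] = "block" if tally["block"] > tally["allow"] else "allow"
    return policy


def may_respond(category, policy):
    """Return whether the chatbot may answer a prompt in this category.

    Categories the community was never consulted on default to caution.
    """
    return policy.get(category, "block") == "allow"


# Hypothetical ballots gathered from children, caregivers, and educators.
votes = {
    "homework_help": ["allow", "allow", "allow", "block"],
    "personal_data_requests": ["block", "block", "allow", "block"],
}
policy = aggregate_preferences(votes)
```

In this toy model the policy falls out of the aggregated votes rather than the designer's assumptions, and unfamiliar categories default to caution. Real alignment techniques, such as training models against large-scale preference data, are far more involved, but the participatory logic is the same.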
The Responsible Data for Children initiative was formed out of a desire to serve the rights of children in such data-oriented contexts. Its principles are flexible by design and applicable across the broad variety of contexts in which data is used. To see how our principles and tools apply to AI, consider the sample principle below.

Participatory / People-Centric: For AI systems to be aligned with and prioritize children's needs and preferences, there is a need to bolster efforts for greater data literacy in children, so that children themselves can better voice their concerns and recommendations, and to foster participation mechanisms that allow children and young people to actively engage. We should promote measures to educate children, parents, and related stakeholders on AI, the nature of consent, children's rights and protections, and risks to children. These measures ought to be as inclusive as possible, giving voice to children from communities around the world, including the Global South.

Incorporating these participatory, democratic approaches into the alignment of even the largest-scale AI is not a pipe dream. One private company, Anthropic, is currently developing a process to align large language models by incorporating public input on high-level ethical principles. Incidentally, this process results in a less biased model, with no appreciable degradation in math or language performance and no shift in perceived political stance. If properly leveraged, such approaches could allow all members of society, even the most vulnerable, to have a say in determining how technology will affect their future. They should be harnessed specifically to incorporate the values and goals of children into AI. Various vehicles to engage and interface with youth on emerging policy have already been developed, such as the Youth Solutions Lab's participatory workshops, which gather sentiments, insights, and preferences from youth on pressing policy issues.
Crowdsourced input gathered from youth on specific moral preferences for artificial agents also shows promise (see MIT Media Lab's Moral Machine project). These participatory approaches could open a path to creating direct linkages between the high-level principles informing AI model development and the preferences of the children and youth who will eventually interact with those models. In tandem with these methods, boosting data literacy and communicating AI concepts to the public, especially youth, in a relatable, concrete manner will aid effective discussion and participation.

Join the Responsible Data for Children Efforts

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges affecting children around the world. We need thought partners to dedicate more substantial research to the field of AI alignment and child rights, empowerment, and self-determination. If you would like to collaborate, please reach out to [email protected].

Image by Jamie Street on Unsplash, licensed under CC0.
New Publication
Testimonials from the Field

Since the beginning of the year, the Responsible Data for Children (RD4C) initiative has been actively seeking feedback to refine its focus for 2024. Following the insightful recommendations gathered during our recent RD4C dinner event, we have diversified our outreach, collecting valuable testimonials from field practitioners we previously collaborated with in different contexts.

In these interviews, Mariana Rozo-Paz (Policy, Research, and Project Management Lead at Datasphere Initiative), Risdianto Irawan (Data Specialist, UNICEF Indonesia), and Lisa Zimmermann (Chief of Child Protection, UNICEF Madagascar), working in Latin America, Asia, and Africa respectively, stress the importance of engaging with young people. They emphasize the value of working particularly in the Global South, where the majority of adolescents and youth are located, to use technology for their well-being while minimizing harm. Our interviewees recognise a need for increased capacity building among practitioners working with data for and about children, particularly on the foundational principles of responsible data handling and the risks generated by data and data technologies. This is especially true when the data is highly sensitive, such as protection or mental health related data.

These firsthand accounts offer a unique perspective on the real-world challenges practitioners face, guiding us toward areas where our initiatives can make the most meaningful difference. Join us as we explore these testimonials, shedding light on our past work as well as the path forward for RD4C this year.

***

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges.
The themes that emerged from our engagements with practitioners in the field will help guide a yearly Responsible Data for Children strategic dialogue to set priorities and select projects for the year. Our next strategy session is in mid-April; please do not hesitate to reach out with suggestions at [email protected]! If you'd like to stay up to date with us as we continue this journey, we encourage you to sign up for our newsletter.
New Publication
Responsible Data for Children in the Context of AI and Emerging Technology

“Data and AI can help deliver vital services to those most in need. It can increase accessibility to education, healthcare, and other vital services for child and human flourishing.”

“Data and AI have the potential to widen existing inequalities within and across countries; there is a risk that the rich get richer and the poor get poorer.”

On February 26, 2024, the Responsible Data for Children initiative (co-led by UNICEF's Chief Data Office and The GovLab at New York University) convened a “brain trust” dinner to discuss the future of data and AI for children. Hosted by the Doris Duke Foundation and supported by UNICEF USA, we came together united by optimism that a better future is possible and by a shared belief in the role that data and data technologies can play in enabling every child to reach their full potential. However, we were also united in concern that the steps currently being taken (or not taken) may not put us on the right path: the path to realizing the capacity of data and data technologies for social and public good, to the benefit of all. Together, we identified pressing needs in our efforts to improve child well-being, along with possible solutions for the Responsible Data for Children initiative to carry forward through its work in countries across the globe.

New Tech, Old Problems

When people talk about emerging technology, there is a tendency to focus on new problems to solve with new, ‘exciting’ tools. But participants seemed to agree that a better use of new technology is to solve the old, protracted, difficult problems we have been struggling with for years: inclusion, child well-being, access to quality education and healthcare. Many countries are more concerned with how data systems must be enhanced to improve civil registration, digitized administrative systems, and more.
All these seemingly basic changes have a profound impact on the lives of children and young people. “It’s important to think about the basics,” said one expert. “And the basics are finding kids at birth and following them as they grow up [...] ensuring they are protected throughout the process. Tech must build upon existing foundations to contribute to success.”

“The theme I hear from this conversation is that we should not always ‘shoot for the stars.’ We should use tech to deal with what we are struggling to do currently and should have done before, instead of immediately leaping to new problems.”

To stop getting distracted by ‘sugar-coated shiny objects,’ we should be concrete about what new technology can do for old problems.

Data as an Enabler

“What do you think are the most exciting opportunities opened up by data and technology for children who are being born today?” This was the hors d’oeuvre served to brain trustees, who brought to the table expertise in statistics, technology, product development, international development, and child well-being from National Statistical Offices, Permanent Missions to the UN, philanthropies, and technology companies. The central idea that “data is an enabler, never a solution in itself” was established early on. “Technology by itself will not do it” either, specified one participant. While new data and technology provide exciting possibilities—for example, using low-cost audio doppler systems to identify high-risk pregnancies and improve maternal health outcomes—they are not a cure-all. Both data and technology need to be embedded in a supportive ecosystem: for example, trained staff to use these low-cost doppler systems, as well as structures and processes for referrals and care where needed.

Responsible Governance

Data and technology are tools like any other, neither inherently good nor bad.
There is a critical need to identify ways to use technology for social and public good and to minimize the ways it might be risky, harmful, destructive, or a driver of widening inequalities.

Securing Transparency and Building Trust

For several participants, a major gap in data and technology systems is the scarcity of trustworthy tools, stemming from a lack of understanding of what went into them. We cannot derive meaningful answers from new tools without understanding what fed them (both the quality and the origins of the data) or what assumptions undergird them. Drawing on their experience working on welfare in the United States, one participant shared that they always asked themselves, “What are the biases within the system [...] that appear objective but in fact embed norms and value judgements?” Efforts to build trust in new technology systems could therefore create significant harm without substantial changes in how those systems are designed, developed, and deployed.

This concern is particularly acute with regard to generative AI. Participants argued that “people are being given agency to use machines with blind trust. They are being granted power over something that only ‘sounds’ like it is operating correctly [because] if a machine gives us information that sounds intelligent, it must be true.” “We cannot let [generative AI] make decisions about children when there’s so little transparency about how results emerge.” This unease about using generative AI without knowing how it works and what guardrails are in place led us to a conversation about regulations and accountability.

Strengthening Regulations

“To put it bluntly, a lack of governance kills kids. [...] Technology is sexy, but we need to focus on unsexy stuff,” said one participant. Our laws and regulatory mechanisms have not evolved to guarantee that AI tools are designed and used responsibly.
To do good, we need principled and flexible rules, checks, and balances that reflect our fast-evolving reality. Without them, we risk deepening inequities, as communities with the resources to use (and produce) new technologies will use them to help themselves while others languish. To this point, one participant flagged the importance of self-governance and urged everyone, in particular tech companies, to help the underserved so as to reduce systemic gaps, both globally and within communities. Similar to what statistical offices do, tech companies could state the origins of the data used by their technologies and include a level of confidence or margin of error in the results their systems produce. This would help companies explain how their systems generate answers or from what datasets those answers emerge. “The rules themselves should be shaped in such a way as to give people the most choices throughout the process. It’s like voting. You go more than once.”

Fostering Participation, Agency and Literacy

In this regard, participants emphasized the importance of engaging young people in the conversation, ensuring they have a finger in the pie. “We need to hear their voices and understand their concerns.” “Including young people in the conversation [about how tech is used] has the possibility to democratize technology.” For their participation to be meaningful, some urged the need to make both adults and children more literate about technology. Additionally, while high-tech systems can erode our ability to critically interrogate them (at the risk of turning us into passive consumers), reinforcing skills like programming and critical thinking can stem that loss. One participant compared skills training to math, noting that “kids need to learn to do math themselves before they are given a calculator; otherwise they’ll only ever rely on the calculator and fail to grasp the foundational concepts.” Meaningful agency will not be possible until these problems are addressed.
“Tech gives tools to young people to expand their capacity. We should think about how we can give them [meaningful] agency in these conversations.”

***

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges. The themes that emerged during the discussion will guide a yearly Responsible Data for Children strategic dialogue to set priorities and select projects for the year. But doing this alone is not a piece of cake… Our next strategy session is in mid-April; please do not hesitate to reach out with suggestions at [email protected]! If you’d like to stay up to date with us as we continue this journey, we encourage you to sign up for our newsletter.

Photo by Brooke Lark on Unsplash
New Publication
Learning Package for Responsible Data for Refugee Children

This piece was originally posted on The GovLab website in coordination with UNICEF and UNHCR. To view it in its original format, please follow the link here.

From 29 to 31 January 2024, UNICEF, UNHCR, and The Governance Lab at New York University hosted three 90-minute webinars on ways practitioners can support the well-being of children through data, highlighting how development and humanitarian practitioners around the world can reinforce data responsibility principles and practices in their daily work with and for children. Led by The GovLab’s Stefaan Verhulst, UNICEF’s Krisana Messerli and Eugenia Olliaro, and UNHCR’s Rachelle Cloutier, participants learned:

Why is data important for refugee children?
What is data responsibility, and why does it matter?
Why does data responsibility matter for refugee children?
What are the principles for responsible data management in humanitarian settings?
How can data responsibility principles be implemented in practice across the data and information management process?

Following the completion of this series, we are happy to publish a final “learning package” that compiles all of the components used for this event. This learning package includes:

A recording of the 30 January 2024 session of the webinar.
The PowerPoint presentation used for the webinar.
A collection of additional resources that can guide participants interested in further advancing their understanding of responsible data for refugee children.

The additional resources are available here.

***

We hope these resources are of interest and use to you. If you have any questions, please do not hesitate to contact us at [email protected].
Check out other articles in our Blog Section.
About us
The RD4C initiative is a joint endeavor between UNICEF and The GovLab at New York University to highlight and support best practice in our work; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management.
The work is intended to address practical considerations across the data lifecycle, including routine data collection and one-off data collections, and complements work on related topics being addressed by the development community, such as guidance on specific data systems and technologies, technical standardization, and digital engagement strategies.
Additional tools and materials are coming soon and will be posted on this website as they become available. Join the conversation to receive regular updates.