The RD4C Principles
Principles to guide responsible data handling toward saving children’s lives, defending their rights, and helping them fulfill their potential from early childhood through adolescence.
Read about our principles at
Principles Page
From our blog
New developments from RD4C.
New Publication
Why Responsible Data for Children matters when designing a child wellbeing monitoring program

In our increasingly datafied world, the responsible handling of data for and about children has become a critical issue. UNICEF, governments, and their partners acknowledge the vital importance of acting to advance children’s well-being while preserving their rights and agency. We engage with partners around the world to inform practices around data (re)use for and about children.

In June, the RD4C team travelled to Beijing, where UNICEF China organized a social policy salon. Attendees discussed how to better understand and respond to children's development needs through data in the field of social protection. Representatives from the Ministry of Civil Affairs and six local civil affairs departments attended the meeting and shared their experiences in tracking the wellbeing of low-income populations and developing a child module.

Reflecting on this recent policy forum in Beijing as well as past experiences in several other contexts, we have identified three broad clusters of rationales that highlight the significance of data responsibility for children in the context of government-owned, administrative data systems.

1. Building Trust to Enable and Improve Service Delivery

Trust is foundational in any relationship, especially when it comes to children and their data. Responsible data management fosters trust among children, parents, and the broader community, enabling more effective service delivery. Trust in data practices means that individuals and communities believe their data is being handled securely, ethically, and efficiently. For children, whose understanding of data and its implications may be limited, trust is even more vital. When parents and guardians believe that their children's data is being managed responsibly, they are more likely to engage with and support services and programs that require them to provide their own and their children’s data.
Without trust in how data is handled, children and their caregivers can adopt “privacy-protective behaviour”. This behaviour occurs when individuals or communities take actions to protect their privacy, often by limiting their engagement with services they perceive as untrustworthy, which can mean that children don’t receive the services they need. By establishing and maintaining trust through responsible data practices, organisations can prevent privacy-protective behaviour and ensure that children and communities have access to essential services.

Along the same lines, responsible data practices support an organisation's reputation as a responsible player, which equally contributes to trust in the institution. In an era where data breaches and privacy scandals can significantly damage credibility and trust, demonstrating a commitment to responsible data management is crucial. This reputation can enhance relationships with stakeholders, including parents and caregivers, communities, and funding bodies. Transparency in data practices and a proactive approach to safeguarding children's data can significantly bolster an organisation's standing in the community, at both the national and international levels.

2. Enhancing Performance through Better System Design

Another main objective of implementing more responsible data practices is improving the performance of data systems and the quality of related services. Responsible data practices require organisations to think critically about their data management processes and what the data will be used for, to ensure their efforts make a (responsible) difference and drive impact. For that reason, a clear purpose and information needs should be defined from the design stage, though it is never too late to improve responsible data practices with children’s best interests in mind.
Defining an overarching purpose also means that the insights gained through data can be used for policy making and for improving the quality of service delivery. For example, data collected in the social protection sector can be used to identify children in need of support, to track progress, and to determine eligibility for additional services and interventions tailored to their specific needs. By monitoring progress in purpose-driven systems, social protection programs can track the effectiveness of interventions, adjust strategies as needed, and ensure continuous improvement. Additionally, data allows for efficient resource allocation, ensuring that social assistance, basic health insurance, and other essential support reach the children who need them most, thereby maximising the overall impact of social protection initiatives.

3. Protecting Children and Safeguarding Them from Harm

The protection of children and their rights is a fundamental rationale for responsible data practices. In the digital age we live in, children are particularly vulnerable to the consequences of data misuse and its potential harms, making it imperative to prioritise their safety. Data about children’s personal lives is particularly sensitive. Misuse of this data can lead to significant harm, including identity theft, exploitation, discrimination, psychological impact, population fatigue from providing data, and, as previously mentioned, a loss of trust in service providers. Another potential harm is the missed use of data for social good: inefficiencies in handling data can mean that opportunities for intervention, for example at the policy level, go unidentified and critical needs go unaddressed. Responsible data practices aim to mitigate these risks by implementing robust safeguards and ensuring that children's data is handled with the utmost care.
International conventions, such as the United Nations Convention on the Rights of the Child (UNCRC), mandate the protection of children's privacy and data. Adhering to these legal frameworks is not only a compliance issue but also a moral imperative to promote and protect children's well-being.

Implementing Responsible Data Practices in Government-Owned Data Systems: Key Considerations

The rationales above highlight the importance of implementing responsible data principles, especially when handling data for and about children, and offer practical insights into how these principles can be translated into practices and policies. By exploring the three broad rationales (trust and reputation, performance of services and related data systems, and protection from harm) we highlighted the multifaceted benefits of prioritising responsible data for children. Trust enables effective service delivery by preventing privacy-protective behaviour, and supporting an organisation's reputation as a responsible player fosters this trust. A focus on data responsibility by design enhances system performance from the outset, with a clear purpose and defined impact paths for both the programme and the data management system in question. The insights gained from data can be used by public services and policy makers to improve the quality of their offering, with the best interests of children and their families at heart.

***

The bottom line: responsible data management for children goes beyond mere compliance with existing regimes; it requires a multifaceted approach intersecting with trust, performance, protection, and reputation, with a positive impact on children’s lives at the centre. If you’d like to follow the progress of this work, sign up for our newsletter here for further updates.
Guest Blog
Empowering Youth With Child-Aligned AI

Guest blog by John-Matthew Conely (as part of his internship at the Responsible Data for Children Initiative); RD4C will delve deeper into AI and how it relates to the RD4C Principles.

Children are growing up in a world where artificial intelligence (AI) is increasingly part of their daily life: they interact with AI-powered technologies when they use social media, smart toys, and smart devices, or watch online videos; their lives are affected by automated decision making in educational assessment; and their personal development is likely influenced by the algorithms that permeate our digital environment. AI, defined by the OECD as machine-based systems that transform inputs into outputs, such as content or predictions, that influence the physical or virtual environment, can be highly positive for children, creating opportunities to flourish and grow sustainably while supercharging education and medical care.

On the other hand, AI also has the potential to cause harm to children. Social media algorithms have increasingly been shown to negatively affect children’s mental health, while AI assistants can impair children’s emotional and cognitive development. AI-powered IoT (Internet of Things) devices, even toys, can violate children’s privacy by harvesting their data without their consent. Generative AI has the potential to flood the internet with harmful content, including child-abuse material, further endangering children’s safety.

A growing recognition of AI threats and opportunities has led to greater public sector commitments to governing AI ethically, but such initiatives are still in early stages and place minimal focus on children. For AI to truly benefit children, AI systems must be designed, developed, and deployed with the well-being of children explicitly in mind.
That is to say, the outputs and applications of AI must be aligned with values and norms that recognize the preferences of children and their caregivers, and their right to privacy, safety, and freedom of thought. This process of determining AI values is also known as AI alignment.

What is AI Alignment?

The AI alignment problem is concerned with ensuring that AI delivers output in a manner consistent with human goals, values, and preferences. To better illustrate what is meant here by “alignment”, consider designing a hypothetical AI chatbot. In developing this chatbot, you (the designer) would like to make ethical decisions about the kinds of responses the chatbot can provide to its users. For example, you might wish to restrict the chatbot from giving discriminatory or offensive answers. How would you determine which answers are appropriate or inappropriate for users? Would you decide unilaterally, believing that you already know what is best for them? Would you seek to gather input on users’ preferences, in order to more accurately reflect their needs? If your chatbot serves users in a variety of regional and cultural contexts, will you adapt the chatbot’s responses for particular contexts?

Ultimately, every AI embodies design choices that reflect specific decisions made about preferences and values, yet the designers themselves may not be in a position to arbitrarily decide which values are most appropriate. To properly align these systems with human preferences, participation is needed from the people and communities that the AI system affects. For this reason, participatory methods need to form the basis of a human-aligned AI.

How to Facilitate Child-Aligned AI?

AI-based systems rely on data to produce their valuable output. As a data-driven technology, AI’s usefulness and power are therefore reliant on the (responsible) data upon which it is trained.
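The chatbot thought experiment above can be sketched in code. The snippet below is a minimal, hypothetical illustration only: the function names and panel ratings are invented for this post, not a real alignment API. It shows one participatory option from the questions above: candidate responses are rated by a community panel (for example caregivers and young people), and the chatbot only returns answers the panel deems appropriate, rather than the designer deciding unilaterally.

```python
# Hypothetical sketch of participatory response selection.
# All names and ratings are illustrative, not a real system.

def aggregate_preferences(ratings):
    """Average per-response appropriateness ratings (0-1)
    collected from a participatory community panel."""
    return {resp: sum(scores) / len(scores)
            for resp, scores in ratings.items()}

def select_response(candidates, community_scores, threshold=0.6):
    """Return the highest community-rated candidate, or a safe
    fallback if no candidate clears the appropriateness threshold."""
    scored = [(community_scores.get(c, 0.0), c) for c in candidates]
    acceptable = [(s, c) for s, c in scored if s >= threshold]
    if not acceptable:
        return "I'm not able to answer that."
    return max(acceptable)[1]

# Panel ratings for two candidate answers to the same prompt
ratings = {
    "Here's some age-appropriate guidance...": [0.9, 0.8, 1.0],
    "A dismissive, snarky reply": [0.2, 0.1, 0.3],
}
scores = aggregate_preferences(ratings)
print(select_response(list(ratings), scores))
```

A real deployment would of course need far more (representative sampling, cultural context, and rating quality control), but the design choice is the point: the values gating the chatbot’s output come from the affected community, not from the designer alone.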
The Responsible Data for Children initiative was formed out of a desire to serve the rights of children in such data-oriented contexts. Its principles are flexible by design and applicable in a broad variety of contexts in which data is used. To see how our principles and tools apply to AI, consider the sample principle below.

Participatory / People-Centric: For AI systems to be aligned with and prioritize children’s needs and preferences, there is a need to bolster efforts for greater data literacy in children, so that children themselves can better voice their concerns and recommendations, and to foster participation mechanisms that allow children and young people to actively engage. We should promote measures to educate children, parents, and related stakeholders on AI, the nature of consent, children’s rights and protections, and risks to children. These measures ought to be as inclusive as possible, giving voice to children from communities around the world, including the Global South.

Incorporating these participatory, democratic approaches into the alignment of even the largest-scale AI is not a pipe dream. One private company, Anthropic, is currently developing a process to align large language models by incorporating public input on high-level ethical principles. Notably, this process results in a less biased model, with no appreciable degradation in math or language performance and no shift in perceived political stance. If properly leveraged, such approaches could allow all members of society, even the most vulnerable, to have a say in determining how technology will affect their future. Such approaches should be harnessed specifically to incorporate the values and goals of children into AI. Various vehicles to engage and interface with youth on emerging policy have already been developed, such as the Youth Solutions Lab’s participatory workshops, which gather sentiments, insights, and preferences from youth on pressing policy issues.
Crowdsourced input gathered from youth on specific moral preferences for artificial agents also shows promise (see MIT Media Lab’s Moral Machine project). These participatory approaches could open a path to creating direct linkages between the high-level principles informing AI model development and the preferences of the children and youth who will eventually interact with those models. In tandem with these methods, boosting data literacy and communicating AI concepts to the public, especially youth, in a relatable, concrete manner will aid effective discussion and participation.

Join the Responsible Data for Children Efforts

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges affecting children around the world. We need thought partners to dedicate more substantial research to the field of AI alignment and child rights, empowerment, and self-determination. If you would like to collaborate, please reach out to [email protected].

Image by Jamie Street | Unsplash is licensed under CC0.
New Publication
Testimonials from the Field

Since the beginning of the year, the Responsible Data for Children (RD4C) initiative has been actively seeking feedback to refine its focus for 2024. Following the insightful recommendations gathered during our recent RD4C dinner event, we have diversified our outreach, collecting valuable testimonials from field practitioners we previously collaborated with in different contexts.

In these interviews, Mariana Rozo-Paz (Policy, Research, and Project Management Lead at Datasphere Initiative), Risdianto Irawan (Data Specialist, UNICEF Indonesia), and Lisa Zimmermann (Chief of Child Protection, UNICEF Madagascar), working in Latin America, Asia, and Africa respectively, stress the importance of engaging with young people. They emphasize the value of working particularly in the Global South, where the majority of adolescents and youth are located, to use technology for their well-being while minimizing harm. Our interviewees recognise a need for increased capacity building among practitioners working with data for and about children, particularly on the foundational principles of responsible data handling and the risks generated by data and data technologies. This is especially true when the data is particularly sensitive, such as protection or mental health related data.

These firsthand accounts offer a unique perspective on the real-world challenges practitioners face, guiding us towards the areas where our initiatives can make the most meaningful difference. Join us as we explore these testimonials, shedding light on our past work as well as the path forward for RD4C this year.

***

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges.
The themes that emerged from our engagements with practitioners in the field will help guide a yearly Responsible Data for Children strategic dialogue to set priorities and select projects for the year. Our next strategy session is in mid-April; please do not hesitate to reach out with suggestions at [email protected]! If you’d like to stay up to date with us as we continue this journey, we encourage you to sign up for our newsletter.
New Publication
Responsible Data for Children in the Context of AI and Emerging Technology

“Data and AI can help deliver vital services to those most in need. It can increase accessibility to education, healthcare, and other vital services for child and human flourishing.”

“Data and AI have the potential to widen existing inequalities within and across countries; there is a risk that the rich get richer and the poor get poorer.”

On February 26, 2024, the Responsible Data for Children initiative (co-led by UNICEF’s Chief Data Office and The GovLab at New York University) convened a “brain trust” dinner to discuss the future of data and AI for children. Hosted by the Doris Duke Foundation and supported by UNICEF USA, we came together united by optimism that a better future is possible and a shared belief in the role that data and data technologies can play in enabling every child to reach their full potential. However, we were also united in concern that the steps currently being taken (or not taken) may not put us on the right path: the path to realizing the capacity of data and data technologies for social and public good, to the benefit of all. Together, we identified pressing needs in our efforts to improve child wellbeing, along with possible solutions for the Responsible Data for Children initiative to carry forward through its work in countries across the globe.

New Tech, Old Problems

When people talk about emerging technology, there is a tendency to focus on new problems to solve with new ‘exciting’ tools. But participants seemed to agree that a better use of new technology is to solve the old, protracted, difficult problems we have been struggling with for years: inclusion, child well-being, access to quality education, healthcare. Many countries are more concerned with how data systems can be enhanced to improve civil registration, digitize administrative systems, and more.
All these seemingly basic changes have a profound impact on the lives of children and young people. “It’s important to think about the basics,” said one expert. “And the basics are finding kids at birth and following them as they grow up [...] ensuring they are protected throughout the process. Tech must build upon existing foundations to contribute to success.”

“The theme I hear from this conversation is that we should not always ‘shoot for the stars.’ We should use tech to deal with what we are struggling to do currently and should have done before, instead of immediately leaping to new problems.”

To stop getting distracted by ‘sugar-coated shiny objects’, we should be concrete about what new technology can do for old problems.

Data as an Enabler

“What do you think are the most exciting opportunities opened up by data and technology for children who are being born today?” This question served as an hors d’oeuvre for brain trustees who brought to the table expertise in statistics, technology, product development, international development, and child well-being from National Statistical Offices, Permanent Missions to the UN, philanthropies, and technology companies.

The central idea that “data is an enabler, never a solution itself” emerged early on. “Technology by itself will not do it” either, specified one participant. While new data and technology provide exciting possibilities, for example using low-cost audio doppler systems to identify high-risk pregnancies and improve maternal health outcomes, they are not a cure-all. Both data and technology need to be embedded in a supportive ecosystem: for example, staff trained to use these low-cost doppler systems, as well as structures and processes for referrals and care where needed.

Responsible Governance

Data and technology are tools like any other, neither inherently good nor bad.
There is a critical need to identify ways to use technology for social and public good and to minimize the ways it might be risky, harmful, destructive, or a driver of widening inequalities.

Securing Transparency and Building Trust

For several participants, a major gap around data and technology systems is the scarcity of trustworthy tools, stemming from a lack of understanding of what went into them. We cannot derive meaningful answers from new tools without understanding what fed them (both the quality and the origins of the data) or what assumptions undergird them. Drawing on their experience working on welfare in the United States, one participant shared that they always asked themselves, “What are the biases within the system [...] that appear objective but in fact embed norms and value judgements?” Efforts to build trust in new technology systems could therefore create significant harm if there are not significant changes in how those systems are designed, developed, and deployed.

This concern is particularly acute with regard to generative AI. Participants argued that “people are being given agency to use machines with blind trust. They are being granted power over something that only ‘sounds’ like it is operating correctly [because] if a machine gives us information that sounds intelligent, it must be true.” “We cannot let [generative AI] make decisions about children when there’s so little transparency about how results emerge.” This unease about the use of generative AI without knowing how it works and what guardrails are in place led us to a conversation about regulations and accountability.

Strengthening Regulations

“To put it bluntly, a lack of governance kills kids. [...] Technology is sexy, but we need to focus on unsexy stuff”, said one participant. Our laws and regulatory mechanisms have not evolved to guarantee that AI tools are designed and used responsibly.
To do good, we need principled and flexible rules, checks, and balances that reflect our fast-evolving reality. Without them, we risk deepening inequities, as communities who have the resources to use (and produce) new technologies will use them to help themselves, while others languish. To this point, one participant flagged the importance of self-governance and urged everyone, in particular tech companies, to help the underserved so as to reduce systemic gaps, both globally and within communities. Similar to what Statistical Offices do, tech companies could state the origins of the data used by their technologies and include a level of confidence or margin of error in the results their systems produce. This would help companies explain how their systems generate answers and from which datasets those answers emerge. “The rules themselves should be shaped in such a way as to give people the most choices throughout the process. It’s like voting. You go more than once.”

Fostering Participation, Agency and Literacy

In this regard, participants emphasized the importance of engaging young people in the conversation, ensuring they have a real stake in it. “We need to hear their voices and understand their concerns.” “Including young people in the conversation [of how tech is used] has the possibility to democratize technology.” For their participation to be meaningful, some urged the need to make both adults and children more literate about technology. Additionally, while high-tech systems can erode our ability to critically interrogate them (at the risk of turning us into passive consumers), reinforcing skills like programming and critical thinking can stem that loss. One participant compared skills training to math, noting that “kids need to learn to do math themselves before they are given a calculator, otherwise they’ll only ever rely on the calculator and fail to grasp the foundational concepts.” Meaningful agency will not be possible until these problems are addressed.
“Tech gives tools to young people to expand their capacity. We should think about how we can give them [meaningful] agency in these conversations.”

***

As we enter our fifth year, the Responsible Data for Children initiative continues to pioneer ways to address new and emerging data challenges. The themes that emerged during the discussion will guide a yearly Responsible Data for Children strategic dialogue to set priorities and select projects for the year. But we cannot do this alone. Our next strategy session is in mid-April; please do not hesitate to reach out with suggestions at [email protected]! If you’d like to stay up to date with us as we continue this journey, we encourage you to sign up for our newsletter.

Photo by Brooke Lark on Unsplash
Check out other articles in our
Blog Section
About us
The RD4C initiative is a joint endeavor between UNICEF and The GovLab at New York University to highlight and support best practice in our work; identify challenges and develop practical tools to assist practitioners in evaluating and addressing them; and encourage a broader discussion on actionable principles, insights, and approaches for responsible data management.
The work is intended to address practical considerations across the data lifecycle, including routine and one-off data collection, and it complements work on related topics being addressed by the development community, such as guidance on specific data systems and technologies, technical standardization, and digital engagement strategies.
Additional tools and materials are coming soon and will be posted on this website as they become available. Join the conversation to receive regular updates.