Guest Blog
Opening the Black Box: Leveraging Youth Power to Decode AI in Humanitarian Crises
Posted on 6 May 2025 by Marine Ragnet and Aiza Shahid Qureshi
As young researchers deeply interested in the intersection of technology and social impact, we have been investigating the use of artificial intelligence (AI) in humanitarian and peacebuilding efforts. This exploration has revealed a complex landscape of significant promise and pressing ethical concerns.
At a time when AI is rapidly reshaping how crises are predicted, managed, and responded to, it is crucial for young voices to be heard on the responsible development and governance of AI, particularly in conflict-affected contexts.
Risks and Opportunities of AI in Humanitarian Settings
AI offers extraordinary opportunities to enhance humanitarian and peacebuilding efforts and accelerate the delivery of support to beneficiaries. For instance, machine learning (ML) algorithms can analyze vast amounts of data to predict potential conflict hotspots, facilitating more proactive peacekeeping interventions.
AI-powered chatbots can provide mental health support to refugees, bridging critical gaps in care. Natural language processing (NLP) tools can break down language barriers in crisis communication, and AI-powered early warning systems can analyze online news articles and social media posts to predict the likelihood of violent events in a given area. However, these technologies also carry significant risks, especially when deployed in vulnerable communities. Our research has identified several key areas of concern:
Algorithmic Bias: AI models trained on non-representative data can perpetuate and amplify existing biases, leading to discriminatory outcomes in aid distribution or conflict analysis. A 2021 study found that widely used NLP models exhibited significant biases against certain dialects and linguistic variations, producing higher false positive rates for hate speech in texts written by marginalized communities. Evaluating popular models like BERT and RoBERTa on a dataset of Arabic tweets, the study found that they struggled to classify hate speech accurately in dialectal Arabic and often flagged innocuous phrases as offensive. (A minimal sketch of this kind of dialect-level audit appears after this list.)
Privacy and Consent: The collection of sensitive data for AI applications raises serious privacy concerns, especially in contexts where individuals may feel pressured to provide personal information to access vital services. The World Food Programme's implementation of the SCOPE system in Uganda's Bidi Bidi refugee settlement in 2018 highlights these issues. Many refugees reported feeling compelled to provide their biometric data to receive food aid, raising questions about forced consent among people living in insecure environments.
Lack of Transparency: Many AI systems operate as "black boxes," making it difficult for affected individuals to understand or contest decisions made about them. This opacity is particularly problematic in humanitarian contexts, where decisions can have life-altering consequences. The Dutch government's use of the algorithmic risk assessment system SyRI to detect welfare fraud, which a Dutch court found in 2020 to violate human rights, is one example of how opaque AI systems in social services can harm their intended beneficiaries.
Erosion of Human Agency: Over-reliance on AI in humanitarian contexts risks undermining participatory decision-making processes, sidelining the communities these efforts aim to support.
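To make the dialect-bias finding above concrete, the short sketch below shows how such an audit can be run: it computes a hate-speech classifier's false positive rate separately for each dialect group. The texts, labels, and predictions here are invented for illustration; this is not the cited study's code or data.

```python
# Minimal sketch: measuring false-positive-rate disparity across dialects.
# Assumes a hypothetical dataset of texts labeled for hate speech (y_true),
# model predictions (y_pred), and a dialect tag per text (groups).
from collections import defaultdict

def false_positive_rate_by_group(y_true, y_pred, groups):
    """Return FPR per group: the share of non-hateful texts flagged as hateful."""
    fp = defaultdict(int)   # non-hateful texts wrongly flagged, per group
    neg = defaultdict(int)  # all non-hateful texts, per group
    for truth, pred, group in zip(y_true, y_pred, groups):
        if truth == 0:            # ground truth: not hate speech
            neg[group] += 1
            if pred == 1:         # model flagged it anyway
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy example with made-up labels for two dialect groups:
y_true = [0, 0, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
groups = ["dialectal", "standard", "dialectal", "standard",
          "dialectal", "standard", "dialectal", "standard"]
print(false_positive_rate_by_group(y_true, y_pred, groups))
# A markedly higher FPR for one dialect group is exactly the kind of
# disparity the study describes.
```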
Empowering Youth Through AI Literacy
To navigate this complex landscape, it is crucial that young people become better informed about AI technologies and their implications. This goes beyond basic digital skills to a deeper understanding of how AI systems work and where they fall short, including core techniques such as machine learning, neural networks, and deep learning. Young people can take part in identifying potential biases in AI applications and learn to mitigate them through diverse data collection and algorithmic fairness measures. AI literacy also involves awareness of data rights and privacy implications, including concepts like data minimization, purpose limitation, and the right to explanation under regulations such as the GDPR.
Educational institutions and youth organizations should prioritize AI literacy programs that equip young people to understand and engage with AI systems. Participatory workshops analyzing real-world AI systems used in humanitarian contexts would be particularly valuable: youth could examine UNHCR's Project Jetson, which uses machine learning to forecast forced displacement, and discuss gaps in its governance, its ethical implications, and ways to strengthen protections for the people it affects.
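To give a flavor of what workshop participants might build and then critique, here is a deliberately toy forecasting sketch. The monthly figures and the two-lag linear model are invented for discussion purposes only; they do not represent Project Jetson's actual data, features, or methods.

```python
# Illustrative only: a toy displacement forecast from lagged monthly counts.
# All numbers below are invented; real forecasting systems use far richer
# data and models than this two-feature linear regression.
from sklearn.linear_model import LinearRegression

# Hypothetical monthly arrivals at a settlement (12 months of made-up data).
arrivals = [1200, 1350, 1100, 1800, 2400, 2100, 1900, 2600, 3100, 2800, 3300, 3600]

# Features: the previous two months' counts; target: the next month's count.
X = [[arrivals[i - 2], arrivals[i - 1]] for i in range(2, len(arrivals))]
y = [arrivals[i] for i in range(2, len(arrivals))]

model = LinearRegression().fit(X, y)
next_month = model.predict([[arrivals[-2], arrivals[-1]]])[0]
print(f"Forecast for next month: {next_month:.0f} arrivals")
# Workshop prompts: what happens to the people this model under-forecasts?
# Who can contest a resource allocation driven by this single number?
```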
Youth-Led AI Governance: From Consultation to Co-Creation
Young people shouldn't just be subjects of AI governance. We should be active participants who help shape it. Organizations developing AI for humanitarian use should establish youth advisory boards with real decision-making power. Beyond traditional policy bodies, youth can influence AI governance through:
Grassroots campaigns raising awareness of AI ethics in humanitarian contexts, such as social media campaigns highlighting the potential risks of biometric data collection in refugee camps
Developing youth-led ethical guidelines for AI in crisis response, drawing inspiration from existing frameworks like the IEEE's Ethically Aligned Design principles
Participating in "algorithmic audits" to assess AI systems for potential bias or harm, using tools like IBM's AI Fairness 360 toolkit or Google's What-If Tool
Creating youth-centric resources on responsible AI development, such as interactive online courses or podcasts discussing real-world case studies of AI ethics in humanitarian contexts
Engaging with tech companies and NGOs on ethical AI design and governance, potentially through internship programs or youth innovation challenges focused on developing ethical AI solutions for humanitarian challenges
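As a concrete starting point for the audit activity above, the sketch below uses IBM's open-source AI Fairness 360 toolkit to compute two standard fairness metrics. The aid-eligibility data and the "group" attribute are hypothetical, invented purely to show the mechanics.

```python
# Sketch of a simple "algorithmic audit" using IBM's AI Fairness 360.
# The aid-eligibility decisions below are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical decisions: 1 = approved for aid, 0 = denied;
# 'group' is a protected attribute (e.g., 1 = majority, 0 = minority).
df = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 0, 1, 0],
    "group":    [1, 1, 1, 1, 0, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["approved"], protected_attribute_names=["group"]
)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"group": 0}],
    privileged_groups=[{"group": 1}],
)

# Disparate impact: the unprivileged group's approval rate divided by the
# privileged group's; values well below 1.0 suggest the system favors one group.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A disparate impact well below 0.8, the common "four-fifths" benchmark, would flag such a system for closer human review.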
Young people bring a distinctly valuable perspective on AI and technology, having grown up in a digital world. We are inheriting a rapidly changing AI landscape in which the technology is being deployed across every field while regulation lags far behind. Governance must keep pace with technological advancement, and we can help mitigate many of the unintended consequences of AI systems that older policymakers and technologists are less equipped to confront.
Ethical AI: Harnessing Youth Innovation
Young people can also advocate for transparency and explainability in AI systems used in crisis response by demanding clear documentation of training data sources and decision-making processes. On the development side, young people can create AI applications that address humanitarian challenges while prioritizing ethics, such as privacy-preserving federated learning systems for health data analysis in refugee camps. Participating in "AI for good" hackathons focused on ethical challenges in peacebuilding can build AI literacy and draw youth into the ethical development of AI; such events can yield AI-powered conflict early warning systems that respect privacy and avoid stigmatization. Young people can also collaborate with researchers to study AI's impact on vulnerable communities through participatory action research projects that involve affected populations and are informed by their experiences.
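To illustrate the federated learning idea mentioned above, here is a minimal sketch of federated averaging (FedAvg) on invented data: each simulated clinic trains a small logistic-regression model on its own records, and only the model weights are shared and averaged, so raw data never leaves the clinic.

```python
# Minimal sketch of federated averaging (FedAvg), the core idea behind the
# privacy-preserving training mentioned above: clinics train locally and
# share only model weights, never raw patient records.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=20):
    """One clinic's local logistic-regression training on its own data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (preds - y) / len(y)   # gradient descent step
    return w

rng = np.random.default_rng(0)
# Hypothetical per-clinic datasets (features and labels stay on-site).
clinics = [(rng.normal(size=(50, 3)), rng.integers(0, 2, size=50))
           for _ in range(4)]

global_w = np.zeros(3)
for _ in range(10):
    # Each clinic refines the current global model on its local data only.
    local_ws = [local_update(global_w, X, y) for X, y in clinics]
    # The server averages the weights; raw records never leave the clinics.
    global_w = np.mean(local_ws, axis=0)

print("Global model weights after 10 rounds:", global_w)
```

A production system would layer secure aggregation and differential privacy on top of this basic loop, since shared model weights alone can still leak information about the underlying data.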
The Road Ahead: Building an Ethical AI Future
As we navigate the complexities of AI in humanitarian contexts, amplifying youth voices is essential. Young people have a vital stake in how these technologies are used in global crisis response and peacebuilding efforts, and we bring digital fluency, diverse perspectives, and a dedication to ethical development to these critical discussions.
By engaging youth as partners in ethical AI governance, we can harness the potential of these powerful technologies while safeguarding human rights, promoting fairness, and upholding the dignity of vulnerable populations. To build a truly ethical AI future, we need sustained collaboration between youth, policymakers, humanitarian organizations, tech companies, and researchers. This means creating ongoing channels for youth input, investing in AI literacy programs, and empowering young innovators—in both fragile contexts and developed nations—to cultivate responsible AI solutions for humanitarian challenges.
About the Authors
Marine Ragnet and Aiza Shahid Qureshi are researchers at NYU, specializing in the examination of AI's impact in fragile contexts. Their research develops a multilevel framework to analyze AI-related risks and opportunities in humanitarian efforts and peacebuilding. Drawing on their interdisciplinary backgrounds, they aim to shape ethical guidelines and governance mechanisms for responsible AI deployment in challenging environments, bridging the gap between technological innovation and humanitarian needs.
(Photo by Salah Darwish on Unsplash)