GEN(der) AI SAFETY

The rapid proliferation of Generative AI (GenAI) technologies has created new challenges for digital wellbeing, particularly for women and girls in the Global South. This Google-awarded research project focuses on the impact of digitally manipulated content, specifically non-consensual synthetic intimate imagery (NCSII) in the form of ‘deepfakes’ and ‘shallowfakes’. This content and its dissemination disproportionately harm already vulnerable groups in these regions. GenAI tools have enabled the creation and widespread circulation of manipulated intimate content that infringes on privacy, limits freedoms, fosters online harassment, and exacerbates gender-based violence. In areas with limited digital literacy, nascent privacy protections, and weak or non-existent legal and policy frameworks, these risks are heightened. Socio-cultural norms in many Global South countries, including victim-blaming and stigmatization, further intensify the harm experienced by survivors, particularly women and LGBTQI+ individuals.

This grant project uses a multi-sited digital ethnography approach to investigate the gendered, racial, socio-economic, and cultural dimensions of NCSII, exploring its creation, dissemination, and impact on digital safety, digital equity, and emotional wellbeing. Through case studies, interviews, and policy analysis in Africa, India, and Mexico, the research aims to develop a ‘Gen(der) AI Safety’ framework that addresses the varied harms caused by GenAI tools in the Global South. The ultimate goal is to provide actionable insights for the responsible design and deployment of GenAI technologies, ensuring that digital tools place the equity, safety, and dignity of vulnerable populations at the forefront. The research will also inform ethical guidelines, safety protections, and design recommendations for AI development, contributing to safer and more inclusive digital spaces.

Research Outputs

  • Arora, P., Bhatia, K. V., & Zarzycka, M. (March 2025). Deepfakes, Real Harm: Building a Women’s Safety-Centered GenAI. Friedrich-Ebert-Stiftung.
  • Huang, W., & Arora, P. (in review; submitted June 2025). Geographies of Hope: Policy Pathways for Gender AI Safety in the Global South. Media and Communication Journal.
  • Arora, P., Morales, A., & Zarzycka, M. (December 2025). Creative Violence in the Age of AI: Deepfakes, Feminist Resistance, and Ethical Ethnography in Mexico. Accepted for presentation at UVA’s GenAI conference, December 17–18, 2025.

Payal Arora
Principal Investigator

Marta Zarzycka
Google Partner & Liaison

Kiran Bhatia
India Field Lead & Project Co-Lead

Weijie Huang
Policy Analysis & Review

Ana Miranda Mora
Mexico Field Lead