CALL FOR PAPERS
This is a call for papers for a special issue of Information, Communication & Society.
Data Care: A Humanities and Social Sciences Approach to Debiasing Large Language Models
Large Language Models (LLMs) are AI systems trained on vast datasets to generate, understand, and process human language and expression, enabling applications such as chatbots, translation, and content creation. Much research on LLMs is led by computer scientists focused on debiasing data to build fairer models, while humanities and social science scholars remain underrepresented in shaping AI’s decision-making processes. Computational research often addresses ‘data loss’ and ‘data deficiencies’ through data-centric AI approaches, whereas scholars from media studies, anthropology, and political science, for instance, take a society-centric approach, critiquing AI’s role in reinforcing historical inequalities through data extraction and algorithmic governance. These critiques, while important, rarely translate into co-constructive decision-making that builds datasets which are transparent, equitable, and representative of the Majority World.
Emergent research from the Global South highlights AI’s potential to challenge traditional gatekeepers, oppressive regimes, and patriarchal norms, fostering a more hopeful perspective on LLM-powered innovations. The rise of diverse LLMs—such as OpenAI’s GPT (USA), DeepSeek and Qwen (China), Mistral (France), and Matilda (Australia)—demands a cross-cultural approach to AI development. These models reflect different linguistic, ethical, and socio-political contexts, underscoring the need for humanities and social sciences analysis of what constitutes localized training data, multilingual adaptability, and culturally aware governance, moving beyond resistance toward rational optimism.
This special issue seeks to engage humanities and social science scholars committed to improving AI decision-making by focusing on debiasing strategies around notions of authenticity, provenance, representation, and inclusion in data capture and curation. Centred on the concept of Data Care, it promotes ethical, inclusive, and community-driven data ecosystems guided by the CARE principles: Collective Benefit, Authority to Control, Responsibility, and Ethics. Moving beyond critique, this issue fosters interdisciplinary dialogue on equitable AI development and invites contributions on replicable strategies to debias and diversify LLMs from a cross-cultural perspective.