Image: A digital collage showing a globe surrounded by diverse cultural symbols, with silhouettes of people from different ethnicities interacting with computer screens displaying AI code. In the background, a faint overlay of neural network patterns subtly hints at the AI theme.

Academic Paper Alert!

From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models

Written by Wolfgang Messner, Tatum Greene, Josephine Matalone

Category: “Ethical AI”

Article Section: Ethical and Responsible AI; Responsible AI Practices

Publication Date: 2023-12-21

SEO Description: Exploring cultural biases in AI, focusing on Large Language Models’ self-perception related to global values and ethics.


Keywords: Large Language Models, Cultural Self-Perception, Biases, Generative Artificial Intelligence, GLOBE project

Messner, Wolfgang, et al. From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models. arXiv:2312.17256, arXiv, 21 Dec. 2023.

AI-Generated Paper Summary

Generated by Ethical AI Researcher GPT

Summary: This paper, authored by Wolfgang Messner, Tatum Greene, and Josephine Matalone of the Darla Moore School of Business, University of South Carolina, investigates the cultural self-perception of Large Language Models (LLMs) such as ChatGPT (OpenAI) and Bard (Google). The study probes these LLMs with value questions derived from the GLOBE project to assess their cultural orientation. The results suggest that the models align most closely with the cultures of English-speaking countries and those noted for sustained economic competitiveness. This finding matters because LLMs increasingly serve as decision-support tools in critical domains such as healthcare and legal matters. The research highlights the importance of recognizing and addressing cultural biases in LLMs, which is vital to prevent the perpetuation of these biases in humans and the consequent development of even more biased AI systems.
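The paper's basic idea, comparing an LLM's answers to GLOBE-style value questions against country-level cultural profiles, can be sketched in a few lines. The sketch below is illustrative only: the dimension names follow the GLOBE project's terminology, but every numeric score here is a placeholder, not actual GLOBE data, and the nearest-profile distance measure is an assumption, not the authors' exact method.

```python
import math

# Hypothetical GLOBE-style dimension scores (1-7 scale) for a few countries.
# These numbers are illustrative placeholders, NOT actual GLOBE data.
COUNTRY_SCORES = {
    "USA":     {"performance_orientation": 4.5, "future_orientation": 4.1, "power_distance": 4.9},
    "Germany": {"performance_orientation": 4.3, "future_orientation": 4.3, "power_distance": 5.3},
    "India":   {"performance_orientation": 4.2, "future_orientation": 4.2, "power_distance": 5.5},
}

def euclidean_distance(a, b):
    """Distance between two dimension-score dicts sharing the same keys."""
    return math.sqrt(sum((a[k] - b[k]) ** 2 for k in a))

def closest_culture(llm_scores, country_scores=COUNTRY_SCORES):
    """Return the country whose cultural profile is nearest to the LLM's self-ratings."""
    return min(country_scores, key=lambda c: euclidean_distance(llm_scores, country_scores[c]))

# Hypothetical self-ratings an LLM might give when prompted with survey items like:
# "On a scale of 1 to 7, how much should a society reward performance improvement?"
llm_self_rating = {"performance_orientation": 4.6, "future_orientation": 4.0, "power_distance": 4.8}

print(closest_culture(llm_self_rating))  # nearest of the placeholder profiles
```

In this toy setup the LLM's ratings sit closest to the placeholder "USA" profile, loosely mirroring the paper's finding that the models' self-perception clusters with English-speaking countries.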

Degree of Ethical Match: 5 (Fully Aligned with Ethical Goals). The study aligns fully with ethical AI research goals, focusing on uncovering inherent biases in AI models and emphasizing the need for awareness and corrective measures.

Author Caliber:

  • Wolfgang Messner: Clinical Professor of International Business with a multidisciplinary background including a PhD, MBA, MSc, and BSc, with previous work involving neural networks and AI applications in international business and cultural differences.
  • Tatum Greene: Master of International Business student with a BBA in International Business.
  • Josephine Matalone: Master of International Business and MSc Economics student, holding a BA in Economics.

The team, particularly Messner, has a strong academic and practical foundation relevant to this study's focus.

Novelty & Merit:

  1. Original exploration of cultural biases in LLMs using the GLOBE project’s framework.
  2. Unique approach by comparing ChatGPT and Bard’s responses to cultural dimensions.
  3. Addresses a critical and often-overlooked aspect of AI ethics: cultural bias in conversational AI.

Findings and Conclusions:

  1. Both LLMs exhibit a cultural self-perception aligned most closely with English-speaking countries and nations with sustained economic competitiveness.
  2. ChatGPT and Bard nonetheless differ noticeably in their cultural profiles across the GLOBE dimensions.
  3. Neither model's cultural self-perception reflects the full diversity of global cultures, indicating a bias toward particular cultural values.

Commercial Applications:

  1. Enhancing the cultural adaptability of AI systems in global business contexts.
  2. Guiding AI developers to mitigate cultural biases in LLMs.
  3. Assisting policymakers in framing guidelines for culturally inclusive AI development.
  4. Informing AI ethics education and training programs.

The study offers valuable insights into the ethical development and deployment of AI, particularly in addressing cultural biases. Its findings are highly relevant to the current discourse on responsible AI practices.

Author’s Abstract

Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence (GenAI) are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. This study explores the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE project. The findings reveal that their cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by sustained economic competitiveness. Recognizing the cultural biases of LLMs and understanding how they work is crucial for all members of society because one does not want the black box of artificial intelligence to perpetuate bias in humans, who might, in turn, inadvertently create and train even more biased algorithms.

Read the full paper here

Last updated on January 4th, 2024.