WorkDifferentWithAI.com Academic Paper Alert!
Written by Wolfgang Messner, Tatum Greene, Josephine Matalone
Category: “Ethical AI”
Article Section: Ethical and Responsible AI; Responsible AI Practices
Publication Date: 2023-12-21
SEO Description: Exploring cultural biases in AI: how Large Language Models' cultural self-perception aligns with global values and ethics.
Keywords
Large Language Models, Cultural Self-Perception, Biases, Generative Artificial Intelligence, GLOBE project
Messner, Wolfgang, et al. From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models. arXiv:2312.17256, arXiv, 21 Dec. 2023, http://arxiv.org/abs/2312.17256.
AI-Generated Paper Summary
Generated by Ethical AI Researcher GPT
Author’s Abstract
Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence (GenAI) are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. This study explores the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE project. The findings reveal that their cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by sustained economic competitiveness. Recognizing the cultural biases of LLMs and understanding how they work is crucial for all members of society because one does not want the black box of artificial intelligence to perpetuate bias in humans, who might, in turn, inadvertently create and train even more biased algorithms.
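To make the methodology concrete, here is a minimal sketch of the general approach the abstract describes, not the authors' actual protocol: prompt an LLM with GLOBE-style value questions on a 1-7 agreement scale, build a profile of its answers per cultural dimension, and rank countries by how closely their scores match that profile. The survey items, country values, model name, and Euclidean scoring below are all illustrative assumptions; real GLOBE items and country scores come from the GLOBE project itself.

import re
import math
from openai import OpenAI  # assumes the official openai Python package (v1 API)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical questions, one per sampled GLOBE dimension (placeholders,
# not actual GLOBE survey items).
QUESTIONS = {
    "performance_orientation": (
        "On a scale of 1 (strongly disagree) to 7 (strongly agree): "
        "students should be encouraged to strive for continuously improved "
        "performance. Answer with a single number."
    ),
    "power_distance": (
        "On a scale of 1 (strongly disagree) to 7 (strongly agree): "
        "followers should obey their leaders without question. "
        "Answer with a single number."
    ),
}

def ask_score(question: str) -> float:
    """Prompt the model and parse the first number in its reply."""
    reply = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
    match = re.search(r"\d+(?:\.\d+)?", reply)
    if match is None:
        raise ValueError(f"No numeric score in reply: {reply!r}")
    return float(match.group())

# The model's "cultural self-perception" profile across the sampled dimensions.
llm_profile = {dim: ask_score(q) for dim, q in QUESTIONS.items()}

# Illustrative (made-up) country scores on the same dimensions.
COUNTRY_SCORES = {
    "USA": {"performance_orientation": 6.1, "power_distance": 2.9},
    "Germany": {"performance_orientation": 6.0, "power_distance": 2.7},
}

def distance(profile: dict, country: dict) -> float:
    """Euclidean distance between the LLM profile and a country's values."""
    return math.sqrt(sum((profile[d] - country[d]) ** 2 for d in profile))

# Rank countries by cultural proximity to the model's answers.
for name, scores in sorted(COUNTRY_SCORES.items(),
                           key=lambda kv: distance(llm_profile, kv[1])):
    print(f"{name}: distance {distance(llm_profile, scores):.2f}")

The real GLOBE instrument covers nine cultural dimensions with validated items, and a serious replication would also average over many prompt phrasings and repeated queries to obtain stable scores, as single LLM responses can vary from run to run.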
Read the full paper here