WorkDifferentWithAI.com Academic Paper Alert!
Written by Sachin Kumar, Vidhisha Balachandran, Lucille Njoo, Antonios Anastasopoulos, Yulia Tsvetkov, Qi Zhang, Hassan Sajjad
Category: Ethical AI
Article Section: Ethical and Responsible AI; Responsible AI Practices
Publication Date: 2023-12
SEO Description: "Explore strategies for reducing societal harms from AI language models in this EMNLP 2023 tutorial."
Keywords
societal harms, language models, social biases, misinformation, privacy violations
Author’s Abstract
Numerous recent studies have highlighted societal harms that can be caused by language technologies deployed in the wild. While several surveys, tutorials, and workshops have discussed the risks of harms in specific contexts (e.g., detecting and mitigating gender bias in NLP models), no prior work has developed a unified typology of technical approaches for mitigating harms of language generation models. Our tutorial is based on a survey we recently wrote that proposes such a typology. We will provide an overview of potential social issues in language generation, including toxicity, social biases, misinformation, factual inconsistency, and privacy violations. Our primary focus will be on how to systematically identify risks and how to eliminate them at various stages of model development, from data collection to model training to inference/language generation. Through this tutorial, we aim to equip NLP researchers and engineers with a suite of practical tools for mitigating safety risks from pretrained language generation models.
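To make the "inference/language generation" stage concrete, here is a minimal, hypothetical sketch of one common inference-time safeguard of the kind the tutorial's typology covers: sampling several candidate generations and filtering them with a toxicity classifier. This is not code from the authors; the gpt2 and unitary/toxic-bert models, the 0.5 threshold, and the generate_safely helper are illustrative assumptions.

```python
# Illustrative sketch only (not the tutorial's method): an inference-time
# mitigation that samples several candidate generations and drops the ones a
# toxicity classifier flags. Model names, threshold, and helper are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")                    # example generator
toxicity = pipeline("text-classification", model="unitary/toxic-bert")   # example classifier

def generate_safely(prompt, num_candidates=5, threshold=0.5):
    """Return candidate continuations that the classifier does not flag as toxic."""
    candidates = generator(
        prompt,
        num_return_sequences=num_candidates,
        do_sample=True,
        max_new_tokens=50,
    )
    safe = []
    for cand in candidates:
        text = cand["generated_text"]
        top = toxicity(text)[0]  # e.g. {"label": "toxic", "score": 0.97}
        if "toxic" in top["label"].lower() and top["score"] >= threshold:
            continue  # discard candidates the classifier flags as toxic
        safe.append(text)
    return safe

if __name__ == "__main__":
    for text in generate_safely("The new neighbors are"):
        print(text)
```

Analogous interventions apply at the earlier stages the abstract mentions, such as filtering or documenting training data before pretraining and fine-tuning on curated, detoxified corpora during model development.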