[Header image: a futuristic cityscape of sleek silver towers and glowing holograms representing technological capability, with groups of people in business attire discussing charts on platforms while a large balancing scale hovers in the sky.]

A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers

WorkDifferentWithAI.com Academic Paper Alert!

Written by Anna Gausen, Bhaskar Mitra, Siân Lindley

Category: AI Strategy & Governance

Article Section: AI Strategy and Governance; Enterprise Risk Management with AI

Publication Date: 2023-12-08

SEO Description: “Exploring the impact of AI-mediated enterprise knowledge access on workers, and a new framework for identifying and mitigating risks.”

Gausen, Anna, et al. A Framework for Exploring the Consequences of AI-Mediated Enterprise Knowledge Access and Identifying Risks to Workers. arXiv:2312.10076, arXiv, 8 Dec. 2023, http://arxiv.org/abs/2312.10076.

Keywords

AI-Mediated Enterprise Knowledge Access Systems, Worker Risks, Organizational Dynamics, Mitigation Framework

AI-Generated Paper Summary

Generated by Ethical AI Researcher GPT

This paper presents the Consequence-Mechanism-Risk framework for exploring the morally important consequences that could arise from deploying AI systems for enterprise knowledge access, with the goal of identifying risks to workers that could manifest through different system mechanisms. The framework considers four key consequences – commodification, appropriation, concentration of power, and marginalization. For each, it maps the system mechanisms that could introduce risks and gives examples of how those risks could reduce worker value, power, and wellbeing. It also offers considerations to help practitioners mitigate these risks when designing and deploying such systems.

The framework aims to support a structured assessment of risks to workers from AI-mediated enterprise knowledge systems. By linking consequences to mechanisms to risks, it gives system designers and deployers an actionable analysis for understanding the risks introduced at the system level and identifying where to target mitigation.
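As an illustration only, and not an artifact from the paper, the consequence-to-mechanism-to-risk chain could be modeled as a simple data structure when operationalizing the framework in a risk-assessment checklist. All class names, the example entry, and the helper function below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    affects: str  # one of the paper's worker dimensions: "value", "power", "wellbeing"

@dataclass
class Mechanism:
    name: str
    risks: list[Risk] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)

@dataclass
class Consequence:
    name: str
    mechanisms: list[Mechanism] = field(default_factory=list)

# Hypothetical example entry for one of the four consequences.
commodification = Consequence(
    name="Commodification",
    mechanisms=[
        Mechanism(
            name="Capturing worker knowledge as reusable data",
            risks=[Risk("Workers' expertise is extracted without credit", "value")],
            mitigations=["Attribute surfaced knowledge to its contributors"],
        )
    ],
)

def risks_by_dimension(consequence: Consequence, dimension: str) -> list[str]:
    """Collect risk descriptions that affect a given worker dimension."""
    return [
        risk.description
        for mechanism in consequence.mechanisms
        for risk in mechanism.risks
        if risk.affects == dimension
    ]
```

A structure like this lets a design team filter identified risks by the worker dimension they affect and trace each one back to the specific system mechanism that introduces it, which is where the paper suggests mitigation should be targeted.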

Author Caliber
The authors are well-regarded researchers in human-computer interaction, AI ethics, and responsible technology, affiliated with institutions including Imperial College London and Microsoft Research.

Novelty & Merit:

  1. Proposes a novel structured framework for assessing risks to workers from AI systems based on tracing consequences to system mechanisms
  2. Provides an in-depth analysis applying the framework specifically to AI systems for enterprise knowledge access
  3. Extensive literature review encompassing risks to workers, existing risk taxonomies for AI systems, and social impacts of workplace technologies

Findings and Conclusions:

  1. Identified four key consequences of moral importance for enterprise knowledge access systems – commodification, appropriation, concentration of power, marginalization
  2. Mapped mechanisms within such systems that could introduce risks manifesting in reduced worker value, power and wellbeing
  3. Provided design and deployment considerations targeting identified mechanisms to help mitigate risks
  4. Demonstrated applying the framework through examples of risks introduced by large language models

Commercial Applications:

  1. Guidance for technology companies on developing ethical and responsible AI systems for business settings
  2. Framework could be customized and operationalized for risk assessments during product design cycles
  3. Approach could inform organizational policies and processes when adopting AI technologies
  4. Provides a model methodology that could be extended to risk assessments for other emerging technologies

Author’s Abstract

Organisations generate vast amounts of information, which has resulted in a long-term research effort into knowledge access systems for enterprise settings. Recent developments in artificial intelligence, in relation to large language models, are poised to have significant impact on knowledge access. This has the potential to shape the workplace and knowledge in new and unanticipated ways. Many risks can arise from the deployment of these types of AI systems, due to interactions between the technical system and organisational power dynamics. This paper presents the Consequence-Mechanism-Risk framework to identify risks to workers from AI-mediated enterprise knowledge access systems. We have drawn on wide-ranging literature detailing risks to workers, and categorised risks as being to worker value, power, and wellbeing. The contribution of our framework is to additionally consider (i) the consequences of these systems that are of moral import: commodification, appropriation, concentration of power, and marginalisation, and (ii) the mechanisms, which represent how these consequences may take effect in the system. The mechanisms are a means of contextualising risk within specific system processes, which is critical for mitigation. This framework is aimed at helping practitioners involved in the design and deployment of AI-mediated knowledge access systems to consider the risks introduced to workers, identify the precise system mechanisms that introduce those risks and begin to approach mitigation. Future work could apply this framework to other technological systems to promote the protection of workers and other groups.

Read the full paper here

Last updated on December 23rd, 2023.