
Chapter 2

Bias


The use of AI to monitor workers raises several concerns about bias and discrimination.

If the historical data used to train an AI monitoring system reflects bias in the workplace, the AI can perpetuate that bias or make it worse. AI systems may unfairly target workers based on characteristics such as race, gender, age, or disability, resulting in disproportionate monitoring or scrutiny of particular individuals or groups and leading to discrimination.

To mitigate these concerns, employers should involve union reps in the development and implementation of AI processes and seek feedback from workers. It is essential that employers explain how AI is used and how it reaches decisions, regularly audit the systems they use, and review decisions made using AI.

