Creating Explainable Dynamic Checklists via Machine Learning to Ensure Decent Working Environment for All: A Field Study with Labour Inspections

Bibliographic Details
Main Authors: Flogard, Eirik Lund, Mengshoel, Ole Jakob, Theisen, Ole Magnus, Bach, Kerstin
Format: Book
Language: English
Online Access: Request full text
Description
Summary: To address poor working conditions and promote the United Nations' sustainable development goal 8.8, "protect labour rights and promote safe working environments for all workers [...]", government agencies around the world conduct labour inspections. To carry out these inspections, inspectors traditionally use paper-based checklists as a means to survey individual organisations for working environment violations. Currently, these checklists are created by domain experts, but recent research indicates that machine learning (ML) could be used to generate dynamic checklists to increase inspection efficiency. A drawback of dynamic checklists is that they are complex and could be difficult for inspectors to understand. They have also never been field-tested. In this paper, we therefore propose user-oriented explanation methods for Context-aware Bayesian Case-Based Reasoning (CBCBR), which is the current state-of-the-art ML method for generating dynamic checklists. We also introduce a prototype of CBCBR and present a field study where we test it in real-world labour inspections. The results from the study indicate that using the explainable dynamic checklists increases the efficiency of the labour inspections, and inspectors also report that they find the checklists useful. The results also suggest that current ML evaluation methods, where model prediction performance is evaluated on existing data, may not fully reflect the real-world field performance of checklists.
ISSN:3218-3225