
Mitigating Unsafe Feedback with Learning Constraints

Bibliographic Details
Published in: arXiv.org 2024-12
Main Authors: Rosati, Domenic, Edkins, Giles, Raj, Harsh, Atanasov, David, Majumdar, Subhabrata, Rajendran, Janarthanan, Rudzicz, Frank, Sajjad, Hassan
Format: Article
Language: English
Description
Summary: While there has been progress towards aligning Large Language Models (LLMs) with human values and ensuring safe behaviour at inference time, safety guards can easily be removed when models are fine-tuned on unsafe and harmful datasets. While this setting has been treated extensively, another popular training paradigm, learning from unsafe feedback with reinforcement learning, has so far remained unexplored. This is concerning given the widespread deployment of feedback collection systems. We address this gap by providing an analysis of learning settings where feedback is adversarial and noisy, i.e. where unsafe samples are preferred over safe ones despite the model developers' goal of maintaining safety. We find that safety-aligned LLMs easily explore unsafe action spaces by generating harmful text and optimize for adversarial reward, indicating that current safety guards are not enough to prevent learning from unsafe feedback. To protect against this vulnerability, we adapt a number of both "implicit" and "explicit" harmful fine-tuning defences and evaluate whether they are effective as learning constraints in an RL setting, finding that no method is generally effective; this points to the need for more research on defences given the widespread adoption of methods designed to learn from feedback. We end the paper with the observation that some defences work by performing "harmless reward hacking", for which we provide a theoretical explanation drawn from the theory of Constrained Markov Decision Processes, and we offer some directions for future defence development.
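As a rough aid to the summary above, the Constrained Markov Decision Process framing it alludes to can be sketched as follows (the notation is assumed for illustration, not taken from the paper): the learner maximises expected feedback reward subject to a budget on an expected harm cost,

    \max_{\pi} \; \mathbb{E}_{\pi}\Big[\textstyle\sum_{t} \gamma^{t} r(s_t, a_t)\Big] \quad \text{subject to} \quad \mathbb{E}_{\pi}\Big[\textstyle\sum_{t} \gamma^{t} c(s_t, a_t)\Big] \le d,

where r is the (possibly adversarial) reward signal, c is a harm cost, \gamma a discount factor, and d the safety budget. Under this reading, a defence acting as a learning constraint enforces the cost bound during policy optimisation, and a policy that attains high reward while staying within the cost budget is one plausible interpretation of "harmless reward hacking".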
ISSN: 2331-8422