
Are Soft Prompts Good Zero-Shot Learners for Speech Recognition?

Bibliographic Details
Main Authors: Ng, Dianwen, Zhang, Chong, Zhang, Ruixi, Ma, Yukun, Ritter-Gutierrez, Fabian, Nguyen, Trung Hieu, Ni, Chongjia, Zhao, Shengkui, Chng, Eng Siong, Ma, Bin
Format: Conference Proceeding
Language: English
Description
Summary: Large self-supervised pre-trained speech models require computationally expensive fine-tuning for downstream tasks. Soft prompt tuning offers a simple parameter-efficient alternative by utilizing minimal soft prompt guidance, enhancing portability while also maintaining competitive performance. However, how and why this works is not yet well understood. In this study, we aim to deepen our understanding of this emerging method by investigating the role of soft prompts in automatic speech recognition (ASR). Our findings highlight their role as zero-shot learners in improving ASR performance, while also exposing them to the risk of malicious modifications. Soft prompts aid generalization but are not obligatory for inference. We also identify two primary roles of soft prompts: content refinement and noise information enhancement, the latter of which improves robustness against background noise. Additionally, we propose an effective modification to noise prompts, showing that they are capable of zero-shot adaptation to out-of-distribution noise environments.
ISSN: 2379-190X
DOI: 10.1109/ICASSP48485.2024.10447746
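
For readers unfamiliar with the setup described in the abstract, the sketch below illustrates the general idea of soft prompt tuning: a small set of trainable prompt vectors is prepended to the input of a frozen pre-trained speech encoder, and only those vectors are updated during training. This is a minimal, hypothetical illustration in PyTorch; the class name, dimensions, and the stand-in Transformer encoder are assumptions for demonstration, not the authors' implementation.

import torch
import torch.nn as nn

class SoftPromptedEncoder(nn.Module):
    """Prepends trainable soft prompts to the input of a frozen encoder.

    Illustrative sketch only: `encoder` stands in for any pre-trained
    speech model mapping (batch, time, dim) features to hidden states.
    """

    def __init__(self, encoder: nn.Module, prompt_len: int = 16, dim: int = 768):
        super().__init__()
        self.encoder = encoder
        # Freeze the pre-trained backbone; only the prompts are trainable.
        for p in self.encoder.parameters():
            p.requires_grad = False
        # The soft prompts: prompt_len learnable vectors of size dim.
        self.soft_prompts = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, dim) speech features
        batch = feats.size(0)
        prompts = self.soft_prompts.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate prompts in front of the input sequence.
        return self.encoder(torch.cat([prompts, feats], dim=1))

# Toy usage with a stand-in encoder (a single Transformer layer).
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True),
    num_layers=1,
)
model = SoftPromptedEncoder(encoder, prompt_len=16, dim=768)
out = model(torch.randn(2, 100, 768))  # -> (2, 116, 768)

Because the backbone stays frozen, only prompt_len * dim parameters are optimized, which is what makes the approach parameter-efficient and portable, and why the prompts can be swapped or modified (e.g., the noise-prompt modification discussed in the abstract) without retraining the model.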