
AURA: Privacy-Preserving Augmentation to Improve Test Set Diversity in Speech Enhancement

Bibliographic Details
Main Authors: Gitiaux, Xavier, Khant, Aditya, Beyrami, Ebrahim, Reddy, Chandan, Gupchup, Jayant, Cutler, Ross
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Request full text
Description
Summary: Speech enhancement models running in production environments are commonly trained on publicly available data. This approach leads to regressions because the models are neither trained nor tested on representative customer data. Moreover, for privacy reasons, developers cannot listen to customer content. This 'ears-off' situation motivates Aura, an end-to-end solution that makes existing speech enhancement train and test sets more challenging and diverse while remaining sample efficient. Aura is 'ears-off' because it relies on a feature extractor and on speech quality metrics (DNSMOS P.835 and AECMOS) that are pre-trained on data obtained from public sources. We evaluate Aura on two speech enhancement tasks: noise suppression (NS) and acoustic echo cancellation (AEC). Aura samples an NS test set that is 0.42 P.835 OVRL points harder than one drawn by random sampling, and an AEC test set that is 1.93 AECMOS points harder. Aura also increases diversity by 30% for NS tasks and by 530% for AEC tasks compared to greedy sampling, and achieves a 26% improvement in Spearman's rank correlation coefficient (SRCC) compared to random sampling when used to stack rank NS models.
ISSN: 2379-190X
DOI: 10.1109/ICASSP49357.2023.10096879
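The summary above sketches Aura's core idea: use embeddings from a pre-trained feature extractor together with pre-trained quality predictors (DNSMOS P.835, AECMOS) to select test clips that are harder and more diverse than a random or greedy sample, without anyone listening to the audio. The snippet below is a minimal, hypothetical sketch of such a selection step; the function name, scoring weights, and random stand-in data are illustrative assumptions, not the authors' implementation or the actual metric models.

    # Hypothetical sketch of "ears-off" test-set selection in the spirit of Aura:
    # rank candidate clips by a pre-trained quality predictor (a stand-in for
    # DNSMOS P.835 / AECMOS) and greedily pick clips that are both hard
    # (low predicted score) and diverse in embedding space.
    import numpy as np

    def select_hard_diverse(embeddings: np.ndarray,
                            predicted_mos: np.ndarray,
                            k: int,
                            alpha: float = 0.5) -> list[int]:
        """Pick k clip indices trading off difficulty (low MOS) and diversity."""
        # Difficulty score in [0, 1]: lower predicted MOS -> harder clip.
        difficulty = predicted_mos.max() - predicted_mos
        difficulty = difficulty / (difficulty.max() + 1e-9)
        selected = [int(np.argmax(difficulty))]  # start with the hardest clip
        min_dist = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
        for _ in range(k - 1):
            # Combined score: hard AND far from everything already selected.
            score = alpha * difficulty + (1 - alpha) * min_dist / (min_dist.max() + 1e-9)
            score[selected] = -np.inf
            nxt = int(np.argmax(score))
            selected.append(nxt)
            min_dist = np.minimum(
                min_dist, np.linalg.norm(embeddings - embeddings[nxt], axis=1))
        return selected

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        emb = rng.normal(size=(1000, 32))        # stand-in clip embeddings
        mos = rng.uniform(1.0, 5.0, size=1000)   # stand-in predicted OVRL scores
        subset = select_hard_diverse(emb, mos, k=50)
        print(f"mean predicted OVRL of subset: {mos[subset].mean():.2f} "
              f"vs full pool: {mos.mean():.2f}")

For the stack-ranking comparison mentioned in the summary, the SRCC between a model ordering produced on a sampled test set and a reference ordering can be computed with scipy.stats.spearmanr.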