
Sequence labeling to detect stuttering events in read speech

Bibliographic Details
Published in:Computer speech & language 2020-07, Vol.62, p.101052, Article 101052
Main Authors: Alharbi, Sadeen, Hasan, Madina, Simons, Anthony J H, Brumfitt, Shelagh, Green, Phil
Format: Article
Language:English
Description
Summary:
•Data augmentation improved the performance of all applied classifiers.
•On human transcripts, without feature engineering, the BLSTM outperforms the CRF classifiers.
•Adding auxiliary features to support the CRFaux classifier yields further performance improvements.
•The results of the CRFngram, CRFaux and BLSTM classifiers on ASR transcripts, scored against human transcriptions, degrade for all three classifiers.

Stuttering is a speech disorder that, if treated during childhood, may be prevented from persisting into adolescence. A clinician must first determine the severity of stuttering by assessing a child during a conversational or reading task and recording each instance of disfluency, either in real time or after transcribing the recorded session and analysing the transcript. The current study evaluates the ability of two machine learning approaches, namely conditional random fields (CRF) and bi-directional long short-term memory (BLSTM), to detect stuttering events in transcriptions of stuttering speech. The two approaches are compared both on ideal hand-transcribed data and on the output of automatic speech recognition (ASR). We also study the effect of data augmentation on performance. A corpus of 35 speakers’ read speech (13K words) was supplemented with a corpus of 63 speakers’ spontaneous speech (11K words) and an artificially generated corpus (50K words). Experimental results show that, without feature engineering, BLSTM classifiers outperform CRF classifiers by 33.6%. However, adding features to support the CRF classifier yields performance improvements of 45% and 18% over the CRF baseline and the BLSTM results, respectively. Moreover, adding more data to train the CRF and BLSTM classifiers consistently improves the results.
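To make the sequence-labeling setup concrete, the sketch below shows a minimal token-level CRF tagger in the spirit of the CRFngram baseline described above: each word in a transcript receives a stutter-event tag, and neighbouring-word features play the role of n-gram context. It is illustrative only; the sklearn-crfsuite library, the feature template, the "REP"/"O" tag set, and the toy training data are assumptions made for this example and are not taken from the paper.

# Illustrative sketch only: token-level CRF tagging of stuttering events.
# The library, features, tags, and data below are hypothetical stand-ins.
import sklearn_crfsuite

def token_features(tokens, i):
    """Simple n-gram-style context features for the token at position i."""
    word = tokens[i]
    prev_word = tokens[i - 1].lower() if i > 0 else "<BOS>"
    next_word = tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>"
    return {
        "word.lower": word.lower(),
        "word.is_partial": word.endswith("-"),          # part-word, e.g. "b-"
        "prev.word": prev_word,
        "next.word": next_word,
        "repeats.prev": word.lower() == prev_word,       # word repetition cue
    }

def sent_to_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# Tiny hypothetical training example: "O" = fluent token, "REP" = repetition.
train_sents = [["the", "b-", "b-", "boy", "went", "went", "home"]]
train_tags  = [["O",   "REP", "REP", "O",  "O",    "REP",  "O"]]

X_train = [sent_to_features(s) for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, train_tags)

test = ["she", "she", "ran", "home"]
print(list(zip(test, crf.predict([sent_to_features(test)])[0])))

A BLSTM tagger, by contrast, would replace these hand-built feature dictionaries with word embeddings passed through a bidirectional LSTM and a per-token classification layer over the same tag set, which is why the abstract frames it as the approach that needs no feature engineering.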
ISSN:0885-2308
1095-8363
DOI:10.1016/j.csl.2019.101052