
Audiogram estimation using Bayesian active learning

Bibliographic Details
Published in: The Journal of the Acoustical Society of America, 2018-07, Vol. 144 (1), p. 421-430
Main Authors: Schlittenlacher, Josef, Turner, Richard E., Moore, Brian C. J.
Format: Article
Language:English
Description
Summary: Two methods for estimating audiograms quickly and accurately using Bayesian active learning were developed and evaluated. Both methods provided an estimate of threshold as a continuous function of frequency. For one method, six successive tones with decreasing levels were presented on each trial and the task was to count the number of tones heard. A Gaussian process was used for classification, and maximum-information sampling was used to determine the frequency and levels of the stimuli for the next trial. The other method was based on a published method using a Yes/No task, but extended to account for lapses. The obtained audiograms were compared to conventional audiograms for 40 ears, 19 of which were hearing impaired. The threshold estimates for the active-learning methods were systematically 2 to 4 dB below (better than) those for the conventional audiograms, which may indicate a less conservative response criterion (a greater willingness to respond for a given amount of sensory information). Both active-learning methods were able to allow for wrong button presses (due to lapses of attention) and provided accurate audiogram estimates in fewer than 50 trials or 4 min. For a given level of accuracy, the counting task was slightly quicker than the Yes/No task.
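To illustrate the approach described in the summary, the sketch below (Python, using scikit-learn's GaussianProcessClassifier) shows Bayesian active audiogram estimation with a Yes/No-style task. It is not the authors' implementation: the listener is simulated, the anisotropic RBF kernel and its length scales are arbitrary choices, predictive-entropy (uncertainty) sampling stands in for the paper's exact maximum-information criterion, and lapses are not modeled.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Candidate stimuli: log2 frequency (125 Hz to 8 kHz) x level (-10 to 100 dB HL).
freqs = np.log2(np.geomspace(125.0, 8000.0, 25))
levels = np.linspace(-10.0, 100.0, 23)
grid = np.array([(f, lv) for f in freqs for lv in levels])

def simulated_response(x):
    # Toy listener with a sloping hearing loss: the tone is "heard" when its
    # level exceeds a hidden frequency-dependent threshold plus response noise.
    threshold = 20.0 + 8.0 * (x[0] - freqs[0])
    return int(x[1] + rng.normal(0.0, 3.0) > threshold)

# Seed with one clearly audible and one clearly inaudible tone so that both
# response classes are present before the first fit.
X = [grid[np.argmax(grid[:, 1])], grid[np.argmin(grid[:, 1])]]
y = [simulated_response(x) for x in X]

for trial in range(48):
    gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[1.0, 10.0]))
    gp.fit(np.array(X), np.array(y))
    p = gp.predict_proba(grid)[:, 1]
    # Bernoulli entropy peaks where P(heard) is nearest 0.5; presenting the
    # most uncertain stimulus approximates maximum-information sampling.
    entropy = -p * np.log(p + 1e-12) - (1.0 - p) * np.log(1.0 - p + 1e-12)
    x_next = grid[np.argmax(entropy)]
    X.append(x_next)
    y.append(simulated_response(x_next))

# Audiogram estimate: at each frequency, the level at which P(heard) crosses 0.5.
p = gp.predict_proba(grid)[:, 1].reshape(len(freqs), len(levels))
audiogram = [levels[np.argmin(np.abs(row - 0.5))] for row in p]
print(dict(zip(np.round(2.0 ** freqs).astype(int), audiogram)))

Refitting the classifier from scratch after every trial keeps the sketch simple; a real-time implementation of the kind the article evaluates would update the posterior incrementally and would also model lapses and, for the counting task, the six-tone stimulus structure.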
ISSN: 0001-4966 (print), 1520-8524 (online)
DOI: 10.1121/1.5047436