Tracking formants in spectrograms and its application in speaker verification
Format: Conference Proceeding
Language: English
Summary: Formants are the most visible features in spectrograms, and they also carry the most valuable speech information. Traditionally, formant tracks are found by first locating formant points in individual frames and then joining the formant points of neighboring frames into tracks. In this paper we present a formant tracking approach based on image processing techniques. We first find the running directions of the formants in a spectrogram, then smooth the spectrogram along those directions to make the formants more continuous and stable. Next we perform ridge detection to find formant track candidates in the spectrogram. After removing tracks that are too short or too weak, we fit the remaining tracks with 2nd-degree polynomial curves to extract formants that are both smooth and continuous. Besides thin formant tracks, we also extract formant tracks with width; these thick formants indicate not only the locations of the formants but also their widths. Using the voices of 70 people, we conducted experiments to test the effectiveness of the thin and thick formants in speaker verification. Using only one sentence (6 to 10 words, about 3 seconds long) for comparison, the thin formants and the thick formants achieve speaker-verification accuracies of 88.3% and 93.8%, respectively. When the number of sentences used for comparison is increased to seven, the accuracies improve to 93.8% and 98.7%, respectively.
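The summary outlines a pipeline of directional smoothing, ridge detection, track pruning, and polynomial fitting. As a rough illustration only, the Python sketch below approximates that pipeline with standard NumPy/SciPy tools; it is not the authors' implementation. In particular, isotropic Gaussian smoothing and per-frame peak picking stand in for the paper's direction-aware smoothing and image-based ridge detection, and the function name `track_formants` along with all parameter values are hypothetical choices.

```python
# Illustrative sketch only: approximate formant tracking via spectrogram
# smoothing, per-frame peak picking, and 2nd-degree polynomial fitting.
import numpy as np
from scipy.signal import spectrogram, argrelextrema
from scipy.ndimage import gaussian_filter


def track_formants(x, fs, n_tracks=3):
    # Log-magnitude spectrogram: rows are frequency bins, columns are frames.
    f, t, S = spectrogram(x, fs=fs, nperseg=512, noverlap=384)
    S = np.log(S + 1e-12)

    # Isotropic smoothing here stands in for the paper's smoothing along
    # the running direction of each formant.
    S = gaussian_filter(S, sigma=(1.0, 2.0))

    # Crude ridge candidates: strongest local maxima along frequency
    # in each frame, kept in ascending frequency order.
    tracks = []
    for j in range(S.shape[1]):
        peaks = argrelextrema(S[:, j], np.greater, order=3)[0]
        peaks = peaks[np.argsort(S[peaks, j])[::-1][:n_tracks]]
        tracks.append(np.sort(f[peaks]))

    # Fit each candidate track with a 2nd-degree polynomial over time to
    # obtain smooth, continuous formant curves.
    fitted = []
    for k in range(n_tracks):
        y = np.array([tr[k] if len(tr) > k else np.nan for tr in tracks])
        ok = ~np.isnan(y)
        if ok.sum() >= 3:
            coeffs = np.polyfit(t[ok], y[ok], deg=2)
            fitted.append(np.polyval(coeffs, t))
    return t, fitted
```

Calling `track_formants(signal, 16000)` on a short utterance would return the frame times and a few fitted frequency curves; the paper's method additionally prunes short or weak tracks and estimates formant widths ("thick" formants), which this sketch does not attempt.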
ISSN: 1071-6572; 2153-0742
DOI: 10.1109/CCST.2012.6393541