Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation
Published in: | Journal of speech, language, and hearing research, 2023-11, Vol.66 (11), p.4575-4589 |
---|---|
Main Authors: | Shiell, Martha M; Høy-Christensen, Jeppe; Skoglund, Martin A; Keidser, Gitte; Zaar, Johannes; Rotger-Griful, Sergi |
Format: | Article |
Language: | English |
Subjects: | Acoustic Stimulation - methods; Aged; Deafness; Hearing Loss; Humans; Speech; Speech Perception |
Online Access: | Get full text |
cited_by | |
---|---|
container_end_page | 4589 |
container_issue | 11 |
container_start_page | 4575 |
container_title | Journal of speech, language, and hearing research |
container_volume | 66 |
creator | Shiell, Martha M; Høy-Christensen, Jeppe; Skoglund, Martin A; Keidser, Gitte; Zaar, Johannes; Rotger-Griful, Sergi |
description | There is a need for tools to study real-world communication abilities in people with hearing loss. We outline a potential method for this that analyzes gaze, and we use it to answer the question of when and how much listeners with hearing loss look toward a new talker in a conversation.
Twenty-two older adults with hearing loss followed a prerecorded two-person audiovisual conversation in the presence of babble noise. We compared their eye-gaze direction to the conversation in two multilevel logistic regression (MLR) analyses. First, we split the conversation into events classified by the number of active talkers within a turn or a transition, and we tested if these predicted the listener's gaze. Second, we mapped the odds that a listener gazed toward a new talker over time during a conversation transition.
We found no evidence that our conversation events predicted changes in the listener's gaze, but the listener's gaze toward the new talker during a silence-transition was predicted by time: The odds of looking at the new talker increased in an s-shaped curve from at least 0.4 s before to 1 s after the onset of the new talker's speech. A comparison of models with different random effects indicated that more variance was explained by differences between individual conversation events than by differences between individual listeners.
MLR modeling of eye-gaze during talker transitions is a promising approach to study a listener's perception of realistic conversation. Our experience provides insight to guide future research with this method. |
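As an illustration of the multilevel logistic regression approach described in the abstract, the sketch below fits a mixed-effects logistic model of gaze direction over time around a talker transition, with crossed variance components for conversation events and listeners. It is not the authors' implementation: the data are simulated, the column names are hypothetical, and the use of statsmodels' Bayesian mixed GLM is an assumption made for the example.

```python
# Minimal sketch (not the authors' code) of a multilevel logistic regression of
# gaze toward a new talker around a turn transition. Data are simulated; column
# names ("listener", "event", "time", "gaze_new_talker") are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n_listeners, n_events, n_samples = 22, 12, 15

rows = []
for listener in range(n_listeners):
    for event in range(n_events):
        # Time relative to the new talker's speech onset (-0.4 s to +1.0 s).
        t = np.linspace(-0.4, 1.0, n_samples)
        # Toy s-shaped rise in the probability of gazing at the new talker.
        p = 1.0 / (1.0 + np.exp(-4.0 * t))
        rows.append(pd.DataFrame({
            "listener": listener,
            "event": event,
            "time": t,
            "gaze_new_talker": rng.binomial(1, p),
        }))
df = pd.concat(rows, ignore_index=True)

# Fixed effect of time; crossed variance components for event and listener
# (a Bayesian mixed-effects logistic regression).
model = BinomialBayesMixedGLM.from_formula(
    "gaze_new_talker ~ time",
    vc_formulas={"event": "0 + C(event)", "listener": "0 + C(listener)"},
    data=df,
)
fit = model.fit_vb()  # variational Bayes fit
print(fit.summary())  # inspect the time effect and the two variance components
```

Comparing the fitted standard deviations of the event and listener variance components is one way to ask, as the abstract does, whether event-level or listener-level differences explain more of the variance in gaze.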
doi_str_mv | 10.1044/2023_JSLHR-22-00641 |
format | article |
fullrecord | ProQuest/SwePub PNX record: EISSN 1558-9102; PMID 37850878; publisher: United States; published 2023-11-09; 15 pages; author ORCIDs 0000-0001-9472-3100, 0000-0002-7815-0864, 0000-0002-9002-4780 |
fulltext | fulltext |
identifier | ISSN: 1092-4388 |
ispartof | Journal of speech, language, and hearing research, 2023-11, Vol.66 (11), p.4575-4589 |
issn | 1092-4388; 1558-9102 |
language | eng |
recordid | cdi_swepub_primary_oai_DiVA_org_liu_201072 |
source | EBSCOhost MLA International Bibliography With Full Text |
subjects | Acoustic Stimulation - methods; Aged; Deafness; Hearing Loss; Humans; Speech; Speech Perception |
title | Multilevel Modeling of Gaze From Listeners With Hearing Loss Following a Realistic Conversation |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-02T16%3A27%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_swepu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Multilevel%20Modeling%20of%20Gaze%20From%20Listeners%20With%20Hearing%20Loss%20Following%20a%20Realistic%20Conversation&rft.jtitle=Journal%20of%20speech,%20language,%20and%20hearing%20research&rft.au=Shiell,%20Martha%20M&rft.date=2023-11-09&rft.volume=66&rft.issue=11&rft.spage=4575&rft.epage=4589&rft.pages=4575-4589&rft.issn=1092-4388&rft.eissn=1558-9102&rft_id=info:doi/10.1044/2023_JSLHR-22-00641&rft_dat=%3Cproquest_swepu%3E2878711074%3C/proquest_swepu%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c301t-731034aa0a247d31680879eff251a5d001270dd21ea5d1c8dbfa969c8658c5703%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2878711074&rft_id=info:pmid/37850878&rfr_iscdi=true |