
Effects of Context Type on Lipreading and Listening Performance and Implications for Sentence Processing

Purpose: This study compared the use of 2 different types of contextual cues (sentence based and situation based) in 2 different modalities (visual only and auditory only). Method: Twenty young adults were tested with the Illustrated Sentence Test (Tye-Murray, Hale, Spehar, Myerson, & Sommers, 2014) and the Speech Perception in Noise Test (Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984; Kalikow, Stevens, & Elliott, 1977) in the 2 modalities. The Illustrated Sentences Test presents sentences with no context and sentences accompanied by picture-based situational context cues. The Speech Perception in Noise Test presents sentences with low sentence-based context and sentences with high sentence-based context. Results: Participants benefited from both types of context and received more benefit when testing occurred in the visual-only modality than when it occurred in the auditory-only modality. Participants' use of sentence-based context did not correlate with use of situation-based context. Cue usage did not correlate between the 2 modalities. Conclusions: The ability to use contextual cues appears to be dependent on the type of cue and the presentation modality of the target word(s). In a theoretical sense, the results suggest that models of word recognition and sentence processing should incorporate the influence of multiple sources of information and recognize that the 2 types of context have different influences on speech perception. In a clinical sense, the results suggest that aural rehabilitation programs might provide training to optimize use of both kinds of contextual cues.

Bibliographic Details
Published in: Journal of Speech, Language, and Hearing Research, 2015-06, Vol. 58 (3), p. 1093-1102
Main Authors: Spehar, Brent; Goebel, Stacey; Tye-Murray, Nancy
Format: Article
Language: English
Publisher: American Speech-Language-Hearing Association (ASHA)
ISSN: 1092-4388
EISSN: 1558-9102
DOI: 10.1044/2015_JSLHR-H-14-0360
PMID: 25863923
Subjects:
Acoustic Stimulation - methods
Acoustics
Aging (Individuals)
Auditory Perception
Breakdowns
Child
Child Language
Circuses
Cochlear Implantation
Communication Strategies
Context Effect
Correlation
Cues
Discrimination, Psychological
Elephants
Evaluation
Hearing
Hearing Aids
Hearing Disorders - psychology
Hearing Tests
Humans
Language Processing
Lipreading
Listening Comprehension
Listening Skills
Noise
Patients
Pattern Recognition, Physiological
Phonetics
Pictorial Stimuli
Psychological Tests
Rehabilitation
Rehabilitation Programs
Semantics
Semiotics
Sentences
Sentences (Grammar)
Social Class
Speech
Speech Perception
Speech Production Measurement
Stimuli
Tests
Verbal communication
Word Recognition
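
The DOI listed under Bibliographic Details is the most durable handle on this article. As a convenience, the sketch below shows one way to pull the same bibliographic metadata programmatically from the public Crossref REST API; it is a minimal illustration and not part of the catalog record itself, the requests dependency is assumed, and the User-Agent contact address is a placeholder to replace with your own.

    # Minimal sketch: resolve the article's DOI against the public Crossref
    # REST API and print a compact citation. JSON field names follow Crossref
    # conventions; the mailto contact below is a placeholder.
    import requests

    DOI = "10.1044/2015_JSLHR-H-14-0360"

    resp = requests.get(
        f"https://api.crossref.org/works/{DOI}",
        headers={"User-Agent": "record-lookup/0.1 (mailto:you@example.org)"},
        timeout=10,
    )
    resp.raise_for_status()
    work = resp.json()["message"]

    # Assemble "Family, Given" author strings from the returned metadata.
    authors = "; ".join(
        f"{a.get('family', '')}, {a.get('given', '')}" for a in work.get("author", [])
    )
    print(authors)                                # e.g., Spehar, Brent; Goebel, Stacey; ...
    print(work["title"][0])                       # article title
    print(work.get("container-title", [""])[0])   # journal title
    print(work.get("volume"), work.get("issue"), work.get("page"))

Any HTTP client would work equally well; requests is used here only to keep the example short.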