
American Sign Language Recognition Using RF Sensing

Many technologies for human-computer interaction have been designed for hearing individuals and depend upon vocalized speech, precluding users of American Sign Language (ASL) in the Deaf community from benefiting from these advancements. While great strides have been made in ASL recognition with video or wearable gloves, the use of video in homes has raised privacy concerns, while wearable gloves severely restrict movement and infringe on daily life. Methods: This article proposes the use of RF sensors for HCI applications serving the Deaf community. A multi-frequency RF sensor network is used to acquire non-invasive, non-contact measurements of ASL signing irrespective of lighting conditions. The unique patterns of motion present in the RF data due to the micro-Doppler effect are revealed using time-frequency analysis with the Short-Time Fourier Transform. Linguistic properties of RF ASL data are investigated using machine learning (ML). Results: The information content of ASL signing, measured by fractal complexity, is shown to be greater than that of other upper body activities encountered in daily living. This can be used to differentiate daily activities from signing, while features from RF data show that imitation signing by non-signers is 99% differentiable from native ASL signing. Feature-level fusion of RF sensor network data is used to achieve 72.5% accuracy in classification of 20 native ASL signs. Implications: RF sensing can be used to study dynamic linguistic properties of ASL and design Deaf-centric smart environments for non-invasive, remote recognition of ASL. ML algorithms should be benchmarked on native, not imitation, ASL data.
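The abstract's central processing step is revealing micro-Doppler signatures in the RF returns with the Short-Time Fourier Transform. The following minimal Python sketch (not the authors' code) illustrates how such a micro-Doppler spectrogram could be computed with SciPy; the synthetic I/Q signal, sampling rate, and window parameters are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch: micro-Doppler spectrogram of a radar slow-time return via the
# Short-Time Fourier Transform. The synthetic signal below (a sinusoidally
# modulated Doppler tone) stands in for real I/Q samples from an RF sensor.
import numpy as np
from scipy import signal

fs = 1000.0                      # slow-time sampling rate (Hz) -- assumed value
t = np.arange(0, 2.0, 1.0 / fs)  # 2 s observation window

# Synthetic complex return: a bulk Doppler shift plus a sinusoidal micro-Doppler
# modulation, loosely mimicking oscillating limb motion.
f_body = 30.0                    # bulk Doppler shift (Hz)
f_mod, mod_depth = 2.0, 40.0     # modulation rate and depth (Hz)
phase = 2 * np.pi * (f_body * t + (mod_depth / (2 * np.pi * f_mod))
                     * np.sin(2 * np.pi * f_mod * t))
iq = np.exp(1j * phase) + 0.1 * (np.random.randn(t.size)
                                 + 1j * np.random.randn(t.size))

# STFT: the window length trades Doppler resolution against time resolution.
f, tau, Zxx = signal.stft(iq, fs=fs, nperseg=128, noverlap=96,
                          return_onesided=False)

# Shift the two-sided spectrum so negative Doppler (motion away from the sensor)
# sits below zero, then convert to dB for display or as classifier input.
spectrogram_db = 20 * np.log10(np.abs(np.fft.fftshift(Zxx, axes=0)) + 1e-12)
doppler_axis = np.fft.fftshift(f)
print(spectrogram_db.shape, doppler_axis.min(), doppler_axis.max())
```

In the pipeline the abstract describes, features derived from spectrograms of this kind, collected across the multi-frequency sensor network, would be combined through feature-level fusion before classification.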

Bibliographic Details
Published in: IEEE Sensors Journal, 2021-02, Vol. 21 (3), pp. 3763-3775
Main Authors: Gurbuz, Sevgi Z.; Gurbuz, Ali Cafer; Malaia, Evie A.; Griffin, Darrin J.; Crawford, Chris S.; Rahman, Mohammad Mahbubur; Kurtoglu, Emre; Aksu, Ridvan; Macks, Trevor; Mdrafi, Robiulhossain
Format: Article
Language: English
Publisher: New York: IEEE
ISSN: 1530-437X
EISSN: 1558-1748
DOI: 10.1109/JSEN.2020.3022376
Subjects: Algorithms; American sign language; Assistive technology; Auditory system; Deafness; Doppler effect; Electronic mail; Fourier transforms; Gesture recognition; Gloves; Human-computer interface; Linguistics; Machine learning; micro-Doppler; radar; Radio frequency; Recognition; RF sensing; Sensors; Sign language; Time-frequency analysis; Wearable technology