Child-Centric Robot Dialogue Systems: Fine-Tuning Large Language Models for Better Utterance Understanding and Interaction
Dialogue systems must understand children's utterance intentions by considering their unique linguistic characteristics, such as syntactic incompleteness, pronunciation inaccuracies, and creative expressions, to enable natural conversational engagement in child-robot interactions. Even state-of-the-art large language models (LLMs) for language understanding and contextual awareness cannot comprehend children's intent as accurately as humans because of their distinctive features.
Published in: | Sensors (Basel, Switzerland), 2024-12, Vol.24 (24), p.7939 |
---|---|
Main Authors: | Kim, Da-Young; Lym, Hyo Jeong; Lee, Hanna; Lee, Ye Jun; Kim, Juhyun; Kim, Min-Gyu; Baek, Yunju |
Format: | Article |
Language: | English |
cited_by | |
---|---|
cites | cdi_FETCH-LOGICAL-c343t-5ebfedc4eda94f1732b225cc3b5584eca2df7019da5e39e4f1970163b8e826d73 |
container_end_page | |
container_issue | 24 |
container_start_page | 7939 |
container_title | Sensors (Basel, Switzerland) |
container_volume | 24 |
creator | Kim, Da-Young; Lym, Hyo Jeong; Lee, Hanna; Lee, Ye Jun; Kim, Juhyun; Kim, Min-Gyu; Baek, Yunju |
description | Dialogue systems must understand children's utterance intentions by considering their unique linguistic characteristics, such as syntactic incompleteness, pronunciation inaccuracies, and creative expressions, to enable natural conversational engagement in child-robot interactions. Even state-of-the-art large language models (LLMs) for language understanding and contextual awareness cannot comprehend children's intent as accurately as humans because of their distinctive features. An LLM-based dialogue system should acquire the manner by which humans understand children's speech to enhance its intention reasoning performance in verbal interactions with children. To this end, we propose a fine-tuning methodology that utilizes the LLM-human judgment discrepancy and interactive response data. The former data represent cases in which the LLM and human judgments of the contextual appropriateness of a child's answer to a robot's question diverge. The latter data involve robot responses suitable for children's utterance intentions, generated by the LLM. We developed a fine-tuned dialogue system using these datasets to achieve human-like interpretations of children's utterances and to respond adaptively. Our system was evaluated through human assessment using the Robotic Social Attributes Scale (RoSAS) and Sensibleness and Specificity Average (SSA) metrics. Consequently, it supports the effective interpretation of children's utterance intentions and enables natural verbal interactions, even in cases with syntactic incompleteness and mispronunciations. |
doi_str_mv | 10.3390/s24247939 |
format | article |
eissn | 1424-8220 |
pmid | 39771676 |
publisher | Switzerland: MDPI AG |
publicationdate | 2024-12-12 |
rights | COPYRIGHT 2024 MDPI AG; 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
orcidid | https://orcid.org/0009-0006-5149-3054; https://orcid.org/0000-0002-2071-8779; https://orcid.org/0000-0002-5624-7008; https://orcid.org/0000-0003-0744-8812; https://orcid.org/0000-0002-3998-3927; https://orcid.org/0000-0002-3771-8070 |
fulltext | fulltext |
identifier | ISSN: 1424-8220 |
ispartof | Sensors (Basel, Switzerland), 2024-12, Vol.24 (24), p.7939 |
issn | 1424-8220; 1424-8220 |
language | eng |
recordid | cdi_doaj_primary_oai_doaj_org_article_fc647ac8167d4ef28b042bde5ba2c1ce |
source | PubMed Central (Open Access); Publicly Available Content Database; Coronavirus Research Database |
subjects | Adaptation; Artificial intelligence; Behavior; Child; Child, Preschool; Children; Children & youth; child–robot interaction; Comprehension - physiology; Customization; Datasets; Design; dialogue system; Emotions; Feedback; Humans; human–robot interaction; Interactive computer systems; Language; Large language models; Linguistics; Natural language processing; Pandemics; Robotics - methods; Robots; social robots; Speech; Verbal communication; Voice recognition |
title | Child-Centric Robot Dialogue Systems: Fine-Tuning Large Language Models for Better Utterance Understanding and Interaction |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-10T13%3A11%3A05IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_doaj_&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Child-Centric%20Robot%20Dialogue%20Systems:%20Fine-Tuning%20Large%20Language%20Models%20for%20Better%20Utterance%20Understanding%20and%20Interaction&rft.jtitle=Sensors%20(Basel,%20Switzerland)&rft.au=Kim,%20Da-Young&rft.date=2024-12-12&rft.volume=24&rft.issue=24&rft.spage=7939&rft.pages=7939-&rft.issn=1424-8220&rft.eissn=1424-8220&rft_id=info:doi/10.3390/s24247939&rft_dat=%3Cgale_doaj_%3EA821975767%3C/gale_doaj_%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c343t-5ebfedc4eda94f1732b225cc3b5584eca2df7019da5e39e4f1970163b8e826d73%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=3149755124&rft_id=info:pmid/39771676&rft_galeid=A821975767&rfr_iscdi=true |
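The description field above outlines two kinds of fine-tuning data: cases where LLM and human judgments of the contextual appropriateness of a child's answer diverge, and robot responses generated for children's utterance intentions. Below is a minimal, hypothetical sketch of how such paired records could be assembled into a chat-style JSONL fine-tuning file; the dialogue text, field names, message layout, and output path are illustrative assumptions, not the authors' actual data format or schema.

```python
import json

# Hypothetical records illustrating the two data types described in the record's
# description field:
#   (1) a case where LLM and human judgments of a child's answer diverge, and
#   (2) a robot response suited to the child's utterance intention.
# All dialogue text, field names, and the output path are invented for illustration.
discrepancy_case = {
    "robot_question": "What did you eat for lunch today?",
    "child_answer": "I eated pasgetti!",           # child-typical syntax/pronunciation errors
    "llm_judgment": "not appropriate to context",  # the LLM misreads the answer
    "human_judgment": "appropriate to context",    # a human recovers the intent
}

interactive_response = {
    "child_utterance": "I eated pasgetti!",
    "robot_response": "Spaghetti sounds yummy! Did you like it?",
}


def to_chat_example(case: dict, response: dict) -> dict:
    """Convert one paired record into a chat-style fine-tuning example.

    The system/user/assistant layout is a common instruction-tuning format,
    not necessarily the one used by the authors.
    """
    return {
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a robot talking with a young child. Interpret the "
                    "child's intent despite incomplete syntax or mispronunciations, "
                    "then respond naturally."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Robot asked: {case['robot_question']}\n"
                    f"Child said: {case['child_answer']}"
                ),
            },
            {
                "role": "assistant",
                "content": (
                    f"The child's answer is {case['human_judgment']}. "
                    f"{response['robot_response']}"
                ),
            },
        ]
    }


if __name__ == "__main__":
    # Write a one-line JSONL file of the kind many fine-tuning APIs accept.
    with open("child_dialogue_finetune.jsonl", "w", encoding="utf-8") as f:
        example = to_chat_example(discrepancy_case, interactive_response)
        f.write(json.dumps(example, ensure_ascii=False) + "\n")
```

A real pipeline would iterate over many annotated child–robot exchanges and validate each example against the schema expected by the chosen fine-tuning API.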