
Investigating the Robustness of Natural Language Generation from Logical Forms via Counterfactual Samples

The aim of Logic2Text is to generate controllable and faithful texts conditioned on tables and logical forms, which not only requires a deep understanding of the tables and logical forms, but also warrants symbolic reasoning over the tables. State-of-the-art methods based on pre-trained models have achieved remarkable performance on the standard test dataset. However, we question whether these methods really learn how to perform logical reasoning, rather than just relying on spurious correlations between the headers of the tables and the operators of the logical form. To verify this hypothesis, we manually construct a set of counterfactual samples, which modify the original logical forms to generate counterfactual logical forms with rarely co-occurring table headers and logical operators. SOTA methods give much worse results on these counterfactual samples than on the original test dataset, which verifies our hypothesis. To deal with this problem, we first analyze this bias from a causal perspective, based on which we propose two approaches to reduce the model's reliance on the shortcut. The first incorporates the hierarchical structure of the logical forms into the model. The second exploits automatically generated counterfactual data for training. Automatic and manual experimental results on the original test dataset and the counterfactual dataset show that our method is effective in alleviating the spurious correlation. Our work points out the weakness of previous methods and takes a further step toward developing Logic2Text models with real logical reasoning ability.

Bibliographic Details
Published in: arXiv.org, 2022-10
Main Authors: Liu, Chengyuan, Gan, Leilei, Kuang, Kun, Wu, Fei
Format: Article
Language: English
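The counterfactual construction described in the abstract — modifying a logical form so that its table header and logical operator rarely co-occur — can be illustrated with a minimal sketch. This is not the authors' code: the dict-based logical-form representation, the header and operator names, and the `make_counterfactual` helper are all hypothetical.

```python
def make_counterfactual(logic_form, rare_headers):
    """Return a counterfactual copy of a logical form, swapping its header
    argument for one that rarely co-occurs with the operator."""
    op, args = logic_form["op"], list(logic_form["args"])
    # Replace the original header with a rarely co-occurring one, if known.
    args[0] = rare_headers.get(op, args[0])
    return {"op": op, "args": args}

# A hypothetical Logic2Text-style form: "the maximum of the 'score' column".
original = {"op": "max", "args": ["score"]}
# Hypothetical mapping from operator to a header it seldom appears with
# (here, a numeric operator paired with a text-valued column).
rare = {"max": "nationality"}

counterfactual = make_counterfactual(original, rare)
print(counterfactual)  # {'op': 'max', 'args': ['nationality']}
```

A model that has genuinely learned the operator semantics should still verbalize such a form correctly, whereas a model relying on header-operator co-occurrence statistics will degrade — which is the probe the paper's counterfactual test set implements.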
Identifier: EISSN: 2331-8422
Source: ProQuest - Publicly Available Content Database
Subjects: Cognition & reasoning; Datasets; Headers; Hypotheses; Operators; Reasoning; Speech recognition; Structural hierarchy