FREB-TQA: A Fine-Grained Robustness Evaluation Benchmark for Table Question Answering
Table Question Answering (TQA) aims at composing an answer to a question based on tabular data. While prior research has shown that TQA models lack robustness, understanding the underlying cause and nature of this issue remains predominantly unclear, posing a significant obstacle to the development of robust TQA systems. In this paper, we formalize three major desiderata for a fine-grained evaluation of robustness of TQA systems. They should (i) answer questions regardless of alterations in table structure, (ii) base their responses on the content of relevant cells rather than on biases, and (iii) demonstrate robust numerical reasoning capabilities. To investigate these aspects, we create and publish a novel TQA evaluation benchmark in English. Our extensive experimental analysis reveals that none of the examined state-of-the-art TQA systems consistently excels in these three aspects. Our benchmark is a crucial instrument for monitoring the behavior of TQA systems and paves the way for the development of robust TQA systems. We release our benchmark publicly.
Published in: | arXiv.org, 2024-04 |
---|---|
Main Authors: | Zhou, Wei; Mesgar, Mohsen; Adel, Heike; Friedrich, Annemarie |
Format: | Article |
Language: | English |
Subjects: | Benchmarks; Questions |
EISSN: | 2331-8422 |
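To make the abstract's first desideratum concrete, below is a minimal illustrative sketch of how one might probe a TQA system's robustness to table-structure alterations: a robust system should return the same answer before and after a meaning-preserving perturbation, such as shuffling the row order. This is not code from the paper; `shuffle_rows`, `answer_is_stable`, and `toy_model` are hypothetical names introduced here for illustration.

```python
import random

# Illustrative sketch only -- not code from the paper. It probes
# desideratum (i): a robust TQA system should answer identically
# when the table undergoes a meaning-preserving structural change,
# here exemplified by shuffling the row order.

def shuffle_rows(table, seed=0):
    """Return a copy of a table (header row + data rows) with the
    data rows shuffled; row order carries no meaning in most tables."""
    header, rows = table[0], table[1:]
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    return [header] + shuffled

def answer_is_stable(model_answer, table, question):
    """Check that a TQA model (any callable taking a table and a
    question) gives the same answer on the perturbed table."""
    return model_answer(table, question) == model_answer(
        shuffle_rows(table, seed=42), question
    )

# Toy stand-in for a real TQA model: answers "who scored the most
# points?" by scanning rows for the maximum of the points column.
def toy_model(table, question):
    header, rows = table[0], table[1:]
    col = header.index("points")
    return max(rows, key=lambda row: int(row[col]))[0]

table = [
    ["player", "points"],
    ["Alice", "31"],
    ["Bob", "27"],
]

print(answer_is_stable(toy_model, table, "who scored the most points?"))
# -> True for this order-invariant toy model; a brittle model,
#    e.g. one keyed to absolute row positions, would fail.
```

The same harness extends naturally to the other two aspects the abstract lists, for instance by swapping the perturbation for one that edits the content of relevant cells (aspect ii) or replaces numerical values (aspect iii) and checking whether the answer changes accordingly.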