
The Relationship Between Embedding and Deductive Reasoning Ability on BERT


Bibliographic Details
Main Authors: Ryu, Jongwon, Kim, Junyeong
Format: Conference Proceeding
Language: English
Description
Summary: BERT is a pre-trained language model that has been shown to be effective for a variety of tasks, including natural language inference (NLI) and textual entailment. However, it is not clear how well BERT can identify logical correctness or inconsistency. In this paper, we investigate the ability of BERT to distinguish between logically correct sentences and those containing logical contradictions. We do this by comparing the similarity scores of these two types of sentences across a variety of BERT models. We conducted these experiments to understand the relationship between BERT embeddings and logical reasoning ability. We find that BERT captures some aspects of the logical relationships between words and sentences; however, its logical reasoning ability varies with the complexity of the logic involved. Our findings contribute to the understanding of how language models can be used for deductive reasoning tasks. In addition, we are the first to introduce a zero-shot evaluation dataset for deductive reasoning. We believe that our work could be used to improve the performance of language models on these tasks.
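
The similarity comparison described in the summary can be illustrated with a minimal sketch: embed a premise and two candidate conclusions with a BERT model, then compare cosine similarities. The model name, mean-pooling strategy, and example sentences below are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: compare mean-pooled BERT embeddings of a logically consistent
# conclusion vs. a contradictory one against the same premise.
# "bert-base-uncased", the pooling choice, and the sentences are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Mean-pool the last hidden states over non-padding tokens."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden)
    mask = inputs["attention_mask"].unsqueeze(-1)    # (1, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

premise = "All birds can fly, and a penguin is a bird."
consistent = "Therefore, a penguin can fly."
contradictory = "Therefore, a penguin cannot fly."

p = embed(premise)
sim_consistent = torch.cosine_similarity(p, embed(consistent)).item()
sim_contradictory = torch.cosine_similarity(p, embed(contradictory)).item()
print(f"similarity (consistent conclusion):    {sim_consistent:.3f}")
print(f"similarity (contradictory conclusion): {sim_contradictory:.3f}")
```

If the embeddings captured deductive structure well, the consistent conclusion would score noticeably higher than the contradictory one; the paper reports that how reliably this holds depends on the complexity of the logic involved.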
ISSN:2158-4001
DOI:10.1109/ICCE59016.2024.10444307