
Syntactically Meaningful and Transferable Recursive Neural Networks for Aspect and Opinion Extraction

Bibliographic Details
Published in: Computational Linguistics - Association for Computational Linguistics, 2020-01, Vol. 45 (4), p. 705-736
Main Authors: Wang, Wenya; Pan, Sinno Jialin
Format: Article
Language: English
Publisher: MIT Press (Cambridge, MA)
Abstract: In fine-grained opinion mining, extracting aspect terms (a.k.a. opinion targets) and opinion terms (a.k.a. opinion expressions) from user-generated texts is the most fundamental task in order to generate structured opinion summarization. Existing studies have shown that the syntactic relations between aspect and opinion words play an important role in aspect and opinion term extraction. However, most of the works either relied on predefined rules or separated relation mining from feature learning. Moreover, these works only focused on single-domain extraction, which failed to adapt well to other domains of interest where only unlabeled data are available. In real-world scenarios, annotated resources are extremely scarce for many domains, motivating knowledge transfer strategies from labeled source domain(s) to any unlabeled target domain. We observe that syntactic relations among target words to be extracted are not only crucial for single-domain extraction, but also serve as invariant “pivot” information to bridge the gap between different domains. In this article, we explore the constructions of recursive neural networks based on the dependency tree of each sentence for associating syntactic structure with feature learning. Furthermore, we construct transferable recursive neural networks to automatically learn the domain-invariant fine-grained interactions among aspect words and opinion words. The transferability is built on an auxiliary task and a conditional domain adversarial network to reduce domain distribution difference in the hidden spaces effectively at the word level through syntactic relations. Specifically, the auxiliary task builds structural correspondences across domains by predicting the dependency relation for each path of the dependency tree in the recursive neural network. The conditional domain adversarial network helps to learn domain-invariant hidden representations for each word conditioned on the syntactic structure. In the end, we integrate the recursive neural network with a sequence labeling classifier on top that models contextual influence in the final predictions. Extensive experiments and analysis are conducted to demonstrate the effectiveness of the proposed model and each component on three benchmark data sets.
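The abstract describes composing word representations bottom-up along a dependency tree. A minimal, hypothetical sketch of that idea is below; the toy sentence, its parse, and the two composition matrices are invented for illustration (the paper's actual model conditions composition on the dependency relation type, which this sketch omits).

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding/hidden size

# Toy sentence with one head index per token (-1 marks the root).
# These arcs are illustrative, not output from a real parser.
tokens = ["the", "food", "is", "great"]
heads  = [1, 3, 3, -1]          # "the"->"food", "food"->"great", "is"->"great"
embed  = {t: rng.normal(size=D) for t in tokens}

# One shared matrix for self and one for children (a simplification).
W_self  = rng.normal(size=(D, D)) * 0.1
W_child = rng.normal(size=(D, D)) * 0.1

def encode(i):
    """Recursively compose node i's hidden vector from its dependents, bottom-up."""
    h = W_self @ embed[tokens[i]]
    for j, head in enumerate(heads):
        if head == i:
            h = h + W_child @ encode(j)   # fold each dependent's subtree in
    return np.tanh(h)

root = heads.index(-1)
h_root = encode(root)                               # vector at the dependency root
hidden = [encode(i) for i in range(len(tokens))]    # per-token hidden states
```

Each token thus gets a hidden state that already reflects its syntactic neighborhood, which is what the downstream sequence labeler and the adversarial components operate on.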
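The domain adversarial component mentioned in the abstract is commonly trained with a gradient-reversal trick: the domain classifier minimizes domain-prediction loss while the feature extractor receives the negated gradient, pushing word-level features toward domain invariance. The sketch below is a hypothetical, stripped-down NumPy version with made-up toy data and a linear "extractor"; it is not the paper's conditional architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8

# Toy word-level features from two "domains" (e.g., restaurant vs. laptop reviews).
X = rng.normal(size=(40, D))
dom = np.array([0] * 20 + [1] * 20, dtype=float)   # domain label per word
X[dom == 1] += 1.0                                  # inject an artificial domain shift

W_f = rng.normal(size=(D, D)) * 0.1                 # stand-in feature extractor
w_d = rng.normal(size=D) * 0.1                      # domain classifier head
lam, lr = 0.5, 0.05                                 # reversal strength, step size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

for _ in range(50):
    H = X @ W_f.T                                   # word-level hidden features
    p = sigmoid(H @ w_d)                            # predicted prob of domain 1
    err = p - dom                                   # dLoss/dlogit for logistic loss
    g_wd = H.T @ err / len(X)                       # gradient for the domain head
    g_H  = np.outer(err, w_d) / len(X)              # gradient flowing into features
    g_Wf = g_H.T @ X                                # chain rule into W_f
    w_d -= lr * g_wd                                # head: minimize domain loss
    W_f -= lr * (-lam * g_Wf)                       # extractor: REVERSED gradient
```

The single sign flip on the last line is the entire adversarial mechanism: head and extractor play a minimax game over the domain loss, so features that betray the domain are gradually suppressed.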
DOI: 10.1162/coli_a_00362
ISSN: 0891-2017
EISSN: 1530-9312
Source: EBSCOhost MLA International Bibliography With Full Text; Association for Computing Machinery: Jisc Collections: ACM OPEN Journals 2023-2025 (reading list); Linguistics and Language Behavior Abstracts (LLBA)
Subjects: Data mining; Dependence; Domains; Feature extraction; Grammatical relations; Invariants; Knowledge management; Learning; Neural networks; Recursion; Sentence structure; Sentiment analysis; Summarization; Syntactic structures; Syntax