Evaluate the accurate prediction of text summarization using novel long short term memory algorithm in comparison with random forest
Main Authors: | Rupesh, Kollaikal; Christy, S.
---|---
Format: | Conference Proceeding
Language: | English
Subjects: | Accuracy; Algorithms; Confidence intervals; Forest management
container_issue | 1
---|---
container_title | AIP conference proceedings
container_volume | 3193
contributor | Srinivasan, R; Balasubramanian, PL; Seenivasan, M; Sharma, T. Rakesh; Vijayan, V.; Babu, A. B. Karthick Anand
creator | Rupesh, Kollaikal; Christy, S.
description | The goal of this proposed research is to find an alternative to the Random Forest method for text summarisation that is both more accurate and more efficient at reducing lengthy texts to a manageable size. Using a sample size of 63,459 calculated with ClinCalc (G power of 0.8, alpha of 0.05, and a 95% confidence level), we employed two algorithms: Novel Long Short Term Memory (N=10) and Random Forest (N=10), evaluated on the accuracy of the text summaries they produce. Results and Discussion: On the dataset, Novel Long Short Term Memory achieves an accuracy of 94.45% and Random Forest achieves 73.57% when summarising the text. According to the Independent Sample T-Test, the significance value for accuracy is 0.001 (p<0.05). Conclusion: The Novel Long Short Term Memory method is substantially more accurate than Random Forest for text summarisation. (An illustrative sketch of this group comparison follows the record below.)
doi_str_mv | 10.1063/5.0233047
format | conference_proceeding
identifier | ISSN: 0094-243X |
ispartof | AIP conference proceedings, 2024, Vol.3193 (1) |
issn | 0094-243X (ISSN); 1551-7616 (EISSN)
language | eng |
recordid | cdi_proquest_journals_3126775126 |
source | American Institute of Physics:Jisc Collections:Transitional Journals Agreement 2021-23 (Reading list) |
subjects | Accuracy; Algorithms; Confidence intervals; Forest management
title | Evaluate the accurate prediction of text summarization using novel long short term memory algorithm in comparison with random forest |
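The abstract reports only summary statistics: two independent groups of N=10 accuracy scores compared with an Independent Sample T-Test at alpha = 0.05, with a reported significance of 0.001. The following is a minimal sketch of that style of comparison, not the authors' implementation; the accuracy values are illustrative placeholders rather than the study's data, and SciPy's `ttest_ind` is assumed as the test routine.

```python
# Minimal sketch: compare two independent groups of accuracy scores (N=10 each)
# with an independent-samples t-test. Values are placeholders, NOT the study's data.
from scipy import stats

lstm_accuracy = [94.1, 94.6, 94.3, 94.8, 94.2, 94.7, 94.5, 94.4, 94.6, 94.3]  # placeholder scores (%)
rf_accuracy   = [73.2, 73.9, 73.5, 73.8, 73.4, 73.6, 73.7, 73.3, 73.9, 73.5]  # placeholder scores (%)

# Independent Sample T-Test, as described in the abstract (alpha = 0.05).
t_stat, p_value = stats.ttest_ind(lstm_accuracy, rf_accuracy)

print(f"mean LSTM accuracy:          {sum(lstm_accuracy) / len(lstm_accuracy):.2f}%")
print(f"mean Random Forest accuracy: {sum(rf_accuracy) / len(rf_accuracy):.2f}%")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at alpha = 0.05" if p_value < 0.05 else "not significant at alpha = 0.05")
```

With group means roughly twenty percentage points apart and small within-group variance, the test yields a p-value far below 0.05, which is consistent with the significance level of 0.001 reported in the abstract.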