Examining parallelization in kernel regression
For a few decades, parallelization in statistical computing has been an increasing trend, and researchers have put significant effort into converting or adjusting known statistical methods and algorithms to run in parallel. The main reasons for the transition to parallel processing are the rapid growth in the size and volume of data and accelerated hardware development. Divide and (re)combine (DnR) is a parallelization approach in which the existing data or method is divided into smaller pieces that are processed separately and then recombined. The DnR approach can be applied to most regression methods to reveal the relationships in the data. Although parallel libraries exist in many programming languages for a range of regression methods, such an approach has not yet been used for kernel regression, even though kernel regression is relatively expensive to compute. Parallelization is therefore a handy strategy for decreasing its calculation time. In this study, we demonstrate how time efficiency is achieved using DnR methods for kernel regression with the help of several parallelization strategies in R. The results indicate that computation time can be reduced proportionally, with a trade-off between time and accuracy.
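The divide-and-(re)combine workflow the abstract describes can be sketched in a few lines. The paper itself works in R; the sketch below uses Python instead, and every name in it (`dnr_kernel_regression`, `nw_predict`, the random chunking, and the simple averaging recombine step) is an illustrative assumption rather than the authors' implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def nw_predict(args):
    """Nadaraya-Watson estimate on one data chunk with a Gaussian kernel."""
    x_chunk, y_chunk, grid, h = args
    # one row of kernel weights per grid point, one column per observation
    w = np.exp(-0.5 * ((grid[:, None] - x_chunk[None, :]) / h) ** 2)
    return (w @ y_chunk) / w.sum(axis=1)

def dnr_kernel_regression(x, y, grid, h=0.1, n_chunks=4, seed=0):
    """Divide the data into random chunks, fit each in parallel, average."""
    rng = np.random.default_rng(seed)
    chunks = np.array_split(rng.permutation(len(x)), n_chunks)  # divide
    tasks = [(x[idx], y[idx], grid, h) for idx in chunks]
    # threads keep the sketch portable; process- or cluster-based workers
    # (as with R's parallel backends) would be used for heavier workloads
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:      # apply
        parts = list(pool.map(nw_predict, tasks))
    return np.mean(parts, axis=0)                               # (re)combine
```

Random rather than contiguous chunking keeps each piece representative of the whole predictor range, so averaging the per-chunk Nadaraya-Watson estimates is a reasonable recombine step: each worker's kernel-weight matrix, the dominant cost, shrinks with the chunk size, at the price of some estimation accuracy, which is the time/accuracy trade-off the abstract mentions.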
Published in: | Soft computing (Berlin, Germany), 2024, Vol.28 (1), p.205-215 |
---|---|
Main Authors: | Oltulu, Orcun; Gokalp Yavuz, Fulya |
Format: | Article |
Language: | English |
Subjects: | Artificial Intelligence; Computational Intelligence; Control; Data Analytics and Machine Learning; Engineering; Mathematical Logic and Foundations; Mechatronics; Robotics |
DOI: | 10.1007/s00500-023-09285-4 |
ISSN: | 1432-7643 |
EISSN: | 1433-7479 |
Publisher: | Springer Berlin Heidelberg |