On Linear Learning with Manycore Processors

A new generation of manycore processors is on the rise that offers dozens and more cores on a chip and, in a sense, fuses host processor and accelerator. In this paper we target the efficient training of generalized linear models on these machines. We propose a novel approach for achieving parallelism which we call Heterogeneous Tasks on Homogeneous Cores (HTHC). It divides the problem into multiple fundamentally different tasks, which themselves are parallelized. For evaluation, we design a detailed, architecture-cognizant implementation of our scheme on a recent 72-core Knights Landing processor that is adaptive to the cache, memory, and core structure. Our library efficiently supports dense and sparse datasets as well as 4-bit quantized data for further possible gains in performance. We show benchmarks for Lasso and SVM with different data sets against straightforward parallel implementations and prior software. In particular, for Lasso on dense data, we improve the state-of-the-art by an order of magnitude.
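To make the setting concrete, below is a minimal, single-threaded sketch of coordinate descent for Lasso, the kind of per-coordinate update that such a trainer parallelizes. It is illustrative only and not the paper's HTHC implementation; the names (lasso_cd, A, b, lam, n_epochs) are our own, and the objective assumed is (1/2)*||A@x - b||^2 + lam*||x||_1.

import numpy as np

def soft_threshold(v, t):
    # Soft-thresholding operator: the proximal step for the L1 penalty.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_cd(A, b, lam, n_epochs=100):
    """Cyclic coordinate descent for 0.5*||A@x - b||^2 + lam*||x||_1."""
    _, n_features = A.shape
    x = np.zeros(n_features)
    residual = b - A @ x                   # maintained incrementally below
    col_sq_norms = (A ** 2).sum(axis=0)    # ||A_j||^2 for each coordinate j
    for _ in range(n_epochs):
        for j in range(n_features):
            if col_sq_norms[j] == 0.0:
                continue
            # Correlation of column j with the residual, with coordinate j's
            # own contribution added back, so the closed-form
            # single-coordinate minimizer applies.
            rho = A[:, j] @ residual + col_sq_norms[j] * x[j]
            x_new = soft_threshold(rho, lam) / col_sq_norms[j]
            residual += A[:, j] * (x[j] - x_new)   # cheap O(n) residual update
            x[j] = x_new
    return x

# Example use: x = lasso_cd(A, b, lam=0.1)

In a scheme like HTHC, updates of this kind would form one of several concurrently running, fundamentally different tasks, each mapped to its own group of cores. The abstract does not spell out the split; a plausible companion task is one that scores coordinates to decide which ones to update next.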
Bibliographic Details
Main Authors: Wszola, Eliza; Mendler-Dünner, Celestine; Jaggi, Martin; Püschel, Markus
Format: Conference Proceeding
Language: English
Subjects: Computational modeling; coordinate descent; GLM; Instruction sets; Lasso; Machine learning; Manycore; Manycore processors; Support vector machines; SVM; Task analysis
DOI: 10.1109/HiPC.2019.00032
Publisher: IEEE
Published: December 2019
EISBN: 9781728145358; 172814535X
CODEN: IEEPAD
EISSN: 2640-0316
Published in: 2019 IEEE 26th International Conference on High Performance Computing, Data, and Analytics (HiPC), 2019, pp. 184-194
Source: IEEE Xplore All Conference Series