MAFIA: Machine Learning Acceleration on FPGAs for IoT Applications
Recent breakthroughs in ML have produced new classes of models that allow ML inference to run directly on milliwatt-powered IoT devices. On one hand, existing ML-to-FPGA compilers are designed for deep neural networks on large FPGAs. On the other hand, general-purpose HLS tools fail to exploit properties specific to ML inference, thereby resulting in suboptimal performance. We propose MAFIA, a tool to compile ML inference on small form-factor FPGAs for IoT applications. MAFIA provides native support for linear algebra operations and can express a variety of ML algorithms, including state-of-the-art models. We show that MAFIA-generated programs outperform the best-performing variant of a commercial HLS compiler by 2.5× on average.
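The abstract notes that MAFIA natively supports linear algebra operations for expressing ML inference. This record does not show MAFIA's actual input language, so the following is only an illustrative Python sketch (the names `matvec`, `predict`, `W`, `b` are hypothetical) of the kind of computation such a compiler targets: a small linear-model inference written purely as matrix-vector algebra.

```python
# Illustrative only: not MAFIA syntax. A linear classifier's inference
# reduces to a dense matrix-vector product plus a bias and an argmax --
# exactly the linear-algebra primitives the abstract says MAFIA supports.

def matvec(W, x):
    """Dense matrix-vector product W @ x, the core inference primitive."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

def predict(W, b, x):
    """Score each class as W @ x + b and return the index of the best class."""
    scores = [s + bi for s, bi in zip(matvec(W, x), b)]
    return max(range(len(scores)), key=scores.__getitem__)

# Toy 2-class model over 3 input features (hypothetical weights).
W = [[1.0, -2.0, 0.5],
     [-1.0, 2.0, 0.5]]
b = [0.1, -0.1]
print(predict(W, b, [1.0, 0.0, 1.0]))  # prints 0: class 0 scores 1.6 vs -0.6
```

On a small FPGA, a compiler with native knowledge of these operations can pipeline the multiply-accumulate loop and size the datapath to the model, which general-purpose HLS tools must infer from generic loop code.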
Main Authors: | Ghanathe, Nikhil P; Seshadri, Vivek; Sharma, Rahul; Wilton, Steve; Kumar, Aayan |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Compiler; FPGA; Hardware Acceleration; High level synthesis; Internet of Things; Linear algebra; Machine learning; Machine learning algorithms; Machine learning Inference; Neural networks; Performance evaluation; Program processors; Resource constrained devices |
container_end_page | 354 |
---|---|
container_start_page | 347 |
creator | Ghanathe, Nikhil P; Seshadri, Vivek; Sharma, Rahul; Wilton, Steve; Kumar, Aayan |
description | Recent breakthroughs in ML have produced new classes of models that allow ML inference to run directly on milliwatt-powered IoT devices. On one hand, existing ML-to-FPGA compilers are designed for deep neural networks on large FPGAs. On the other hand, general-purpose HLS tools fail to exploit properties specific to ML inference, thereby resulting in suboptimal performance. We propose MAFIA, a tool to compile ML inference on small form-factor FPGAs for IoT applications. MAFIA provides native support for linear algebra operations and can express a variety of ML algorithms, including state-of-the-art models. We show that MAFIA-generated programs outperform the best-performing variant of a commercial HLS compiler by 2.5× on average. |
doi_str_mv | 10.1109/FPL53798.2021.00067 |
format | conference_proceeding |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 1946-1488; EISBN: 1665437596; EISBN: 9781665437592; DOI: 10.1109/FPL53798.2021.00067; CODEN: IEEPAD |
ispartof | 2021 31st International Conference on Field-Programmable Logic and Applications (FPL), 2021, p.347-354 |
issn | 1946-1488 |
language | eng |
recordid | cdi_ieee_primary_9556450 |
source | IEEE Xplore All Conference Series |
subjects | Compiler; FPGA; Hardware Acceleration; High level synthesis; Internet of Things; Linear algebra; Machine learning; Machine learning algorithms; Machine learning Inference; Neural networks; Performance evaluation; Program processors; Resource constrained devices |
title | MAFIA: Machine Learning Acceleration on FPGAs for IoT Applications |