Improving Robustness and Uncertainty Modelling in Neural Ordinary Differential Equations
Neural ordinary differential equations (NODE) have been proposed as a continuous depth generalization to popular deep learning models such as Residual networks (ResNets). They provide parameter efficiency and automate the model selection process in deep learning models to some extent. However, they...
Published in: | arXiv.org 2021-12-23 |
---|---|
Main Authors: | Anumasa, Srinivas; Srijith, P K |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Get full text |
cited_by | |
---|---|
cites | |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Anumasa, Srinivas; Srijith, P K |
description | Neural ordinary differential equations (NODE) have been proposed as a continuous-depth generalization of popular deep learning models such as Residual networks (ResNets). They provide parameter efficiency and, to some extent, automate the model selection process in deep learning. However, they lack the much-needed uncertainty modelling and robustness capabilities that are crucial for their use in several real-world applications such as autonomous driving and healthcare. We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time \(T\) of the ODE solver. The proposed approach, latent time NODE (LT-NODE), treats \(T\) as a latent variable and applies Bayesian learning to obtain a posterior distribution over \(T\) from the data. In particular, we use variational inference to learn an approximate posterior and the model parameters. Prediction is done by considering the NODE representations from different samples of the posterior and can be done efficiently using a single forward pass. As \(T\) implicitly defines the depth of a NODE, the posterior distribution over \(T\) would also help in model selection in NODE. We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times. ALT-NODE uses amortized variational inference to learn an approximate posterior using inference networks. We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets. |
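The prediction scheme described in the abstract can be sketched in a few lines: integrate the ODE once up to a maximum time while storing intermediate states, then average predictions taken at end-times sampled from an approximate posterior over \(T\). This is a minimal stdlib-only illustration, not the authors' implementation; the names `lt_node_predict`, `mu`, and `sigma`, the Gaussian form of the posterior, the fixed-step Euler solver, and the toy scalar dynamics are all illustrative assumptions (LT-NODE uses a learned neural dynamics function and variational parameters fit to data).

```python
import random

def euler_trajectory(f, x0, t_max, n_steps):
    """Euler-integrate dx/dt = f(x, t), storing the state at every step
    so any intermediate end-time can be read off from one forward pass."""
    h = t_max / n_steps
    traj = [x0]
    x, t = x0, 0.0
    for _ in range(n_steps):
        x = x + h * f(x, t)
        t += h
        traj.append(x)
    return traj  # traj[k] approximates x(k * h)

def lt_node_predict(f, x0, head, mu, sigma, n_samples=10, t_max=2.0, n_steps=100):
    """Monte-Carlo prediction: draw end-times T from an assumed Gaussian
    approximate posterior q(T) = N(mu, sigma^2), look up the stored state
    nearest each sampled T, and average the prediction head's outputs."""
    traj = euler_trajectory(f, x0, t_max, n_steps)
    h = t_max / n_steps
    preds = []
    for _ in range(n_samples):
        T = min(max(random.gauss(mu, sigma), 0.0), t_max)  # clamp into [0, t_max]
        k = round(T / h)  # index of the nearest stored state
        preds.append(head(traj[k]))
    return sum(preds) / len(preds)

# Toy example: scalar dynamics dx/dt = -x with an identity prediction head.
random.seed(0)
f = lambda x, t: -x
pred = lt_node_predict(f, 1.0, head=lambda x: x, mu=1.0, sigma=0.1)
print(pred)  # near exp(-1), since x(T) = exp(-T) and T concentrates around 1
```

Note that the expensive part, the ODE solve, happens once; uncertainty comes only from where along the stored trajectory each posterior sample of \(T\) lands, which is what makes single-forward-pass prediction possible.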
doi_str_mv | 10.48550/arxiv.2112.12707 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-12 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2613417383 |
source | Publicly Available Content (ProQuest) |
subjects | Data points; Deep learning; Differential equations; Image classification; Inference; Machine learning; Mathematical models; Modelling; Nodes; Ordinary differential equations; Parameters; Robustness; Uncertainty |
title | Improving Robustness and Uncertainty Modelling in Neural Ordinary Differential Equations |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-05T21%3A22%3A13IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Improving%20Robustness%20and%20Uncertainty%20Modelling%20in%20Neural%20Ordinary%20Differential%20Equations&rft.jtitle=arXiv.org&rft.au=Anumasa,%20Srinivas&rft.date=2021-12-23&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2112.12707&rft_dat=%3Cproquest%3E2613417383%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-a523-633de114fed1251794684da4598fb3f270b9a43f127bd5759e182b9b75dc93d83%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2613417383&rft_id=info:pmid/&rfr_iscdi=true |