
A non-gradient method for solving elliptic partial differential equations with deep neural networks

Deep learning has achieved wide success in solving partial differential equations (PDEs), with particular strength in handling high-dimensional and parametric problems. Nevertheless, a clear picture of how to design the network architecture and how to train the network parameters is still lacking. In this work, we develop a non-gradient framework for solving elliptic PDEs based on the Neural Tangent Kernel (NTK): 1. the ReLU activation function is used to control the compactness of the NTK, so that solutions with relatively high-frequency components can be well expressed; 2. numerical discretization is used for the differential operators to reduce the computational cost; 3. a dissipative evolution dynamics corresponding to the elliptic PDE is used for parameter training instead of gradient-type descent of a loss function. The dissipative dynamics guarantees convergence of the training process while avoiding loss functions with high-order derivatives; it also helps control the kernel properties and reduce the computational cost. Numerical tests show excellent performance of the non-gradient method.

Highlights:
• A non-gradient method is developed to solve elliptic PDEs using DNNs without introducing loss functions.
• The residual of the PDE is used for training, and the training kernel is the neural tangent kernel.
• The ReLU activation function is used to control the locality of the training kernel.
• Numerical discretization is used to reduce the computational cost.
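
The following note is an editorial gloss on the mechanism described in the abstract, written in our own notation rather than the paper's. For the model problem $-\Delta u = f$, the dissipative evolution whose steady state solves the PDE is the heat flow $\partial_t u = \Delta u + f$. Driving the parameters $\theta$ directly by the discretized residual, rather than by the gradient of a loss,

\[ \dot{\theta} = \eta \sum_i \nabla_\theta u_\theta(x_i)\, r_i, \qquad r_i = (\Delta_h u_\theta)(x_i) + f(x_i), \]

induces, to leading order, the function-space dynamics

\[ \partial_t u_\theta(x) = \eta \sum_i K_\theta(x, x_i)\, r_i, \qquad K_\theta(x, x') = \nabla_\theta u_\theta(x) \cdot \nabla_\theta u_\theta(x'), \]

a kernel-smoothed heat flow in which $K_\theta$ is the empirical neural tangent kernel. No loss function, and hence no second-order derivative of the network, ever needs to be formed; $\Delta_h$ denotes the numerically discretized operator from point 2 of the abstract.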

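As a concrete illustration, here is a minimal, self-contained sketch of such a residual-driven update for the 1-D Poisson problem -u'' = f on (0, 1) with zero boundary values, using a one-hidden-layer ReLU network. It is a reconstruction from the abstract, not the authors' code; the width, grid, step size, and boundary treatment are all illustrative choices, and the step size in particular needs tuning because the discretized Laplacian makes the dynamics stiff.

# Hypothetical sketch (not the authors' code): a loss-free, residual-driven
# ("non-gradient") update in the spirit of the abstract, for the 1-D Poisson
# problem  -u''(x) = f(x) on (0, 1),  u(0) = u(1) = 0.
import numpy as np

rng = np.random.default_rng(0)
m = 200                                  # hidden width (illustrative)
w, b = rng.normal(size=m), rng.normal(size=m)
a = rng.normal(size=m) / np.sqrt(m)      # output weights

n = 51
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi ** 2 * np.sin(np.pi * x)       # exact solution: sin(pi * x)
env = x * (1.0 - x)                      # hard-wires the zero boundary values

def forward():
    """u_theta on the grid and its Jacobian w.r.t. theta = (a, w, b)."""
    z = np.outer(x, w) + b               # (n, m) pre-activations
    act = np.maximum(z, 0.0)             # ReLU -> localized tangent kernel
    u = env * (act @ a)
    dz = (z > 0.0).astype(float)         # ReLU derivative
    J = env[:, None] * np.concatenate(
        [act, (a * dz) * x[:, None], a * dz], axis=1)   # (n, 3m)
    return u, J

eta = 5e-4   # hand-tuned; stability depends on the kernel scale and the grid
for step in range(20001):
    u, J = forward()
    # Central-difference Laplacian: no autodiff through the network needed.
    lap = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h ** 2
    r = np.zeros(n)
    r[1:-1] = lap + f[1:-1]              # residual of the flow u_t = u_xx + f
    # Non-gradient step: theta' = eta * h * J^T r, so in function space
    # u' ~ eta * K r with K the empirical neural tangent kernel.
    d = eta * h * (J.T @ r)
    a += d[:m]; w += d[m:2 * m]; b += d[2 * m:]
    if step % 5000 == 0:
        err = np.abs(u - np.sin(np.pi * x)).max()
        print(f"step {step:6d}  |r| = {np.linalg.norm(r) * np.sqrt(h):.3e}"
              f"  max err = {err:.3e}")

Multiplying the network output by x(1-x) enforces the boundary conditions exactly, so only the interior residual drives the dynamics; this is a common simplification for a demonstration and is not necessarily how the paper handles boundaries.
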
Bibliographic Details
Published in: Journal of computational physics, 2023-01, Vol. 472, p. 111690, Article 111690
Main Authors: Peng, Yifan; Hu, Dan; Xu, Zhi-Qin John
Format: Article
Language:English
Subjects: Deep neural networks; Elliptic partial differential equations; High dimension; Non-gradient method
DOI: 10.1016/j.jcp.2022.111690
ISSN: 0021-9991
EISSN: 1090-2716
Publisher: Elsevier Inc
Source: ScienceDirect Freedom Collection
subjects Deep neural networks
Elliptic partial differential equations
High dimension
Nongradient method
title A non-gradient method for solving elliptic partial differential equations with deep neural networks
url http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T18%3A59%3A56IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-elsevier_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20non-gradient%20method%20for%20solving%20elliptic%20partial%20differential%20equations%20with%20deep%20neural%20networks&rft.jtitle=Journal%20of%20computational%20physics&rft.au=Peng,%20Yifan&rft.date=2023-01-01&rft.volume=472&rft.spage=111690&rft.pages=111690-&rft.artnum=111690&rft.issn=0021-9991&rft.eissn=1090-2716&rft_id=info:doi/10.1016/j.jcp.2022.111690&rft_dat=%3Celsevier_cross%3ES0021999122007537%3C/elsevier_cross%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-c297t-cb298458957cf2496acf0c587d7d26ea4b6c85ed50a4994ba1126b21c149f4a93%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true