Uncovering Flooding Mechanisms Across the Contiguous United States Through Interpretive Deep Learning on Representative Catchments
Published in: Water Resources Research, 2022-01, Vol. 58 (1)
Main Authors:
Format: Article
Language: English
Summary: Long short‐term memory (LSTM) networks represent one of the most prevalent deep learning (DL) architectures in current hydrological modeling, but they remain black boxes from which process understanding can hardly be obtained. This study aims to demonstrate the potential of interpretive DL in gaining scientific insights using flood prediction across the contiguous United States (CONUS) as a case study. Two interpretation methods were adopted to decipher the machine‐captured patterns and inner workings of LSTM networks. The DL interpretation by the expected gradients method revealed three distinct input‐output relationships learned by LSTM‐based runoff models in 160 individual catchments. These relationships correspond to three flood‐inducing mechanisms—snowmelt, recent rainfall, and historical rainfall—that account for 10.1%, 60.9%, and 29.0% of the 20,908 flow peaks identified from the data set, respectively. Single flooding mechanisms dominate 70.7% of the investigated catchments (11.9% snowmelt‐dominated, 34.4% recent rainfall‐dominated, and 24.4% historical rainfall‐dominated mechanisms), and the remaining 29.3% have mixed mechanisms. The spatial variability in the dominant mechanisms reflects the catchments' geographic and climatic conditions. Moreover, the additive decomposition method unveils how the LSTM network behaves differently in retaining and discarding information when emulating different types of floods. Information from inputs within previous time steps can be partially stored in the memory of LSTM networks to predict snowmelt‐induced and historical rainfall‐induced floods, while for recent rainfall‐induced floods, only recent information is retained. Overall, this study provides a new perspective for understanding hydrological processes and extremes and demonstrates the prospect of artificial intelligence‐assisted scientific discovery in the future.
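The summary above refers to the expected gradients method, an extension of integrated gradients that averages path attributions over baselines sampled from the training data. The following sketch, in PyTorch, shows how such attributions could be computed for an LSTM-based runoff model; the RunoffLSTM architecture, variable names, and hyperparameters are illustrative assumptions and are not taken from the paper.

```python
# A minimal sketch of expected-gradients attribution for an LSTM runoff model.
# The model, tensor shapes, and names below are illustrative assumptions.
import torch
import torch.nn as nn


class RunoffLSTM(nn.Module):
    """Toy LSTM mapping a meteorological forcing sequence to a flow value."""

    def __init__(self, n_features: int = 5, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # flow prediction at the last step


def expected_gradients(model, x, baselines, n_samples=50):
    """Approximate expected gradients: integrated gradients averaged over
    baselines drawn from the training data and random interpolation points."""
    attributions = torch.zeros_like(x)
    for _ in range(n_samples):
        idx = torch.randint(len(baselines), (x.shape[0],))
        b = baselines[idx]                                  # sampled baseline
        alpha = torch.rand(x.shape[0], 1, 1)                # random interpolation point
        point = (b + alpha * (x - b)).requires_grad_(True)
        grad, = torch.autograd.grad(model(point).sum(), point)
        attributions += (x - b) * grad
    return attributions / n_samples                         # (batch, time, features)


if __name__ == "__main__":
    model = RunoffLSTM()
    forcings = torch.randn(8, 365, 5)            # one year of daily forcings (toy data)
    baseline_pool = torch.randn(200, 365, 5)     # baseline pool standing in for training data
    eg = expected_gradients(model, forcings, baseline_pool)
    print(eg.shape)                              # per-day, per-variable attributions
```

Summing the per-feature attributions at each time step yields a time-lagged importance profile, which is the kind of input-output pattern the study uses to distinguish snowmelt-, recent rainfall-, and historical rainfall-induced peaks.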
Key Points
The expected gradients method was used to retrieve flooding mechanisms learned by long short‐term memory networks in 160 catchments in the United States
Snowmelt, recent rainfall, and historical rainfall induce 10.1%, 60.9%, and 29.0% of the 20,908 identified flow peaks, respectively
The additive decomposition method unveiled models' distinct behaviors in retaining and discarding information for different flood types
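The additive decomposition noted in the key point above can be illustrated with a telescoping decomposition of a single-layer LSTM prediction into per-time-step contributions, showing how information retained in the cell state at earlier steps carries through to the final flow estimate. The sketch below is a simplified variant under assumed toy dimensions, not necessarily the exact decomposition used in the study.

```python
# A minimal sketch of a telescoping additive decomposition for a single-layer LSTM.
# The toy model, dimensions, and data are assumptions, not taken from the study.
import torch
import torch.nn as nn

n_features, hidden, T = 5, 64, 365
cell = nn.LSTMCell(n_features, hidden)      # single LSTM layer, unrolled manually
head = nn.Linear(hidden, 1)                 # maps the final hidden state to flow

x = torch.randn(1, T, n_features)           # one illustrative forcing sequence
h_prev = torch.zeros(1, hidden)
cell_states = [torch.zeros(1, hidden)]      # c_0 = 0

with torch.no_grad():
    for t in range(T):
        h, c = cell(x[:, t, :], (h_prev, cell_states[-1]))
        if t == T - 1:
            # Recompute the gate pre-activations at the last step to recover o_T.
            gates = (x[:, t, :] @ cell.weight_ih.T + cell.bias_ih
                     + h_prev @ cell.weight_hh.T + cell.bias_hh)
            _, _, _, o_gate = gates.chunk(4, dim=1)   # PyTorch gate order: i, f, g, o
            o_T = torch.sigmoid(o_gate)
        cell_states.append(c)
        h_prev = h

    # Telescoping identity:
    # head(h_T) = bias + sum_t w . (o_T * (tanh(c_t) - tanh(c_{t-1})))
    contributions = [
        (head.weight * (o_T * (torch.tanh(cell_states[t])
                               - torch.tanh(cell_states[t - 1])))).sum().item()
        for t in range(1, T + 1)
    ]

    print(sum(contributions) + head.bias.item())   # reconstructed prediction
    print(head(h_prev).item())                     # direct prediction, should match
```

Because c_0 = 0 and tanh(0) = 0, the per-step terms sum exactly to the final prediction minus the output bias, so they partition the prediction across time steps. Consistent with the summary, one would expect sizable contributions from steps well before the peak for snowmelt- and historical rainfall-induced floods, and contributions concentrated in the last few steps for recent rainfall-induced floods.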
ISSN: 0043-1397, 1944-7973
DOI: 10.1029/2021WR030185