Neuroscience-Informed Interpretability of Intermediate Layers in Artificial Neural Networks
Format: Conference Proceeding
Language: English
Summary: Deep Neural Networks have been successfully applied in different areas of development and research, including Image Classification, Natural Language Processing, Time-Series Forecasting, and Bioinformatics, among others. However, their complex nature has raised questions about their internal functioning and decision-making, which is critical in many of these areas. This research seeks to explain the hidden representations of a neural network using frameworks inspired by Neuroscience, a field that attempts to understand an extremely complex neural network: the human brain. In this approach, we investigated intermediate and low-level representations in four different networks: a simple dense Feedforward Neural Network and the Convolutional Neural Networks LeNet-5, VGG-16, and ResNet50, using inputs similar to those used in Neuroscience experiments. With this framework, we could detect cells highly selective to some of the inputs, highlighting some interesting similarities between Biological and Artificial Neural Networks.
ISSN: 2161-4407
DOI: 10.1109/IJCNN60899.2024.10650110
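
The summary describes probing intermediate activations of pretrained vision networks with neuroscience-style stimuli and identifying units that respond selectively to particular inputs. Below is a minimal illustrative sketch of that general idea, not the authors' actual pipeline: it assumes PyTorch with torchvision's pretrained VGG-16, a hypothetical `stimuli/` folder of input images, and a simple (max minus mean) / (max plus mean) selectivity index, none of which are taken from the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from pathlib import Path

device = "cuda" if torch.cuda.is_available() else "cpu"

# Pretrained VGG-16, one of the four networks named in the summary.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).to(device).eval()

# Capture an intermediate representation via a forward hook
# (here the ReLU after the third conv block, features[15]).
activations = {}
def save_to(name):
    def hook(_module, _inputs, output):
        activations[name] = output.detach()
    return hook
model.features[15].register_forward_hook(save_to("block3"))

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical stimulus set standing in for the neuroscience-style inputs.
stimuli = sorted(Path("stimuli").glob("*.jpg"))

responses = []  # one (n_units,) response vector per stimulus
with torch.no_grad():
    for path in stimuli:
        image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        model(image)
        # Average over spatial positions so each channel acts as one "cell".
        responses.append(activations["block3"].mean(dim=(2, 3)).squeeze(0))
responses = torch.stack(responses)  # shape: (n_stimuli, n_units)

# Assumed selectivity index: how strongly a unit's best stimulus stands out
# from its mean response (values near 1 mean the unit fires for few stimuli).
best, _ = responses.max(dim=0)
mean = responses.mean(dim=0)
selectivity = (best - mean) / (best + mean + 1e-8)

for score, unit in zip(*torch.topk(selectivity, k=5)):
    preferred = stimuli[responses[:, unit].argmax().item()].name
    print(f"unit {unit.item()}: selectivity {score.item():.3f}, prefers {preferred}")
```

Spatially averaging each channel treats it as a single "cell", loosely mirroring how single-unit recordings summarize a neuron's response to each stimulus; other layers, aggregation schemes, or selectivity measures could be substituted in the same framework.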