Explainability of Neural Networks for Symbol Detection in Molecular Communication Channels

Bibliographic Details
Published in: IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, 2023-09, Vol. 9 (3), p. 1-1
Main Authors: Gomez, Jorge Torres, Hofmann, Pit, Fitzek, Frank H.P., Dressler, Falko
Format: Article
Language: English
Description
Summary: Recent molecular communication (MC) research suggests machine learning (ML) models for symbol detection, avoiding the unfeasibility of end-to-end channel models. However, ML models are applied as black boxes, lacking proof of correctness of the underlying neural networks (NNs) to detect incoming symbols. This paper studies approaches to the explainability of NNs for symbol detection in MC channels. Based on MC channel models and real testbed measurements, we generate synthesized data and train an NN model to detect binary transmissions in MC channels. Using the local interpretable model-agnostic explanations (LIME) method and individual conditional expectation (ICE) plots, the findings in this paper demonstrate the analogy between the trained NN and the standard peak and slope detectors.
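
The abstract describes training an NN detector on synthesized MC channel data and then explaining its decisions with LIME. The following is a minimal sketch of that workflow, not the authors' implementation: the impulse-response shape, molecule counts, time-bin features, and classifier choice (an sklearn MLPClassifier) are illustrative assumptions, and the lime Python package stands in for whichever LIME tooling the paper used.

# Hedged sketch: toy synthetic MC data, a small NN detector, and a LIME
# explanation of one received symbol. All channel parameters are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime import lime_tabular  # pip install lime

rng = np.random.default_rng(0)

# Synthetic received signal: molecule counts in a few time bins per symbol;
# bit 1 produces a higher peak than bit 0 (toy diffusion-like pulse shape).
n_samples, n_bins = 2000, 8
t = np.arange(1, n_bins + 1)
pulse = t * np.exp(-t / 2.0)            # assumed channel impulse-response shape
pulse /= pulse.max()
bits = rng.integers(0, 2, size=n_samples)
X = (50 + 100 * bits)[:, None] * pulse[None, :] + rng.normal(0, 10, (n_samples, n_bins))

# Small feed-forward NN acting as the black-box symbol detector.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, bits)

# LIME: fit a local linear surrogate around one received symbol and report
# which time bins drove the decision; bins near the pulse peak should dominate.
feature_names = [f"bin_{i}" for i in range(n_bins)]
explainer = lime_tabular.LimeTabularExplainer(
    X, mode="classification", feature_names=feature_names, class_names=["0", "1"]
)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=n_bins)
print(exp.as_list())

The per-bin LIME weights can then be compared against the samples a peak or slope detector would rely on, which mirrors the kind of analogy the abstract reports between the trained NN and those standard detectors.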
ISSN: 2372-2061
DOI: 10.1109/TMBMC.2023.3297135