Generative Information Fusion
Main Authors:
Format: Conference Proceeding
Language: English
Summary: In this work, we demonstrate the ability to exploit sensing modalities for mitigating an unrepresented modality or for potentially re-targeting resources. This is tantamount to developing proxy sensing capabilities for multi-modal learning. In classical fusion, multiple sensors are required to capture different information about the same target. Maintaining and collecting samples from multiple sensors can be financially demanding. Additionally, the effort necessary to ensure a logical mapping between the modalities may be prohibitively limiting. We examine the scenario where we have access to all modalities during training, but only a single modality at testing. In our approach, we initialize the parameters of our single-modality inference network with weights learned from the fusion of multiple modalities through both classification and GAN losses. Our experiments show that emulating a multi-modal system by perturbing a single modality with noise can help us achieve competitive results compared to using multiple modalities.
ISSN: 2379-190X
DOI: 10.1109/ICASSP39728.2021.9414284
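
The summary describes training a fusion network on all modalities, initializing a single-modality network from those fused weights, and emulating the missing modality at test time by perturbing the available one with noise. The following is a minimal PyTorch sketch of that idea under stated assumptions: two toy modalities of equal feature dimension, simple concatenation fusion trained with only a classification loss (the paper's GAN losses are omitted), and a plain Gaussian perturbation standing in for the absent modality. All module names, dimensions, and hyperparameters are illustrative, not the paper's actual architecture.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn

# Assumed toy dimensions; modalities A and B share a feature size so that a
# perturbed copy of A can be fed to B's encoder as a proxy.
FEAT_A, FEAT_B, HIDDEN, CLASSES = 64, 64, 128, 10

class Encoder(nn.Module):
    """Per-modality encoder mapping raw features to a shared hidden space."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class FusionClassifier(nn.Module):
    """Training-time network: encodes both modalities, fuses by concatenation,
    and classifies."""
    def __init__(self):
        super().__init__()
        self.enc_a = Encoder(FEAT_A)
        self.enc_b = Encoder(FEAT_B)
        self.head = nn.Linear(2 * HIDDEN, CLASSES)
    def forward(self, x_a, x_b):
        return self.head(torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=-1))

class SingleModalityClassifier(nn.Module):
    """Test-time network: sees only modality A; the second branch receives a
    noise-perturbed copy of A to emulate the missing modality."""
    def __init__(self, fusion: FusionClassifier, noise_std=0.1):
        super().__init__()
        # Initialize from the fusion-trained weights, as the summary describes.
        self.enc_a = fusion.enc_a
        self.enc_b = fusion.enc_b
        self.head = fusion.head
        self.noise_std = noise_std
    def forward(self, x_a):
        x_b_proxy = x_a + self.noise_std * torch.randn_like(x_a)
        return self.head(torch.cat([self.enc_a(x_a), self.enc_b(x_b_proxy)], dim=-1))

# Toy training loop on random data to show the wiring; a real setup would use
# paired multi-modal samples and, per the paper, additional GAN losses.
fusion = FusionClassifier()
opt = torch.optim.Adam(fusion.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):
    x_a, x_b = torch.randn(32, FEAT_A), torch.randn(32, FEAT_B)
    y = torch.randint(0, CLASSES, (32,))
    opt.zero_grad()
    loss_fn(fusion(x_a, x_b), y).backward()
    opt.step()

# At test time only modality A is available.
single = SingleModalityClassifier(fusion)
logits = single(torch.randn(8, FEAT_A))
print(logits.shape)  # torch.Size([8, 10])
```

The design choice worth noting is that the single-modality network keeps the full two-branch fusion architecture rather than discarding the second branch, so the fusion-learned weights remain usable and the noise-perturbed input simply stands in for the sensor that is no longer present.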