
A Deep-Learning Approach to a Volumetric Radio Environment Map Construction for UAV-Assisted Networks

Bibliographic Details
Published in: International Journal of Antennas and Propagation, 2024-02, Vol. 2024, p. 1-16
Main Authors: Shawel, Bethelhem S., Woldegebreal, Dereje H., Pollin, Sofie
Format: Article
Language:English
Summary: Providing global coverage for ubiquitous users is a key requirement of fifth generation (5G) and beyond wireless technologies. This can be achieved by integrating airborne networks, such as unmanned aerial vehicles (UAVs) and satellite networks, with terrestrial networks. However, deploying airborne networks in a three-dimensional (3D) or volumetric space requires a new understanding of the propagation channel and its losses in both the areal and altitude dimensions. Despite significant research on radio environment map (REM) construction, much of it has been limited to two-dimensional (2D) contexts. This neglects the altitude-related characteristics of electromagnetic wave propagation and confines REMs to 2D formats, limiting comprehensive and continuous visualization of how the propagation environment varies across spatial dimensions. This paper proposes a volumetric REM (VREM) construction approach to compute 3D propagation losses. The proposed approach addresses the limitations of existing approaches by learning the spatial correlation of wireless propagation channel characteristics and visualizing the REM in the areal and altitude dimensions using deep learning models. Specifically, the approach uses two deep learning-based models: volume-to-volume (Vol2Vol) VREM with 3D generative adversarial networks and sliced VREM with altitude-aware spider-UNets. In both cases, knowledge of the propagation environment and transmitter locations in 3D space is used to capture the spatial and altitude dependency of the propagation channel's characteristics. We developed the Addis dataset, a large REM dataset comprising 54,000 samples collected from the urban part of Addis Ababa, Ethiopia, to train the proposed models. Each sample covers a 512-meter by 512-meter area with different 3D obstacles (buildings and terrain), 15 simulated propagation loss maps at a 3-meter altitude resolution, and 80 different 3D transmitter locations. The training and testing results reveal that the constructed VREMs are statistically comparable. In particular, the Vol2Vol approach achieves a minimum L1 loss of 0.01, which further decreases to 0.0084 as the line-of-sight (LoS) probability increases to 0.95.
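
For a concrete picture of the volume-to-volume idea sketched in the abstract, the following minimal PyTorch example (an illustrative assumption, not the authors' published implementation) maps a two-channel 3D input, an obstacle-occupancy volume plus a transmitter-location volume, to a single-channel volumetric propagation-loss map and evaluates it with the L1 objective mentioned above; the layer widths and grid dimensions are placeholders, not the paper's settings.

# Hypothetical sketch, NOT the paper's code: a minimal 3D volume-to-volume
# generator mapping [obstacle occupancy, Tx location] to a propagation-loss volume.
import torch
import torch.nn as nn

class Vol2VolGenerator(nn.Module):
    """3D encoder-decoder over a voxelized environment (channels: occupancy, Tx)."""
    def __init__(self, in_ch: int = 2, base: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, base, kernel_size=4, stride=2, padding=1),      # halve each axis
            nn.ReLU(inplace=True),
            nn.Conv3d(base, base * 2, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(base * 2, base, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(base, 1, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # normalized propagation loss in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

if __name__ == "__main__":
    # Illustrative tensor shapes only: 2 input channels, 16 altitude slices, 64x64 areal grid.
    model = Vol2VolGenerator()
    env_and_tx = torch.rand(1, 2, 16, 64, 64)            # (N, C, D, H, W)
    vrem = model(env_and_tx)                              # (1, 1, 16, 64, 64) loss volume
    l1 = nn.L1Loss()(vrem, torch.rand_like(vrem))         # L1 objective, as reported in the abstract
    print(vrem.shape, float(l1))

In the paper's full setting, this generator would be trained adversarially against a 3D discriminator (the GAN part) and compared with the sliced, altitude-aware spider-UNet alternative; the sketch above only shows the volumetric input/output interface.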
ISSN: 1687-5869, 1687-5877
DOI: 10.1155/2024/9062023