
Joint-way Compression for LDPC Neural Decoding Algorithm with Tensor-Ring Decomposition

Bibliographic Details
Published in: IEEE Access, 2023-01, Vol. 11, p. 1-1
Main Authors: Liang, Yuanhui; Lam, Chan-Tong; Ng, Benjamin K.
Format: Article
Language: English
Description
Summary: In this paper, we propose low-complexity joint-way compression algorithms with Tensor-Ring (TR) decomposition and weight sharing to further lower the storage and computational complexity requirements for low-density parity-check (LDPC) neural decoding. Compared with Tensor-Train (TT) decomposition, TR decomposition is more flexible in the selection of ranks and is also conducive to the use of rank optimization algorithms. In particular, we use TR decomposition to decompose not only the weight parameter matrix of the Neural Normalized Min-Sum (NNMS)+ algorithm [16], but also the message matrix transmitted between variable nodes and check nodes. Furthermore, we combine TR decomposition with a temporal and spatial weight sharing algorithm, called joint-way compression, to further lower the complexity of the LDPC neural decoding algorithm. We show that the joint-way compression algorithm can achieve better compression efficiency than a single compression algorithm while maintaining comparable bit error rate (BER) performance. From the numerical experiments, we found that all the compression algorithms, with appropriate selection of ranks, give almost no BER performance degradation, and that the TRwm-ssNNMS+ algorithm, which combines spatial sharing with TR decomposition of both the weight and message matrices, has the best compression performance. Compared with our TT-NNMS+ algorithm proposed in [16], the number of parameters is reduced by about 70 times and the number of multiplications is reduced by about 6 times.
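
As a rough illustration of the Tensor-Ring format the abstract refers to (not the paper's TRwm-ssNNMS+ implementation), the sketch below reconstructs a small weight matrix from TR cores and compares the parameter count with the dense matrix; the tensor shape, ring rank, and function names are assumptions made for the example.

```python
# Illustrative sketch only: the Tensor-Ring (TR) format used to compress a
# weight matrix. Shapes, ranks, and names are assumed for this example.
import numpy as np

def tr_reconstruct(cores):
    """Rebuild the full tensor from TR cores.

    cores[k] has shape (r_k, n_k, r_{k+1}), with the last rank wrapping
    around to the first (the "ring"), and each element is
    T[i_0, ..., i_{d-1}] = trace(G_0[:, i_0, :] @ ... @ G_{d-1}[:, i_{d-1}, :]).
    """
    dims = [c.shape[1] for c in cores]
    full = np.zeros(dims)
    for idx in np.ndindex(*dims):
        prod = np.eye(cores[0].shape[0])
        for k, i in enumerate(idx):
            prod = prod @ cores[k][:, i, :]
        full[idx] = np.trace(prod)
    return full

# Example: a 16x16 weight matrix reshaped into a 4-way tensor (4, 4, 4, 4),
# stored as four TR cores with a uniform ring rank r = 2 (assumed values).
r, dims = 2, (4, 4, 4, 4)
cores = [0.1 * np.random.randn(r, n, r) for n in dims]

W_full = tr_reconstruct(cores).reshape(16, 16)

full_params = int(np.prod(dims))            # 256 entries in the dense matrix
tr_params = sum(c.size for c in cores)      # 4 * (2 * 4 * 2) = 64 core entries
print(f"dense: {full_params}, TR: {tr_params}, ratio: {full_params / tr_params:.1f}x")
```

With a uniform rank r, storage drops from the product of the mode sizes to the sum of the core sizes (r^2 times each mode size), which is the general mechanism behind the compression gains the abstract reports for the weight and message matrices.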
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2023.3252907