
Frequency Disentangled Features in Neural Image Compression

Bibliographic Details
Published in: arXiv.org, 2023-08
Main Authors: Zafari, Ali; Khoshkhahtinat, Atefeh; Mehta, Piyush; Mohammad Saeed Ebrahimi Saadabadi; Akyash, Mohammad; Nasrabadi, Nasser M
Format: Article
Language: English
Subjects: Autoregressive models; Codec; Computer networks; Decoding; Entropy; Image compression; Neural networks
Online Access: https://arxiv.org/abs/2308.02620
DOI: 10.48550/arXiv.2308.02620
EISSN: 2331-8422
Source: Publicly Available Content Database

Description

The design of a neural image compression network is governed by how well the entropy model matches the true distribution of the latent code. Beyond model capacity, this match is indirectly affected by how closely the relaxed quantization used during training approximates the actual hard quantization. Optimizing the parameters of a rate-distortion variational autoencoder (R-D VAE) is governed by this approximate quantization scheme. In this paper, we propose a feature-level frequency disentanglement that helps the relaxed scalar quantization achieve lower bit rates by guiding the high-entropy latent features to carry most of the low-frequency texture of the image. In addition, to strengthen the decorrelating power of the transformer-based analysis/synthesis transforms, an augmented self-attention score calculation based on the Hadamard product is used during both encoding and decoding. Channel-wise autoregressive entropy modeling takes advantage of the proposed frequency separation, since it inherently directs the highly informative low-frequency channels to the first chunks and conditions the later chunks on them. The proposed network outperforms not only hand-engineered codecs but also neural-network-based codecs built on computation-heavy spatially autoregressive entropy models.
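
The abstract contrasts the hard quantization applied at test time with the relaxed quantization used during training, but does not say which relaxation the authors adopt. As a minimal sketch, the two most common choices in neural image compression are additive uniform noise (Ballé et al., 2017) and straight-through rounding; the function below illustrates both, with the mode names being our own labels, not the paper's.

```python
import torch

def relaxed_quantize(y: torch.Tensor, training: bool, mode: str = "noise") -> torch.Tensor:
    """Common relaxations of hard scalar quantization (illustrative sketch;
    the paper's exact scheme is not specified in this abstract)."""
    if not training:
        return torch.round(y)  # actual hard quantization at inference time
    if mode == "noise":
        # Additive uniform noise U(-0.5, 0.5): a differentiable proxy for
        # rounding whose density interpolates the discrete latent distribution.
        return y + torch.empty_like(y).uniform_(-0.5, 0.5)
    # Straight-through estimator: round on the forward pass, identity gradient.
    return y + (torch.round(y) - y).detach()
```

The closer this training-time proxy tracks true rounding, the better the entropy model's learned distribution matches the latents it must actually code, which is the gap the paper's frequency disentanglement is designed to narrow.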
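The "augmented self-attention score calculation based on the Hadamard product" suggests scoring query-key pairs through an element-wise (Hadamard) product rather than, or in addition to, the usual dot product. The abstract gives no details, so the module below is only one plausible reading: every query-key Hadamard product is mapped to a scalar score by a learned projection. All names and the single-head structure are assumptions.

```python
import torch
import torch.nn as nn

class HadamardAttention(nn.Module):
    """Self-attention whose scores come from a Hadamard (element-wise)
    product of queries and keys -- a hypothetical reading of the abstract,
    not the authors' verified formulation."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1)  # maps each q_i * k_j vector to a scalar

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, D)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Hadamard product of every query with every key: (B, N, N, D).
        # Note this costs O(N^2 * D) memory, unlike the (B, N, N) dot-product map.
        had = q.unsqueeze(2) * k.unsqueeze(1)
        attn = self.score(had).squeeze(-1).softmax(dim=-1)  # (B, N, N)
        return attn @ v
```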
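Channel-wise autoregressive entropy modeling (in the style of Minnen & Singh, 2020) splits the latent channels into chunks and predicts each chunk's distribution parameters conditioned on the already-decoded chunks; the frequency separation described above steers the most informative low-frequency channels into the first chunks. Below is a minimal sketch under that reading, with a conditional Gaussian per chunk; the module names, chunk counts, and 1x1-conv parameter networks are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelARModel(nn.Module):
    """Channel-wise autoregressive entropy model sketch: chunk i's Gaussian
    parameters are predicted from hyperprior context plus chunks < i."""
    def __init__(self, channels: int = 320, num_chunks: int = 5, ctx_dim: int = 192):
        super().__init__()
        self.chunk = channels // num_chunks
        # One small parameter network per chunk; input grows as chunks accumulate.
        self.params = nn.ModuleList(
            nn.Conv2d(ctx_dim + i * self.chunk, 2 * self.chunk, kernel_size=1)
            for i in range(num_chunks)
        )

    def forward(self, y: torch.Tensor, ctx: torch.Tensor):
        """y: latent (B, C, H, W); ctx: hyperprior features (B, ctx_dim, H, W).
        Returns quantized latents and per-chunk (mean, scale) for entropy coding."""
        decoded, out = [], []
        for i, net in enumerate(self.params):
            inp = torch.cat([ctx] + decoded, dim=1)
            mean, scale = net(inp).chunk(2, dim=1)
            y_i = y[:, i * self.chunk : (i + 1) * self.chunk]
            y_hat = torch.round(y_i - mean) + mean  # quantize the residual around the mean
            decoded.append(y_hat)
            out.append((mean, F.softplus(scale)))
        return torch.cat(decoded, dim=1), out
```

At decode time the same loop runs chunk by chunk, so every later chunk is conditioned on the already-decoded low-frequency chunks, which is how the proposed frequency separation pays off in the entropy model.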