PNeRV: A Polynomial Neural Representation for Videos

Extracting Implicit Neural Representations (INRs) on video data poses unique challenges due to the additional temporal dimension. In the context of videos, INRs have predominantly relied on a frame-only parameterization, which sacrifices the spatiotemporal continuity observed in pixel-level (spatial) representations. To mitigate this, we introduce Polynomial Neural Representation for Videos (PNeRV), a parameter-wise efficient, patch-wise INR for videos that preserves spatiotemporal continuity. PNeRV leverages the modeling capabilities of Polynomial Neural Networks to perform the modulation of a continuous spatial (patch) signal with a continuous time (frame) signal. We further propose a custom Hierarchical Patch-wise Spatial Sampling Scheme that ensures spatial continuity while retaining parameter efficiency. We also employ a carefully designed Positional Embedding methodology to further enhance PNeRV's performance. Our extensive experimentation demonstrates that PNeRV outperforms the baselines in conventional Implicit Neural Representation tasks like compression along with downstream applications that require spatiotemporal continuity in the underlying representation. PNeRV not only addresses the challenges posed by video data in the realm of INRs but also opens new avenues for advanced video processing and analysis.
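
The abstract describes PNeRV's core operation as modulating a continuous spatial (patch) signal with a continuous time (frame) signal through a Polynomial Neural Network. The sketch below illustrates that kind of multiplicative (Hadamard-product) modulation in PyTorch. It is a minimal sketch under assumptions, not the authors' implementation: the class name PolynomialModulation, the layer sizes, and the skip-connected polynomial expansion are hypothetical choices made for exposition.

```python
# Minimal sketch (assumptions, not the paper's released code): modulating a
# spatial (patch) embedding with a temporal (frame) embedding via Hadamard
# products, the multiplicative interaction used by Polynomial Neural Networks.
import torch
import torch.nn as nn


class PolynomialModulation(nn.Module):
    """Hypothetical block: patch features modulated by time features."""

    def __init__(self, patch_dim: int, time_dim: int, hidden: int = 256, order: int = 2):
        super().__init__()
        self.patch_proj = nn.Linear(patch_dim, hidden)            # spatial branch
        self.time_projs = nn.ModuleList(
            [nn.Linear(time_dim, hidden) for _ in range(order)]   # one projection per polynomial order
        )
        self.out = nn.Linear(hidden, hidden)

    def forward(self, patch_feat: torch.Tensor, time_feat: torch.Tensor) -> torch.Tensor:
        # Start from the spatial signal and repeatedly modulate it with the
        # temporal signal; each pass adds a higher-order multiplicative term,
        # while the additive skip keeps the lower-order terms.
        z = self.patch_proj(patch_feat)
        for proj in self.time_projs:
            z = z * proj(time_feat) + z
        return self.out(z)


# Hypothetical usage: a flattened 16x16 RGB patch and a 64-dim frame embedding.
patch = torch.randn(8, 16 * 16 * 3)
t_emb = torch.randn(8, 64)
block = PolynomialModulation(patch_dim=16 * 16 * 3, time_dim=64)
print(block(patch, t_emb).shape)  # torch.Size([8, 256])
```

In the actual patch-wise INR described by the abstract, the patch coordinates and frame index would first pass through the proposed Positional Embedding before reaching a block of this kind; that step is omitted here.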

Bibliographic Details
Published in: arXiv.org, 2024-06
Main Authors: Gupta, Sonam; Snehal Singh Tomar; Chrysos, Grigorios G; Das, Sukhendu; Rajagopalan, A N
Format: Article
Language: English
Subjects: Image processing; Neural networks; Parameterization; Parameters; Polynomials; Representations; Video data
Identifier: EISSN 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Online Access: Get full text