UnrealFall: Overcoming Data Scarcity through Generative Models
Main Authors: | Mulero-Perez, David; Benavent-Lledo, Manuel; Ortiz-Perez, David; Garcia-Rodriguez, Jose |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | 3D scenes reconstruction; Action video dataset; Fall detection; Neural networks; Synthetic data generation; Digital twins |
Online Access: | Request full text |
container_start_page | 1 |
---|---|
container_end_page | 8 |
container_title | 2024 International Joint Conference on Neural Networks (IJCNN) |
creator | Mulero-Perez, David; Benavent-Lledo, Manuel; Ortiz-Perez, David; Garcia-Rodriguez, Jose |
description | Humans perform a variety of actions, some of which are infrequent yet crucial to capture for data collection. Synthetic generation techniques are highly effective in these situations, augmenting the data available for such rare actions. In response to this need, we present UnrealFall, a robust framework built within Unreal Engine 5 for generating human action video data in hyper-realistic virtual scenes. It addresses the scarcity and limited diversity of existing datasets for actions such as falls by combining synthetic motion generation through text-guided generative models, Gaussian Splatting technology, and MetaHumans. We demonstrate the framework's usefulness by producing a synthetic video dataset of elderly individuals falling in various settings, and the dataset's value by successfully using it to train a VideoMAE model in conjunction with UCF101 and several fall-specific datasets. This versatility in generating data across a spectrum of actions and environments positions our framework as a valuable tool for broader applications such as digital twin creation and dataset augmentation. The code and data are available for research at the project website: darkviid.github.io/UnrealFall/. |
doi_str_mv | 10.1109/IJCNN60899.2024.10651116 |
format | conference_proceeding |
publisher | IEEE |
publication_date | 2024-06-30 |
eisbn | 9798350359312 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2161-4407 |
ispartof | 2024 International Joint Conference on Neural Networks (IJCNN), 2024, p.1-8 |
issn | 2161-4407 |
language | eng |
recordid | cdi_ieee_primary_10651116 |
source | IEEE Xplore All Conference Series |
subjects | 3D scenes reconstruction; Action video dataset; Analytical models; Data models; Digital twins; Fall detection; Neural networks; Synthetic data generation; Three-dimensional displays; Training; Video sequences |
title | UnrealFall: Overcoming Data Scarcity through Generative Models |
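The abstract describes training a VideoMAE model on the synthetic UnrealFall dataset in conjunction with UCF101 and fall-specific datasets. As a rough illustration of how such a mixed training manifest might be assembled, here is a minimal stdlib-only sketch; the `Clip` type, the `build_manifest` helper, the file names, and the mixing ratio are all illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: combining synthetic fall clips with real-action
# clips into one training manifest at a target synthetic fraction.
from dataclasses import dataclass
from typing import List


@dataclass
class Clip:
    path: str
    label: str
    source: str  # "synthetic" or "real"


def build_manifest(synthetic: List[Clip], real: List[Clip],
                   synthetic_ratio: float = 0.5) -> List[Clip]:
    """Return real clips plus enough synthetic clips so that roughly
    `synthetic_ratio` of the resulting manifest is synthetic data."""
    assert 0.0 <= synthetic_ratio < 1.0, "ratio must be in [0, 1)"
    n_real = len(real)
    # Solve n_syn / (n_syn + n_real) = ratio for n_syn, capped by supply.
    n_syn = min(len(synthetic),
                int(n_real * synthetic_ratio / (1.0 - synthetic_ratio)))
    return real + synthetic[:n_syn]
```

In practice one would shuffle the combined list and feed it to a video-classification training loop; the point of the sketch is only the ratio bookkeeping when blending synthetic and real sources.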