Deep-Learning-Based Motion Capture Technology in Film and Television Animation Production
Published in: | Security and communication networks, 2022-02, Vol. 2022, p. 1-9 |
---|---|
Main Author: | Wei, Yating |
Format: | Article |
Language: | English |
Subjects: | Acoustics; Animation; Deep learning; Human motion; Motion capture; Motion pictures; Neural networks; Pose estimation |
container_end_page | 9 |
container_issue | |
container_start_page | 1 |
container_title | Security and communication networks |
container_volume | 2022 |
creator | Wei, Yating |
description | With the popularity of King Kong, Pirates of the Caribbean 2, Avatar, and other films, the virtual characters in these works have become popular and well loved by audiences. The creation of these virtual characters differs from traditional 3D animation in that it is driven by real actors' movements and expressions. This paper surveys several mainstream motion capture systems and explains in detail how motion capture technology is applied in film and animation. Current motion capture technology relies mainly on complex body markers and sensors, which are costly, while deep-learning-based human pose estimation is emerging as a new option. However, most existing methods handle only a single person or a single image, and multiperson estimation in video remains challenging. Experimental results show that a simple human motion capture system can be achieved with this approach. |
doi_str_mv | 10.1155/2022/6040371 |
format | article |
publisher | London: Hindawi |
published | 2022-02-11 |
contributor | Chen, Chin-Ling |
orcid | 0000-0001-6359-7738 |
rights | Copyright © 2022 Yating Wei. Open access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0). |
fulltext | fulltext |
identifier | ISSN: 1939-0114 |
ispartof | Security and communication networks, 2022-02, Vol.2022, p.1-9 |
issn | 1939-0114 (print); 1939-0122 (electronic) |
language | eng |
recordid | cdi_proquest_journals_2630681951 |
source | Wiley Online Library Open Access; ProQuest - Publicly Available Content Database |
subjects | Acoustics; Animation; Deep learning; Human motion; Motion capture; Motion pictures; Neural networks; Pose estimation |
title | Deep-Learning-Based Motion Capture Technology in Film and Television Animation Production |