Brain PET motion correction using 3D face-shape model: the first clinical study
Published in: Annals of nuclear medicine, 2022-10, Vol. 36 (10), p. 904-912
Format: Article
Language: English
Authors: Iwao, Yuma; Akamatsu, Go; Tashima, Hideaki; Takahashi, Miwako; Yamaya, Taiga
Abstract

Objective
Head motion during a brain PET scan degrades the images, but head fixation or external-marker attachment becomes burdensome for patients. Therefore, we have developed a motion correction method that uses a 3D face-shape model generated by a range-sensing camera (Kinect) and by CT images. We have successfully corrected the PET images of a moving mannequin-head phantom containing radioactivity. Here, we conducted a volunteer study to verify the effectiveness of our method on clinical data.
Methods
Eight healthy male volunteers aged 22–45 years underwent a 10-min head-fixed PET scan as the standard of truth in this study, started 45 min after 18F-fluorodeoxyglucose (285 ± 23 MBq) injection and followed by a 15-min head-moving PET scan with the developed Kinect-based motion-tracking system. First, selecting a motionless period of the head-moving PET scan provided a reference PET image. Second, CT images obtained separately on the same day were registered to the reference PET image and used to create a 3D face-shape model, to which the Kinect-based 3D face-shape model was then matched. The matching parameters served as the spatial calibration between the Kinect and the PET system. These calibration parameters, together with Kinect tracking of the 3D face shape, comprised our motion correction method. The head-moving PET images with motion correction were compared with the head-fixed PET images visually and by standard uptake value ratios (SUVRs) in seven volume-of-interest regions. To confirm the spatial calibration accuracy, a test–retest experiment was performed by repeating the head-moving PET with motion correction twice, with the volunteer's pose and the sensor's position differing between runs.
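The transform chain described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the calibration transform `t_cal` (from matching the CT-derived face model to the Kinect face model) and the tracked head pose `t_head` use made-up values, and real head poses would come from the Kinect tracking stream.

```python
import numpy as np

def rigid_transform(rotation_deg_z, translation_xyz):
    """Build a 4x4 homogeneous rigid transform (rotation about z + translation)."""
    a = np.deg2rad(rotation_deg_z)
    t = np.eye(4)
    t[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    t[:3, 3] = translation_xyz
    return t

def motion_in_pet_space(t_cal, t_kinect_frame):
    """Express a Kinect-tracked head pose in PET coordinates by conjugating
    with the Kinect-to-PET calibration: T_pet = T_cal @ T_kinect @ inv(T_cal)."""
    return t_cal @ t_kinect_frame @ np.linalg.inv(t_cal)

t_cal = rigid_transform(10.0, [5.0, 0.0, 20.0])   # assumed calibration result
t_head = rigid_transform(3.0, [1.0, -2.0, 0.0])   # assumed tracked head motion
t_pet = motion_in_pet_space(t_cal, t_head)
# Correcting a PET frame then amounts to applying inv(t_pet) to its data.
```

Note the conjugation: the calibration is applied once per session, while a new `t_head` arrives for every tracked frame.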
Results
No difference was identified either visually or statistically in SUVRs between the head-moving PET images with motion correction and the head-fixed PET images. One of the small nuclei, the inferior colliculus, was identified in the head-fixed PET images and in the head-moving PET images with motion correction, but not in those without motion correction. In the test–retest experiment, the SUVRs were well correlated (coefficient of determination, r² = 0.995).
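For context, the reported test–retest agreement is a coefficient of determination over paired SUVR measurements. A minimal way to compute r² from such pairs is sketched below; the SUVR values are placeholders, not the study's data.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of a linear least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

suvr_test = np.array([0.85, 0.92, 1.00, 1.10, 1.21, 1.32, 1.45])
suvr_retest = np.array([0.86, 0.90, 1.02, 1.09, 1.23, 1.30, 1.46])
print(round(r_squared(suvr_test, suvr_retest), 3))
```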
Conclusion
Our motion correction method provided good accuracy on the volunteer data, which suggests it is usable in clinical settings.
DOI: 10.1007/s12149-022-01774-0
Publisher: Springer Nature Singapore
PMID: 35854178
Rights: © The Author(s) 2022, published under a CC BY 4.0 license
ISSN: 0914-7187
EISSN: 1864-6433
Subjects: Brain; Calibration; Computed tomography; Head; Head movement; Imaging; Inferior colliculus; Mannequins; Mathematical models; Medical imaging; Medicine; Medicine & Public Health; Model matching; Nuclear Medicine; Original Article; Parameters; Position sensing; Positron emission; Positron emission tomography; Radioactivity; Radiology; Spatial calibration; Three dimensional models; Three dimensional motion; Tomography; Tracking systems