
Brain PET motion correction using 3D face-shape model: the first clinical study

Bibliographic Details
Published in: Annals of nuclear medicine 2022-10, Vol.36 (10), p.904-912
Main Authors: Iwao, Yuma, Akamatsu, Go, Tashima, Hideaki, Takahashi, Miwako, Yamaya, Taiga
Format: Article
Language: English
Description
Summary:
Objective: Head motions during brain PET scans degrade brain images, but head fixation or external-marker attachment is burdensome for patients. Therefore, we have developed a motion correction method that uses a 3D face-shape model generated by a range-sensing camera (Kinect) and by CT images. We have successfully corrected the PET images of a moving mannequin-head phantom containing radioactivity. Here, we conducted a volunteer study to verify the effectiveness of our method for clinical data.
Methods: Eight healthy male volunteers aged 22–45 years underwent a 10-min head-fixed PET scan as the standard of truth in this study, started 45 min after injection of 18F-fluorodeoxyglucose (285 ± 23 MBq) and followed by a 15-min head-moving PET scan with the developed Kinect-based motion-tracking system. First, a motionless period of the head-moving PET scan was selected to provide a reference PET image. Second, CT images obtained separately on the same day were registered to the reference PET image and used to create a 3D face-shape model, to which the Kinect-based 3D face-shape model was then matched. The resulting matching parameter served as the spatial calibration between the Kinect and the PET system. This calibration parameter, together with Kinect motion tracking of the 3D face shape, comprised our motion correction method. The head-moving PET images with motion correction were compared with the head-fixed PET images visually and by standardized uptake value ratios (SUVRs) in seven volume-of-interest regions. To confirm the spatial calibration accuracy, a test–retest experiment was performed by repeating the head-moving PET scan with motion correction twice, with different volunteer poses and sensor positions.
Results: No difference was identified, either visually or statistically, in SUVRs between the head-moving PET images with motion correction and the head-fixed PET images. One of the small nuclei, the inferior colliculus, was identified in the head-fixed PET images and in the head-moving PET images with motion correction, but not in those without motion correction. In the test–retest experiment, the SUVRs were well correlated (coefficient of determination, r² = 0.995).
Conclusion: Our motion correction method provided good accuracy for the volunteer data, suggesting it is usable in clinical settings.
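The core geometric idea of the described method — a one-time calibration transform linking the Kinect and PET coordinate frames, through which Kinect-tracked head poses are expressed in the PET frame — can be sketched with homogeneous 4×4 matrices. This is a minimal illustrative sketch, not the authors' implementation; the transform values and the helper `rigid_transform` are invented for demonstration only.

```python
import numpy as np

def rigid_transform(rotation_deg, translation):
    """Build a 4x4 homogeneous rigid transform from a rotation about the
    z-axis (degrees) and a 3D translation. Purely illustrative."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]
    T[:3, 3] = translation
    return T

# T_calib: Kinect frame -> PET frame. In the paper this comes from matching
# the Kinect 3D face-shape model to the CT-derived model registered to the
# reference PET image; here it is an arbitrary example transform.
T_calib = rigid_transform(10.0, [5.0, 0.0, 2.0])

# M_kinect: head motion at some time t, as tracked in the Kinect frame
# (relative to the reference pose). Arbitrary example values.
M_kinect = rigid_transform(3.0, [1.0, -0.5, 0.0])

# Express the same motion in the PET frame by conjugation with the
# calibration transform, then invert it to realign PET data acquired at
# time t back to the reference pose.
M_pet = T_calib @ M_kinect @ np.linalg.inv(T_calib)
correction = np.linalg.inv(M_pet)
```

Conjugation is what the spatial calibration buys: the same physical head motion, measured in Kinect coordinates, becomes a rigid transform acting in PET image coordinates.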
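The evaluation metrics mentioned in the summary — SUVRs over volumes of interest and the test–retest coefficient of determination — are standard quantities and can be sketched as follows. The toy volume, the two masks, and the seven paired SUVR values are fabricated for illustration; the abstract does not specify the VOIs, the reference region, or the measured values (apart from r² = 0.995).

```python
import numpy as np

def suvr(suv_image, target_mask, reference_mask):
    """Standardized uptake value ratio: mean SUV in a target VOI divided
    by mean SUV in a reference VOI."""
    return suv_image[target_mask].mean() / suv_image[reference_mask].mean()

# Toy 3D SUV volume with two hypothetical VOIs.
suv = np.full((4, 4, 4), 1.0)
target = np.zeros_like(suv, dtype=bool)
target[0, 0, :2] = True
reference = np.zeros_like(suv, dtype=bool)
reference[3, 3, 2:] = True
suv[target] = 2.0

print(suvr(suv, target, reference))  # → 2.0

# Test-retest agreement: coefficient of determination (r^2) between SUVRs
# from two repeated scans. These seven paired values are invented.
scan1 = np.array([1.10, 0.95, 1.30, 1.05, 0.90, 1.20, 1.00])
scan2 = np.array([1.12, 0.94, 1.28, 1.06, 0.91, 1.22, 0.99])
r2 = np.corrcoef(scan1, scan2)[0, 1] ** 2
```

Ratio measures like SUVR cancel global scaling (injected dose, body weight), which is why they are a common endpoint for comparing repeated or corrected scans.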
ISSN: 0914-7187
1864-6433
DOI: 10.1007/s12149-022-01774-0