MCSTransWnet: A new deep learning process for postoperative corneal topography prediction based on raw multimodal data from the Pentacam HR system
Published in: Medicine in Novel Technology and Devices, 2024-03, Vol. 21, p. 100267, Article 100267
Format: Article
Language: English
Summary: This work presents a new multimodal-fusion generative adversarial network (GAN) model, Multiple Conditions Transform W-net (MCSTransWnet), which primarily uses femtosecond laser arcuate keratotomy surgical parameters and preoperative corneal topography to predict postoperative corneal topography in astigmatism-corrected patients. MCSTransWnet comprises a generator and a discriminator, and the generator is composed of two sub-generators. The first sub-generator extracts features using a U-net model, a vision transformer (ViT), and a multi-parameter conditional module branch; the second sub-generator uses a U-net network for further image denoising. The discriminator is the pixel discriminator from Pix2Pix. Most current GAN models are convolutional neural networks, whose feature extraction is local, which makes it difficult to capture relationships among global features; we therefore added a vision transformer branch to the model to extract global features. Because transformers are normally difficult to train and prone to introducing image noise and losing geometric information, we adopted a fusion scheme combining the standard U-net with the transformer network as the generator, so that global features, local features, and rich image details are obtained simultaneously. Our experimental results clearly demonstrate that MCSTransWnet successfully predicts postoperative corneal topographies (structural similarity = 0.765, peak signal-to-noise ratio = 16.012, Fréchet inception distance = 9.264). Using this technique to obtain the rough shape of the postoperative corneal topography in advance gives clinicians an additional reference, guides changes to surgical planning, and improves the success rate of surgery.
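The summary reports a peak signal-to-noise ratio (PSNR) of 16.012 for the predicted topography maps. As a minimal illustration of what that metric measures (and not code from the paper), the sketch below computes PSNR between two flattened pixel sequences with intensities in [0, 1]; the `psnr` helper and the toy data are hypothetical.

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio: 10*log10(max_val^2 / MSE)."""
    if len(pred) != len(target):
        raise ValueError("images must have the same number of pixels")
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 4-pixel "topography map" and a slightly noisy prediction
target = [0.2, 0.4, 0.6, 0.8]
pred = [0.25, 0.35, 0.65, 0.75]
print(round(psnr(pred, target), 3))  # → 26.021
```

Higher PSNR means the prediction is closer to the ground-truth postoperative map; a value around 16 dB, as reported, indicates a rough but recognizable reconstruction rather than a pixel-perfect one.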
Highlights:
• Predicting postoperative corneal topography using the generative adversarial network Pix2Pix.
• Demonstrating the feasibility of generative adversarial networks for predicting postoperative corneal topography.
• Predicting postoperative corneal topography using multimodal fusion techniques.
• Using multimodal fusion techniques to fuse surgical text data with image data.
ISSN: 2590-0935
DOI: 10.1016/j.medntd.2023.100267