LiteGPT: Large Vision-Language Model for Joint Chest X-ray Localization and Classification Task

Bibliographic Details
Published in: arXiv.org 2024-07
Main Authors: Le-Duc, Khai; Zhang, Ryan; Nguyen, Ngoc Son; Pham, Tan-Hanh; Dao, Anh; Ngo, Ba Hung; Nguyen, Anh Totti; Hy, Truong-Son
Format: Article
Language: English
Description
Summary: Vision-language models have been extensively explored across a wide range of tasks and achieve satisfactory performance; however, their application in medical imaging remains underexplored. In this work, we propose a unified framework, LiteGPT, for medical imaging. We leverage multiple pre-trained visual encoders to enrich the visual information and enhance the performance of vision-language models. To the best of our knowledge, this is the first study to apply vision-language models to the novel task of joint localization and classification in medical images. In addition, we provide the first baselines for disease localization on chest X-rays. Finally, we set new state-of-the-art performance on the image classification task of the well-benchmarked VinDr-CXR dataset. All code and models are publicly available online: https://github.com/leduckhai/LiteGPT
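The abstract's central design point is combining multiple pre-trained visual encoders to enrich the image representation before it reaches the language model. As a minimal sketch of one common way to do this (the encoder stand-ins, feature dimensions, and fusion-by-concatenation strategy below are assumptions for illustration, not the paper's confirmed architecture), per-encoder features can be concatenated and linearly projected into a shared embedding space:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two pre-trained visual encoders (hypothetical dimensions):
# a real system would run frozen ViT/CNN backbones over the pixel array.
def encoder_a(image):
    """Hypothetical ViT-style encoder producing a 768-d global feature."""
    return rng.standard_normal(768)

def encoder_b(image):
    """Hypothetical CNN-style encoder producing a 512-d global feature."""
    return rng.standard_normal(512)

def fuse_features(image, proj):
    """Concatenate the features from both encoders, then project the
    combined vector into the language model's embedding space."""
    feats = np.concatenate([encoder_a(image), encoder_b(image)])  # shape (1280,)
    return proj @ feats  # shape (embed_dim,)

embed_dim = 256  # assumed size of the language-model embedding space
proj = rng.standard_normal((embed_dim, 768 + 512)) / np.sqrt(768 + 512)

image = None  # placeholder; the stand-in encoders ignore their input
fused = fuse_features(image, proj)
print(fused.shape)  # (256,)
```

Concatenation followed by a learned projection is only one fusion choice; cross-attention between encoder outputs is another common option, and the paper itself should be consulted for the exact mechanism used in LiteGPT.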
ISSN:2331-8422