Advancements and challenges in CT image segmentation for COVID-19 diagnosis through augmented and virtual reality: a systematic review and future perspectives

Kahina Amara, Oussama Kerdjidj, Mohamed Amine Guerroudji, Nadia Zenati, Naeem Ramzan*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    Abstract

    This article presents a systematic exploration of the synergy between artificial intelligence (AI) and the immersive technologies augmented reality (AR) and virtual reality (VR) in diagnosing coronavirus disease 2019 (COVID-19) from computed tomography (CT) medical imaging. Prior reviews have tackled COVID-19 CT diagnosis separately, focusing extensively on image segmentation and classification tasks and often encompassing both CT and X-ray images. However, an integrated consideration of AI, immersive technologies, and CT image segmentation for COVID-19 diagnosis has been notably absent from the existing literature. To bridge this gap, our analysis concentrates on methods merging CT image segmentation with AR and VR for COVID-19 diagnostics, leveraging prominent search engines and databases: Google, Google Scholar, IEEE Xplore, Web of Science, PubMed, ScienceDirect, and Scopus. Our in-depth examination furnished comprehensive insights from each selected study, revealing the promising potential of AI and immersive technologies to expedite COVID-19 diagnosis through process automation. The development of precise and rapid diagnostic models holds considerable promise for real-time clinical application, although further research is imperative. This review categorises the literature on CT image segmentation employing AR and VR technologies, laying a solid foundation for future research at this promising intersection. The authors conducted an extensive analysis focusing on methodologies that combine deep learning (DL)-based CT image segmentation with AR and VR for COVID-19 diagnostics. The study outcomes highlight the transformative potential of AR and VR in enhancing healthcare delivery during the COVID-19 pandemic, particularly in aiding diagnosis and treatment planning. Furthermore, the widespread adoption of AI and DL models has proven instrumental in detecting COVID-19 infections from chest CT images, offering automated diagnostic solutions that streamline workflows, reduce patient contact, and improve efficiency for medical professionals. The integration of VR, AR, and AI presents a promising avenue for advancing diagnostic precision and patient treatment strategies. However, the use of VR and AR in healthcare raises significant privacy and security concerns due to the handling of sensitive patient data, underscoring the need for robust regulatory frameworks to govern their application. Lightweight DL models facilitate efficient on-device processing, significantly enhancing their utility, scalability, and real-time deployment in resource-constrained environments. Together, these findings demonstrate the significant role of emerging technologies in addressing pandemic challenges, while highlighting the importance of addressing ethical and regulatory considerations.
    Original language: English
    Article number: 101374
    Number of pages: 13
    Journal: Journal of Radiation Research and Applied Sciences
    Volume: 18
    Issue number: 2
    Early online date: 22 Mar 2025
    DOIs
    Publication status: E-pub ahead of print - 22 Mar 2025

    Keywords

    • computed tomography (CT)
    • coronavirus disease 2019 (COVID-19)
    • virtual reality
    • augmented reality
    • image segmentation
    • aid-diagnosis
    • systematic review
    • challenges
    • perspectives
