Advanced machine learning methods for oncological image analysis
Author: Astaraki, Mehdi
Date: 2022-09-30
Location: T2 (Jacobssonsalen), Hälsovägen 11C, floor 5, Technology and Health, KTH Flemingsberg, Sweden.
Time: 13.00
Department: Inst för onkologi-patologi / Dept of Oncology-Pathology
Abstract
Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. The applications of medical imaging have therefore become increasingly crucial in clinical oncology routines, providing screening, diagnosis, treatment monitoring, and non- or minimally invasive evaluation of disease prognosis. This essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on the one hand and the challenge of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow.
This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, this thesis consists of six studies, the first two of which focus on introducing novel methods for tumor segmentation. The last four studies aim to develop quantitative imaging biomarkers for cancer diagnosis and prognosis.
The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy. The proposed pipeline was tested on several comprehensive datasets, and the quantitative results show the superiority of the proposed prior-aware DL framework over the state of the art. Study II aims to address a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-and-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy.
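The core idea behind inpainting-based unsupervised segmentation can be illustrated with a minimal sketch: a model trained only on healthy anatomy synthesizes a pseudo-healthy reconstruction of the input, and voxels with large residuals between input and reconstruction are flagged as tumor. The names below are hypothetical, and the trained inpainting network is replaced by a trivial stand-in that returns the healthy background; this is a sketch of the principle, not the thesis's actual pipeline.

```python
import numpy as np

def residual_segmentation(image, reconstruct, threshold=0.5):
    """Segment anomalies as large residuals between the input and a
    pseudo-healthy reconstruction of it."""
    residual = np.abs(image - reconstruct(image))
    return residual > threshold

# Toy example: a flat "lung" background with a bright blob as the tumor.
image = np.zeros((64, 64))
image[28:36, 28:36] = 1.0

# Stand-in for a trained inpainting network: here it simply returns the
# healthy background, which a real model would have to synthesize.
fake_inpainter = lambda img: np.zeros_like(img)

mask = residual_segmentation(image, fake_inpainter, threshold=0.5)
```

Because only healthy images are needed for training the reconstruction model, no tumor labels are required, which is what makes the approach unsupervised.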
Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end deep learning (DL) models, and deep-feature-based radiomics analyses on the same dataset. The numerical analyses show the potential of fusing learned deep features with radiomic features to boost classification power.
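The fusion step examined in Study IV amounts to combining the two feature families into a single vector per nodule before classification. The sketch below is a hedged illustration with hypothetical names (the actual feature extraction is not shown): each block is z-scored so that neither family dominates by scale, then the blocks are concatenated.

```python
import numpy as np

def fuse_features(radiomic, deep):
    """Z-score each feature block column-wise, then concatenate the
    blocks into one fused vector per nodule."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(radiomic), zscore(deep)], axis=1)

# Hypothetical shapes: 10 nodules, 100 handcrafted radiomic features
# (shape, intensity, texture) and a 512-dimensional CNN embedding.
rng = np.random.default_rng(0)
radiomic = rng.normal(size=(10, 100))
deep = rng.normal(size=(10, 512))
fused = fuse_features(radiomic, deep)
```

The fused vectors would then feed an ordinary classifier; the per-block normalization is the design choice that keeps the high-dimensional deep embedding from drowning out the radiomic features.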
Study V focuses on the early assessment of lung tumor response to treatment by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the patients' overall survival status two years after the final treatment session. The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics features, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-and-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical-based features showed greater predictive power in inter-dataset analyses.
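One way to picture the intra-tumor partitioning idea underlying Study V: divide the tumor volume into concentric subregions and summarize the uptake in each, yielding a small feature vector whose change across longitudinal scans can be read physiologically (e.g. core versus rim response). The following is only a hedged sketch with hypothetical names and a simple quantile-based shell partition, not the thesis's actual feature definitions.

```python
import numpy as np

def shell_features(volume, mask, n_shells=3):
    """Partition the masked tumor into concentric shells by distance
    from its centroid and return the mean intensity per shell."""
    coords = np.argwhere(mask)
    centroid = coords.mean(axis=0)
    dist = np.linalg.norm(coords - centroid, axis=1)
    vals = volume[mask]  # same C-order traversal as np.argwhere
    edges = np.quantile(dist, np.linspace(0.0, 1.0, n_shells + 1))
    feats = []
    for i in range(n_shells):
        upper = dist <= edges[i + 1] if i == n_shells - 1 else dist < edges[i + 1]
        in_shell = (dist >= edges[i]) & upper
        feats.append(vals[in_shell].mean())
    return np.array(feats)

# Synthetic check: intensity equals distance from the center, so the
# mean "uptake" per shell should increase from core to rim.
grid = np.indices((21, 21, 21))
dist_map = np.sqrt(((grid - 10) ** 2).sum(axis=0))
tumor_mask = dist_map <= 8
feats = shell_features(dist_map, tumor_mask, n_shells=3)
```

Computing such a vector at baseline and at follow-up, and tracking its change, gives a compact longitudinal descriptor of where in the tumor the response occurs.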
In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine-learning techniques in the methods developed for the quantitative assessment of tumor characteristics and contribute to the essential procedures of cancer diagnosis and prognosis.
List of papers:
I. Prior-aware Autoencoders for Lung Pathology Segmentation. M. Astaraki, Ö. Smedby, C. Wang. Med Image Anal. 2022 Aug;80:102491.
II. Unsupervised Tumor Segmentation. M. Astaraki, F. De Benetti, Y. Yeganeh, I. Toma-Dasu, Ö. Smedby, C. Wang, N. Navab, T. Wendler. [Manuscript]
III. Benign-malignant pulmonary nodule classification in low-dose CT with convolutional features. M. Astaraki, Y. Zakko, I. Toma-Dasu, Ö. Smedby, C. Wang. Phys Med. 2021 Mar;83:146-153.
IV. A Comparative Study of Radiomics and Deep-Learning Based Methods for Pulmonary Nodule Malignancy Prediction in Low Dose CT Images. M. Astaraki, G. Yang, Y. Zakko, I. Toma-Dasu, Ö. Smedby, C. Wang. Front Oncol. 2021 Dec 17;11:737368.
V. Early survival prediction in non-small cell lung cancer from PET/CT images using an intra-tumor partitioning method. M. Astaraki, C. Wang, G. Buizza, I. Toma-Dasu, M. Lazzeroni, Ö. Smedby. Phys Med. 2019 Apr;60:58-65.
VI. Spherical Convolutional Neural Networks for Survival Rate Prediction in Cancer Patients. F. Sinzinger, M. Astaraki, Ö. Smedby, R. Moreno. Front Oncol. 2022 Apr 27;12:870457.
Institution:
- Karolinska Institutet
- Kungliga Tekniska Högskolan
Supervisor: Wang, Chunliang
Co-supervisor: Toma-Dasu, Iuliana; Smedby, Örjan
Issue date: 2022-09-06
Publication year: 2022
ISBN: 978-91-8040-313-9