Self-learning Shape Recognition in Medical Images
A massive amount of medical image data, e.g. from Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), is generated in hospitals every day. Segmentation of biological structures is very useful for surgery planning and treatment, as an ideal delineation of the target object's outline offers a precise location and enables quantitative analysis for further clinical diagnoses, such as the identification of tumorous tissue. However, the large dimensionality and complex patterns of medical image data make manual annotation extremely time-consuming and error-prone. Accordingly, automatic biomedical image segmentation has become a crucial prerequisite in practice and has been a critical research issue for decades. Major challenges remain in medical image segmentation, such as low intensity contrast to surrounding tissues and complex shape geometry. Moreover, the limited amount of labeled training data gives rise to further difficulties. Numerous approaches have been proposed to mitigate these challenges, ranging from low-level image processing to supervised machine learning techniques. It is worth mentioning that segmentation approaches based on statistical shape models (SSMs) have achieved remarkable success in a wide range of applications. SSMs are mostly trained with self-learning approaches to parameterize the significant variability of biological shapes; subsequently, the learned shape prior is used during image adaptation to guide the shape fitting. Despite this success, SSM-based segmentation approaches suffer from the limitation that the power of an SSM rises and falls with the quality of the training data and the geometrical complexity of the target shape. Furthermore, existing image adaptation may not be efficient in cases where the target object has a small and distorted structure.
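To make the SSM idea concrete, the following sketch builds a minimal point-distribution model with PCA over synthetic, pre-aligned landmark vectors. The data, dimensions, and 95% variance cutoff are illustrative assumptions, not the models developed in this thesis.

```python
# Illustrative sketch: a linear point-distribution SSM via PCA.
import numpy as np

rng = np.random.default_rng(0)
# Assume 20 training shapes, each described by 50 aligned 3-D landmarks,
# flattened to 150-D vectors (alignment, e.g. Procrustes, assumed done).
X = rng.normal(size=(20, 150))

mean_shape = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)

# Keep the modes explaining ~95% of the shape variance.
var = s**2 / (X.shape[0] - 1)
k = int(np.searchsorted(np.cumsum(var) / var.sum(), 0.95)) + 1
modes = Vt[:k]             # principal modes of shape variation
stddev = np.sqrt(var[:k])  # per-mode standard deviations

def synthesize(b):
    """Generate a plausible shape from mode weights b (length k)."""
    return mean_shape + modes.T @ (b * stddev)

# E.g. deform the mean shape by +2 sigma along the first mode:
new_shape = synthesize(np.array([2.0] + [0.0] * (k - 1)))
```

Fitting such a model to an image then amounts to searching for mode weights whose synthesized shape best matches the image evidence.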
Therefore, this thesis aims to derive SSMs that are robust to training data corruption and able to represent complex patterns, and to address the problem of poor image adaptation in order to realize challenging object segmentation. As training data is often corrupted by factors such as inherent noise/artifacts and non-ideal delineations, much of this thesis is devoted to developing SSMs that are robust to data corruption. First, early attempts proposing an imputation method and weighted Robust Principal Component Analysis (WRPCA) address arbitrary corruptions under the assumption of a linear distribution. Nevertheless, deriving a quality model remains demanding, as the shape variance of biological structures may not simply follow a Gaussian distribution. To combat this, a kernelized RPCA is proposed to cope with outliers in a nonlinear distribution. The idea is to perform low-rank modeling on the kernel matrix to achieve nonlinear dimensionality reduction and, thereby, outlier recovery. To increase generality and feasibility, this thesis furthermore presents a general nonlinear data compression technique, Robust Kernel PCA (RKPCA), with the aim of constructing a low-rank nonlinear subspace free of outliers. In the evaluation, the proposed RKPCA delivers high performance not only in creating SSMs but also in outlier recovery. Experiments are conducted on two representative datasets: a set of 30 public CT kidneys and a set of 49 internal MRI ankle bones. Embedded into an existing segmentation framework, the SSM built with the proposed RKPCA outperforms state-of-the-art modeling techniques in terms of model quality and segmentation accuracy. Since SSMs fail to adapt in cases where the target structure occupies a relatively small or distorted area, deep neural networks are subsequently considered to remedy this shortcoming.
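The low-rank-plus-sparse decomposition underlying RPCA-style outlier handling can be sketched with classic principal component pursuit via an inexact augmented Lagrangian scheme. This plain, linear variant is illustrative only; it stands in for neither the weighted nor the kernelized algorithms developed in this thesis.

```python
# Minimal sketch of Robust PCA (principal component pursuit, inexact ALM).
import numpy as np

def soft_threshold(A, tau):
    """Entrywise shrinkage operator."""
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def rpca(M, lam=None, iters=200):
    """Split M into L (low rank) + S (sparse outliers)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = (m * n) / (4.0 * np.abs(M).sum())
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)  # Lagrange multipliers
    for _ in range(iters):
        # Singular-value thresholding updates the low-rank component.
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * soft_threshold(s, 1.0 / mu)) @ Vt
        # Entrywise soft-thresholding updates the sparse component.
        S = soft_threshold(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)
        mu *= 1.05  # gradually tighten the constraint M = L + S
    return L, S

# Rank-5 data corrupted by one gross outlier:
rng = np.random.default_rng(1)
M = rng.normal(size=(40, 5)) @ rng.normal(size=(5, 40))
M[3, 7] += 50.0
L, S = rpca(M)  # S should isolate the corrupted entry
```

The kernelized variants of the thesis apply the same low-rank principle in a kernel-induced feature space rather than on the raw data matrix.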
However, redundant background content in a 3D volume may significantly reduce the accuracy of deep neural networks. Aiming at challenging structures that occupy relatively small areas and exhibit large variance, a novel unified segmentation framework is proposed that incorporates an SSM on top of a deep neural network for detailed refinement. The motivation is to aggregate both spatial and intensity-based features from a limited amount of data. Globally optimized via Bayesian inference, the segmentation is driven by a dynamically weighted Gaussian Mixture Model that integrates the probability scores from the deep neural network and the shape prior from the SSM. On the public NIH dataset of CT pancreas scans, the proposed segmentation framework achieves the best average Dice Similarity Coefficient compared to state-of-the-art approaches. The majority of this work is based on public tools: the Medical Imaging Interaction Toolkit (MITK) for SSM investigation and analysis, and the public library Keras for deep neural network development. All medical image datasets used in this thesis have been validated by clinical experts.
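The combination of a network's voxelwise probabilities with a shape prior, and the Dice Similarity Coefficient used for evaluation, can be illustrated schematically. The toy masks and the fixed fusion weight below are assumptions standing in for the thesis's dynamically weighted GMM and Bayesian optimization.

```python
# Schematic fusion of a CNN probability map with a shape-prior map,
# evaluated with the Dice Similarity Coefficient.
import numpy as np

def dice(a, b):
    """Dice Similarity Coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

rng = np.random.default_rng(2)
gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 8:24] = True                     # toy ground-truth mask
# Noisy stand-in for a network's probability output:
cnn_prob = np.clip(gt + rng.normal(0.0, 0.3, gt.shape), 0.0, 1.0)
shape_prob = np.zeros_like(cnn_prob)
shape_prob[10:26, 8:24] = 0.9             # slightly shifted shape prior

w = 0.7  # hypothetical fixed weight; the thesis weights these dynamically
fused = w * cnn_prob + (1.0 - w) * shape_prob
seg = fused > 0.5
score = dice(seg, gt)
```

In the actual framework the relative trust in network evidence versus shape prior is not fixed but inferred per case during optimization.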
Singapore, TU, Diss., 2019
Fellner, Dieter W.