SORA

Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels

Soltaninejad, M; Yang, G; Lambrou, T; Allinson, N; Jones, TL; Barrick, TR; Howe, FA; Ye, X (2018) Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels. Comput Methods Programs Biomed, 157. pp. 69-84. ISSN 1872-7565 https://doi.org/10.1016/j.cmpb.2018.01.003
SGUL Authors: Barrick, Thomas Richard; Howe, Franklyn Arron; Yang, Guang


Abstract

BACKGROUND: Accurate segmentation of brain tumours in magnetic resonance images (MRI) is a difficult task because of the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images.

METHODS: We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features is extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistics. These features are fed into a random forest (RF) classifier that assigns each supervoxel to tumour core, oedema or healthy brain tissue.

RESULTS: The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal patient images and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity for tumour (tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against the ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively.

CONCLUSION: The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can substantially increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
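
The pipeline described under METHODS (multichannel supervoxels, a Gabor-based texton descriptor, first-order statistics, and a random forest) can be sketched roughly as below. This is an illustrative reconstruction, not the authors' code: the library choices (scikit-image SLIC, scikit-learn KMeans and RandomForestClassifier), the slice-wise 2D application of the Gabor bank, the use of a single modality for the texton map, and all parameter values are assumptions made for the example.

# Minimal sketch of a supervoxel / texton / random-forest pipeline in the spirit of
# the paper, NOT the authors' implementation. Parameters and library choices are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import gabor_kernel
from skimage.segmentation import slic
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def gabor_bank(frequencies=(0.1, 0.2, 0.3), n_orientations=6):
    """Real parts of a small 2D Gabor filter bank (several sizes x orientations)."""
    return [np.real(gabor_kernel(f, theta=t))
            for f in frequencies
            for t in np.linspace(0, np.pi, n_orientations, endpoint=False)]

def texton_map(volume, kernels, n_textons=16):
    """Assign every voxel a texton id by k-means clustering of its filter responses.
    Filters are applied slice-wise (2D) here purely to keep the sketch short."""
    responses = np.stack(
        [np.stack([convolve(sl, k) for sl in volume], axis=0) for k in kernels],
        axis=-1)                                    # shape (Z, Y, X, n_filters)
    flat = responses.reshape(-1, responses.shape[-1])
    km = KMeans(n_clusters=n_textons, n_init=4, random_state=0).fit(flat)
    return km.labels_.reshape(volume.shape), n_textons

def supervoxel_features(modalities, labels, texton_ids, n_textons):
    """Per-supervoxel feature vector: first-order statistics of every modality
    plus a normalised texton histogram."""
    feats = []
    for sv in np.unique(labels):
        mask = labels == sv
        stats = [f(m[mask]) for m in modalities for f in (np.mean, np.std)]
        hist = np.bincount(texton_ids[mask], minlength=n_textons)
        feats.append(stats + list(hist / hist.sum()))
    return np.asarray(feats)

def train(modalities, annotation):
    """modalities: co-registered 3D arrays (e.g. FLAIR, T1c, p map, q map);
    annotation: 3D integer label map (0 healthy, 1 oedema, 2 tumour core)."""
    stack = np.stack(modalities, axis=-1)
    labels = slic(stack, n_segments=5000, compactness=0.1, channel_axis=-1)
    tex, n_tex = texton_map(modalities[0], gabor_bank())
    X = supervoxel_features(modalities, labels, tex, n_tex)
    # majority vote of voxel annotations gives each supervoxel its training label
    y = [np.bincount(annotation[labels == sv]).argmax() for sv in np.unique(labels)]
    return RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)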
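
For reference, the two headline metrics quoted under RESULTS can be computed as follows. This is a minimal sketch assuming binary voxel masks of equal shape; the Dice score is the standard overlap coefficient and the balanced error rate (BER) is taken here as the mean of the false-negative and false-positive rates.

# Dice and BER for binary segmentation masks (illustrative helper functions).
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient: 2|A and B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def balanced_error_rate(pred, truth):
    """BER taken as the mean of the false-negative and false-positive rates."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    fnr = np.logical_and(~pred, truth).sum() / truth.sum()     # missed tumour voxels
    fpr = np.logical_and(pred, ~truth).sum() / (~truth).sum()  # false-positive voxels
    return 0.5 * (fnr + fpr)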

Item Type: Article
Additional Information: © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
Keywords: Brain tumour segmentation, Diffusion tensor imaging, Multimodal MRI, Random forests, Supervoxel, Textons, Medical Informatics, 0903 Biomedical Engineering
SGUL Research Institute / Research Centre: Academic Structure > Molecular and Clinical Sciences Research Institute (MCS)
Academic Structure > Molecular and Clinical Sciences Research Institute (MCS) > Neuroscience (INCCNS)
Journal or Publication Title: Comput Methods Programs Biomed
ISSN: 1872-7565
Language: eng
Dates:
  Published: April 2018
  Published Online: 11 January 2018
  Accepted: 9 January 2018
Publisher License: Creative Commons: Attribution-Noncommercial-No Derivative Works 4.0
Projects:
  600929 - Seventh Framework Programme (http://dx.doi.org/10.13039/501100004963)
  EP/L023679/1 - Engineering and Physical Sciences Research Council (http://dx.doi.org/10.13039/501100000266)
  LSHC-CT-2004-503094 - Seventh Framework Programme (http://dx.doi.org/10.13039/501100004963)
PubMed ID: 29477436
Web of Science ID: WOS:000425897400008
URI: https://openaccess.sgul.ac.uk/id/eprint/109661
Publisher's version: https://doi.org/10.1016/j.cmpb.2018.01.003
