# Deep Feature Transfer Learning in Combination with Traditional Features Predicts Survival Among Patients with Lung Adenocarcinoma

research-article


### Abstract

Lung cancer is the most common cause of cancer-related deaths in the USA. It can be detected and diagnosed using computed tomography images. For an automated classifier, identifying predictive features from medical images is a key concern. Deep feature extraction using pretrained convolutional neural networks (CNNs) has recently been successfully applied in some image domains. Here, we applied a pretrained CNN to extract deep features from 40 computed tomography images, with contrast, of non-small cell adenocarcinoma lung cancer, and combined deep features with traditional image features and trained classifiers to predict short- and long-term survivors. We experimented with several pretrained CNNs and several feature selection strategies. The best previously reported accuracy when using traditional quantitative features was 77.5% (area under the curve [AUC], 0.712), which was achieved by a decision tree classifier. The best reported accuracy from transfer learning and deep features was 77.5% (AUC, 0.713) using a decision tree classifier. When extracted deep neural network features were combined with traditional quantitative features, we obtained an accuracy of 90% (AUC, 0.935) with the 5 best post-rectified linear unit features extracted from a vgg-f pretrained CNN and the 5 best traditional features. The best results were achieved with the symmetric uncertainty feature ranking algorithm followed by a random forests classifier.
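The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' code: the feature matrices are simulated placeholders (the paper extracted post-ReLU activations from a pretrained vgg-f CNN plus traditional quantitative image features), and mutual information is used as a stand-in ranker because scikit-learn does not ship a symmetric uncertainty scorer.

```python
# Hedged sketch of the described pipeline: rank deep and traditional
# features separately, keep the 5 best of each, concatenate, and train
# a random-forest classifier to separate short- vs long-term survivors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 40                       # the study used 40 contrast CT scans

# Placeholder features; in the paper these come from a pretrained CNN
# (post-ReLU layer activations) and from traditional quantitative features.
deep_feats = rng.normal(size=(n_patients, 100))
trad_feats = rng.normal(size=(n_patients, 20))
y = rng.integers(0, 2, size=n_patients)   # 0 = short-, 1 = long-term survivor

def top_k(X, y, k=5):
    """Keep the k features with the highest relevance score
    (mutual information here; the paper used symmetric uncertainty)."""
    scores = mutual_info_classif(X, y, random_state=0)
    return X[:, np.argsort(scores)[::-1][:k]]

# 5 best deep + 5 best traditional features, as in the reported best result.
X = np.hstack([top_k(deep_feats, y), top_k(trad_feats, y)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

With real radiomics and CNN features, `acc` would correspond to the cross-validated accuracy the paper reports (90% for the combined feature set); on this synthetic data the number is meaningless and serves only to show the pipeline runs end to end.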

### Most cited references (39)


### Learning hierarchical features for scene labeling.

(2013)
Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on the Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a $(320\times 240)$ image labeling in less than a second, including feature extraction.

### Naive (Bayes) at forty: The independence assumption in information retrieval

(1998)

### Estimating attributes: Analysis and extensions of RELIEF

(1994)

### Author and article information

###### Journal
Tomography: A Journal for Imaging Research
ISSN: 2379-1381
December 2016; Volume 2, Issue 4: 388-395
###### Affiliations
[1 ]Department of Computer Science and Engineering, University of South Florida, Tampa, Florida
[2 ]Department of Cancer Imaging and Metabolism, H. Lee Moffitt Cancer Center & Research Institute, Tampa, Florida
[3 ]Department of Cancer Epidemiology, H. Lee Moffitt Cancer Center & Research Institute, Tampa, Florida
###### Author notes
Corresponding Author: Dmitry B. Goldgof, PhD, Department of Computer Science and Engineering, USF College of Engineering, Building-II 4220 E. Fowler Ave, Tampa, FL 33620; goldgof@mail.usf.edu
###### Article
NIHMSID: NIHMS839236
DOI: 10.18383/j.tom.2016.00211
PMCID: PMC5218828

This is an open access article under the CC BY 4.0 license ( https://creativecommons.org/licenses/by/4.0/).

###### Categories
Article