Multi-channel Deep Transfer Learning for Nuclei Segmentation in Glioblastoma Cell Tissue Images

Thomas Wollmann 1, Julia Ivanova 1, Manuel Gunkel 2, Inn Chung 3, Holger Erfle 2, Karsten Rippe 3, Karl Rohr 1

1 University of Heidelberg, BioQuant, IPMB, and DKFZ Heidelberg, Dept. Bioinformatics and Functional Genomics, Biomedical Computer Vision Group
2 High-Content Analysis of the Cell (HiCell) and Advanced Biological Screening Facility, BioQuant, University of Heidelberg, Germany
3 Division of Chromatin Networks, DKFZ and BioQuant, Heidelberg, Germany

thomas.wollmann@bioquant.uni-heidelberg.de

Abstract. Segmentation and quantification of cell nuclei is an important task in tissue microscopy image analysis. We introduce a deep learning method leveraging atrous spatial pyramid pooling for cell segmentation. We also present two different approaches for transfer learning using datasets with a different number of channels. A quantitative comparison with previous methods was performed on challenging glioblastoma cell tissue images. We found that our transfer learning method improves the segmentation result.

1 Introduction

Segmentation of cell nuclei is a frequent and important task in quantitative microscopy image analysis and for extracting phenotypes. In this work, we consider the segmentation of nuclei from 3D tissue microscopy images of glioblastoma cells. This data is very challenging due to strong intensity variation, cell clustering, poor edge information, missing object borders, strong shape variation, and low signal-to-noise ratio (Fig. 1).

In previous work, several methods for cell segmentation were introduced (e.g., [1, 2]). Recently, deep learning methods achieved very good results [2]. When only a small dataset is available for training, it is common in video image analysis of natural scenes to pretrain a deep neural network on a large dataset like ImageNet and fine-tune the network on the considered dataset [3]. However, images of natural scenes are usually color images represented by three channels, whereas microscopy images generally have a varying number of channels (and often more than three channels). In a convolutional neural network, the first layer uses a filter for each input channel to extract the corresponding feature maps. Hence, the number of channels is fixed in the network according to the considered data, and a pretrained network cannot directly be transferred to data with a different number of channels.
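To make this channel mismatch concrete, the following minimal sketch (illustrative PyTorch code, not from the paper; layer sizes are arbitrary) shows that the first-layer weights of a network trained on one-channel images cannot simply be loaded into the same layer defined for four-channel images:

```python
import torch
import torch.nn as nn

# First layer of a network trained on single-channel (e.g. DAPI-only) images:
# weight shape is (out_channels, in_channels, k, k) = (64, 1, 3, 3).
conv_1ch = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3, padding=1)

# The same layer defined for four-channel images expects weights of shape (64, 4, 3, 3).
conv_4ch = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=3, padding=1)

try:
    # Loading the single-channel weights directly fails because of the shape mismatch.
    conv_4ch.load_state_dict(conv_1ch.state_dict())
except RuntimeError as err:
    print("Direct transfer is not possible:", err)
```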

In this work, we introduce a novel deep neural network based on atrous spatial pyramid pooling (ASPP) for cell segmentation. We also present two transfer learning approaches which use only one channel for training and perform fine-tuning on more channels. In our application, we trained the neural network on a one-channel dataset and transferred it to a dataset with four channels. We applied our method to segment cell nuclei in challenging glioblastoma cell tissue images and performed a quantitative comparison with previous methods.

2 Methods

Our proposed deep learning method combines a U-Net [2] with batch normalization [4], residual connections [5], and atrous spatial pyramid pooling (ASPP) [6]. ASPP has the advantage that large context information can be captured at multiple image scales. We modified ASPP by using dilations of 1, 2, and 4 as well as global average pooling (pooling kernel size equal to the feature map size) to capture information from the whole image. After the ASPP block we employ Gaussian dropout (p = 0.5).

For our deep learning model we investigated PReLU [7] activation functions. Using a U-Net in conjunction with a PReLU activation function, we observed that the first layers mostly favour negative activations. However, PReLU increases the computation time. Therefore, we used PReLU only in the first layer to make use of negatively activated features while saving computation time.

Our network was trained using cross-validation and early stopping with the Adam optimizer, a learning rate of l_init = 0.001, β1 = 0.9, and β2 = 0.999. The dataset was always split into 50% training, 25% validation, and 25% testing data. We augmented the dataset using random flipping, rotation, cropping (200 × 200 pixels), color shift, and elastic deformations.

Fig. 1. DAPI channel of an original tissue image of glioblastoma cells and the ground truth annotation. (a) Original image. (b) Manual annotation.
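As a rough illustration of the modified ASPP block and the first-layer PReLU described in this section, here is a minimal sketch (PyTorch assumed; channel sizes are illustrative, and Dropout2d stands in for Gaussian dropout, which is not built into PyTorch). Batch normalization, residual connections, and the surrounding U-Net are omitted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPPBlock(nn.Module):
    """Sketch of the modified ASPP block: parallel 3x3 convolutions with dilation
    rates 1, 2, and 4 plus a global-average-pooling branch, concatenated and
    fused by a 1x1 convolution."""

    def __init__(self, in_ch: int, branch_ch: int = 64):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # Global average pooling branch (pooling kernel equal to the feature map size).
        self.global_branch = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.fuse = nn.Conv2d(4 * branch_ch, branch_ch, kernel_size=1)
        self.dropout = nn.Dropout2d(p=0.5)  # stand-in for Gaussian dropout (p = 0.5)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [F.relu(b(x)) for b in self.branches]
        g = F.adaptive_avg_pool2d(x, 1)              # pool over the whole feature map
        g = F.relu(self.global_branch(g))
        feats.append(F.interpolate(g, size=(h, w), mode="nearest"))
        return self.dropout(F.relu(self.fuse(torch.cat(feats, dim=1))))

# PReLU is used only in the first layer; all later layers use cheaper activations.
first_layer = nn.Sequential(nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.PReLU())
```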

For transfer learning we employed two approaches. In the first approach, we use the network trained on one channel (Fig. 2(a)) for all channels by altering the first layer of the network and using the same trained convolutional filters for all channels (Fig. 2(b)). This is motivated by the assumption that the trained filters are generic for different types of images and can therefore be applied to other channels with different stainings. In the second approach, we use the trained convolutional filters for the corresponding channel in the new dataset and initialize the filters for the other channels by MSRA initialization [7] (Fig. 2(c)). With this approach we keep the pretrained filters for one channel and train all other filters from scratch.

Fig. 2. Different approaches for transfer learning. (a) Original filter trained on one channel. (b) Same trained filter used for all channels. (c) Individually trained filter for one channel.
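The two first-layer adaptations can be sketched as follows (an illustrative PyTorch snippet, not the authors' implementation; it assumes the pretrained channel corresponds to channel 0 of the new dataset):

```python
import torch
import torch.nn as nn

def adapt_first_layer(conv_1ch: nn.Conv2d, n_channels: int, mode: str) -> nn.Conv2d:
    """Build a first layer for n_channels inputs from a layer trained on one channel.

    mode == "replicate":  reuse the filters trained on one channel for every channel.
    mode == "individual": keep the trained filters for the matching channel (channel 0
    here) and initialize the filters for the other channels with He/MSRA initialization.
    """
    conv_new = nn.Conv2d(n_channels, conv_1ch.out_channels,
                         kernel_size=conv_1ch.kernel_size,
                         padding=conv_1ch.padding)
    with torch.no_grad():
        w1 = conv_1ch.weight                        # shape (out, 1, k, k)
        if mode == "replicate":
            conv_new.weight.copy_(w1.repeat(1, n_channels, 1, 1))
        elif mode == "individual":
            nn.init.kaiming_normal_(conv_new.weight, nonlinearity="relu")  # MSRA init
            conv_new.weight[:, :1].copy_(w1)        # keep pretrained filters for channel 0
        conv_new.bias.copy_(conv_1ch.bias)
    return conv_new

# Usage: transfer a first layer trained on the one-channel DAPI dataset to 4-channel data.
pretrained_first = nn.Conv2d(1, 64, kernel_size=3, padding=1)   # assume trained weights
first_same  = adapt_first_layer(pretrained_first, 4, mode="replicate")
first_indiv = adapt_first_layer(pretrained_first, 4, mode="individual")
```

Note that replicating the filters across channels sums their responses over all input channels, so fine-tuning on the multi-channel data, as described above, remains necessary in either case.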

2.1 Performance measures

We used four performance measures for quantitative evaluation: Object IOU, IOU, Dice, and Warping Error. The object-based intersection over union (Object IOU) quantifies the agreement between the segmentation result and the ground truth for each object. We matched a ground truth object to a segmented object if the normalized overlap is more than 50%. In addition to this object-based IOU, we also determined a pixel-based IOU. The Dice coefficient is defined as twice the number of true positive pixels divided by the sum of the numbers of pixels in the ground truth and the segmentation. The Warping Error [8] is the minimum mean square error between pixels of the segmentation and pixels of the topology-preserving warped ground truth. We calculated all performance measures for each image and averaged over the whole dataset.

3 Experimental results

We applied our method to tissue microscopy images of glioblastoma cells. Segmentation of cell nuclei is important for subsequent analysis of telomeres and for patient stratification [9]. The dataset consists of five 3D images and was acquired by a Leica TCS SP5 point scanning confocal microscope with a 63× objective lens and a voxel size of 100 × 100 × 250 nm. Four color channels were imaged sequentially: PML antibody stain (Alexa 647), FISH CY3 telomere probe, FAM labeled CENP-B PNA probe, and DAPI nuclei stain. 45 axial sections were acquired for each 3D stack.

The deep learning models with transfer learning were trained on a second dataset with glioblastoma cells containing 50 images stained with DAPI nuclei stain, before training on the considered first dataset. However, the second dataset has only one channel and consists of maximum intensity projection (MIP) images. Therefore, standard transfer learning is not applicable and we need other approaches such as the two transfer learning strategies described in Section 2 above.

For a quantitative comparison, we also applied thresholding in combination with mean shift clustering. The 3D images were preprocessed using 3D Gaussian filtering (σ = 2 pixels). We used an empirically determined threshold of 160. In addition, we used an approach based on Gaussian filtering, mean shift clustering, and 3D fast-marching level sets [10]. The segmentation results were post-processed using hole filling.

All segmentation methods were evaluated on the 3D images from the first dataset, which were not used for training. We compared the segmentation results for five 3D images each containing 65 sections (in total 325 2D images per channel were used). Ground truth segmentations for all images were determined by manual annotation.

Table 1. Performance of different segmentation methods. Bold and underline highlight the best result; bold indicates the second best result.

Method                                    Object IOU   IOU      Dice     Warp Error [10^-4]
Clustering & Thresholding                 0.5782       0.6520   0.7884   0.093
Fast-Marching Level Set                   0.5065       0.5682   0.7194   0.149
U-Net                                     0.6666       0.5774   0.7168   0.011
Proposed NN                               0.7814       0.6154   0.7581   0.040
Proposed NN (transfer, same filter)       0.7913       0.5221   0.6709   0.045
Proposed NN (transfer, indiv. filters)    0.7981       0.6426   0.7775   0.030

Table 1 shows the results for all methods for the different evaluation metrics. It turns out that the proposed neural network combined with transferring individual filters performs best for object-based IOU and second best for IOU, Dice, and Warping Error. Segmentation results for an example image are provided in Fig. 3. It can be seen that the proposed network performs best. In addition, transferring individual filters improves cell separation. The high object-based IOU and low Warping Error indicate that the proposed model is more suited to correctly merge and split objects.
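For reference, the pixel-based Dice and IOU as well as the object-based IOU from Section 2.1 can be computed roughly as in the following NumPy sketch. The exact matching criterion is not fully specified above; the sketch assumes the overlap is normalized by the ground-truth object area, and the Warping Error is omitted:

```python
import numpy as np

def dice_and_iou(seg: np.ndarray, gt: np.ndarray):
    """Pixel-based Dice coefficient and IOU for binary masks (sketch only)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    tp = np.logical_and(seg, gt).sum()
    dice = 2.0 * tp / (seg.sum() + gt.sum())
    iou = tp / np.logical_or(seg, gt).sum()
    return dice, iou

def object_iou(seg_labels: np.ndarray, gt_labels: np.ndarray, min_overlap=0.5):
    """Object-based IOU on labeled images (background label 0): match each ground
    truth object to its best-overlapping segmented object if the normalized overlap
    exceeds 50%, then average the per-object IOUs (unmatched objects count as 0)."""
    ious = []
    for g in np.unique(gt_labels)[1:]:                 # skip background label 0
        gt_obj = gt_labels == g
        overlap_labels, counts = np.unique(seg_labels[gt_obj], return_counts=True)
        candidates = overlap_labels[overlap_labels > 0]
        if candidates.size == 0:
            ious.append(0.0)
            continue
        s = candidates[np.argmax(counts[overlap_labels > 0])]
        seg_obj = seg_labels == s
        inter = np.logical_and(gt_obj, seg_obj).sum()
        if inter / gt_obj.sum() > min_overlap:
            ious.append(inter / np.logical_or(gt_obj, seg_obj).sum())
        else:
            ious.append(0.0)
    return float(np.mean(ious)) if ious else 0.0
```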

4 Conclusion

We presented a deep neural network based on ASPP for cell segmentation combined with two approaches for transfer learning to transfer trained networks from one-channel data to multi-channel data. Based on a quantitative comparison using glioblastoma cell tissue images we showed that transfer learning improves the performance. Our novel deep neural network in conjunction with transfer learning and individual filters performed best for Object IOU and second best for Dice and Warping Error.

Acknowledgement. Support of the BMBF within the projects CancerTelSys (e:med) and de.nbi (HD-HuB) is gratefully acknowledged.

Fig. 3. Example tissue microscopy image of glioblastoma cells, ground truth, and segmentation results of different methods. (a) Original image. (b) Ground truth. (c) Clustering & Thresholding. (d) Fast-Marching Level Set. (e) U-Net. (f) Proposed NN. (g) Proposed NN (transfer, same filter). (h) Proposed NN (transfer, individual filters).

References

1. Dima AA, Elliott JT, Filliben JJ, et al. Comparison of segmentation algorithms for fluorescence microscopy images of cells. Cytometry Part A. 2011;79(7):545-559.
2. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. Proc MICCAI. 2015; p. 234-241.
3. Huh M, Agrawal P, Efros AA. What makes ImageNet good for transfer learning? arXiv:1608.08614; 2016.
4. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. Proc ICML. 2015; p. 448-456.
5. He K, Zhang X, Ren S, et al. Deep residual learning for image recognition. Proc CVPR. 2016; p. 770-778.
6. Chen LC, Papandreou G, Kokkinos I, et al. DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv:1606.00915; 2016.
7. He K, Zhang X, Ren S, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. Proc ICCV. 2015; p. 1026-1034.
8. Jain V, Bollmann B, Richardson M, et al. Boundary learning by optimization with topological constraints. Proc CVPR. 2010; p. 2488-2495.
9. Osterwald S, Deeg KI, Chung I, et al. PML induces compaction, TRF2 depletion and DNA damage signaling at telomeres and promotes their alternative lengthening. J Cell Sci. 2015;128(10):1887-1900.
10. Sethian JA. Level Set Methods and Fast Marching Methods. Vol. 3. Cambridge University Press; 1999.