STAR: Spatio-Temporal Architecture for super-resolution in Low-Dose CT Perfusion

Yao Xiao 1, Ajay Gupta 2, Pina C. Sanelli 3, Ruogu Fang 1
1 University of Florida, Gainesville, FL
2 Weill Cornell Medical College, New York, NY
3 Northwell Health, Manhasset, NY
Corresponding author.

Abstract. Computed tomography perfusion (CTP) is one of the most widely used imaging modalities for cerebrovascular disease diagnosis and treatment, especially in emergency situations. While cerebral CTP can quantify blood flow dynamics by continuously scanning a focused region of the brain, the associated excessive radiation increases a patient's risk of developing cancer. Decreasing the temporal sampling frequency is one promising direction for reducing the radiation dose required by CTP. In this paper, we propose STAR, an end-to-end Spatio-Temporal Architecture for super-resolution, to significantly reduce the necessary scanning time and the subsequent radiation exposure. The inputs to STAR are multi-directional 2D low-resolution spatio-temporal patches taken at different cross-sections over space and time. By training multiple directional networks followed by a conjoint reconstruction network, our approach produces high-resolution spatio-temporal volumes. Experimental results demonstrate that STAR maintains image quality and the accuracy of cerebral hemodynamic parameters at only one-third of the original scanning time.

1 Introduction

Computed tomography perfusion (CTP) is one of the most widely used imaging modalities for disease diagnosis and therapy planning, such as in stroke and oncology [3,11], especially in emergency situations. Cerebral CTP scans a focused brain region for a prolonged period of time to quantify the blood flow dynamics in the brain. However, a single 40-second cerebral CTP scan can expose the human body to as much as a year's worth of radiation from natural surroundings [7]; by comparison, a chest x-ray is on par with about ten days' worth of such exposure. Moreover, by repetitively scanning a particular region of the brain, there is always a chance the patient may experience the effects of excessive radiation exposure. Effects such as hair loss (epilation) and skin reddening (erythema) have been reported in a CT brain perfusion over-exposure incident [13], and risks such as cancer and congenital disabilities are also of public concern [2]. Simply lowering the radiation dose increases image noise [8], while optimizing the CT scanner hardware increases cost. Significant research therefore continues with the goal of reducing radiation exposure from CTP scans.

In recent years, deep learning has achieved significant performance improvements in super-resolution (SR) and image reconstruction [1,5,10,9]. Deep learning models, especially convolutional neural network (CNN) architectures, can learn from low-resolution (LR) image inputs to reconstruct high-resolution (HR) outputs, providing a practical solution for image reconstruction. However, most deep learning super-resolution frameworks focus on 2D natural image SR, since adding the temporal dimension is more challenging, especially for medical images. In this work, we address the challenges of temporal SR and demonstrate the feasibility of our CNN-based framework on cerebral CTP, with the goal of increasing scanning intervals and thus reducing the overall scanning time, thereby reducing the radiation delivered to patients.

Contributions: This paper proposes STAR, an end-to-end Spatio-Temporal Architecture for super-resolution, and validates this framework on a clinical cerebral CTP dataset. The proposed STAR architecture consists of two main components: single-directional networks (SDNs) and a multi-directional conjoint CNN. The SDNs capture both spatial and temporal features from CTP slices simultaneously through different cross-section patch representations, and the multi-directional conjoint CNN integrates the single-directional information to reconstruct the final HR spatio-temporal cerebral CTP sequences. Specifically, the contributions are three-fold: (1) Our patch representation layer extracts CTP features from both the spatial and the temporal dimensions. With this cross-section information, the STAR model can represent both spatial and temporal details for CTP image SR, and in particular improve the temporal resolution of CTP sequences. (2) We integrate multiple SDNs with a conjoint multi-directional network to boost performance on 3D spatio-temporal CTP data. (3) STAR can reduce the scanning time to only one-third of the current protocol with comparable image quality and accuracy, measured by the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of the hemodynamic maps used for disease diagnosis.

2 Methodology

In this section, we first introduce our patch representation scheme for generating 2D spatio-temporal LR inputs. Then, we explain how the SDNs improve the spatial and temporal resolution simultaneously for each cross-section. Finally, we describe our conjoint model, which synthesizes multi-directional inputs into a spatio-temporal HR image sequence.

Patch Representation. The input 2D LR patches for image SR are generated from 3D cerebral CTP slices (X × Y × T), where X and Y are the 2D spatial dimensions and T is the temporal dimension of the sequence. We also consider the diagonal (D) direction of the X-Y plane as one spatial dimension, since X and Y are equal in our data. We extract 2D patches in the X-Y plane as well as in each plane pairing one spatial direction with the T dimension: X-T, Y-T, and D-T. With these cross-section data, we can re-scale along the spatial direction, the temporal direction, or both to create 2D LR patches. For instance, a 2D spatio-temporal patch represents how a single spatial vector changes through time, and re-scaling along the temporal dimension corresponds to changing the CTP scanning time by a particular ratio. After feeding these LR patches into the convolution layers to learn the spatio-temporal details, an HR output is generated at the testing stage based on the captured features.
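To make the cross-section representation concrete, the following numpy sketch shows one way such 2D patch sets could be cut from a CTP volume. The 41-pixel patch size and the single-diagonal handling of the D-T plane are illustrative assumptions; only the stride of 21 is taken from the experiments described later.

```python
import numpy as np

def extract_cross_section_patches(volume, patch=41, stride=21):
    """Cut 2D patches from a CTP volume of shape (X, Y, T) along the four
    cross-sections used in the paper: X-Y, X-T, Y-T, and D-T.
    Patch size here is an assumption; stride of 21 follows the experiments."""
    X, Y, T = volume.shape
    planes = {
        "XY": [volume[:, :, t] for t in range(T)],   # spatial slices
        "XT": [volume[:, y, :] for y in range(Y)],   # one spatial axis vs. time
        "YT": [volume[x, :, :] for x in range(X)],
        # D-T: intensities along the main diagonal (X == Y) traced over time
        "DT": [np.stack([volume[i, i, :] for i in range(min(X, Y))])],
    }
    patches = {k: [] for k in planes}
    for key, slices in planes.items():
        for s in slices:
            h, w = s.shape
            for r in range(0, h - patch + 1, stride):
                for c in range(0, w - patch + 1, stride):
                    patches[key].append(s[r:r + patch, c:c + patch])
    return {k: np.asarray(v) for k, v in patches.items()}
```

Down-scaling each patch along its temporal axis (or both axes) then produces the LR training inputs for the corresponding direction.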

Fig. 1: Single Directional Network. This model learns the difference between the LR inputs and the HR ground-truth image for each cross-section of the high-dimensional medical images. A skip connection from the input image to the reconstruction layer lets the model learn the residual between the LR and HR images. Convolution and ReLU layers occur in pairs, and we use 64 filters of size 3 × 3 in each convolutional layer.

Single Directional Network. The Single Directional Network takes input patches from one of the four combinations of spatial and temporal dimensions: X-Y, X-T, Y-T, and D-T. Selecting a proper CNN model is critical for learning spatio-temporal features in SR problems. We adapt the very deep network for super-resolution (VDSR) [5], with an optimized network structure, as the SDN (see Fig. 1) because of its high performance on 2D natural image SR. With numerous small 3 × 3 filters in the convolution layers, the deep architecture not only captures detailed image information but also reduces computational complexity [12]. The convolution layers of the SDN exploit spatio-temporal information over large cross-section regions by cascading small filters many times. A filter is an integral component of the layered architecture: it is an operator applied to the entire image that transforms the information encoded in the pixels. We use 64 filters of size 3 × 3 in the convolutional layers, where each filter operates on a 3 × 3 region of the 2D input patches. The first convolution layer operates directly on the spatio-temporal patches to obtain feature maps, while the kernels in the middle layers are convolved with the results of the previous layer. The j-th intermediate feature map f_j^{(l)} of a middle layer l is computed by convolving kernels w_{kj}^{(l)} with the output feature maps f_k^{(l-1)} of the previous layer l-1, that is,

f_j^{(l)} = \frac{1}{K} \sum_{k=1}^{K} f_k^{(l-1)} \ast w_{kj}^{(l)},

where \ast is the convolution operator and K is the number of feature maps in layer l-1. By zero-padding every convolutional layer during training, we ensure the output size is the same as the input size. A ReLU activation layer follows each convolutional layer; it ensures that only the most relevant features are passed to the next convolution layer. Its activation function max(0, x) defines the output of a node given an input x.
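As a rough illustration of the SDN described above, the following PyTorch sketch stacks 3 × 3 convolution + ReLU pairs with 64 filters, ends with a single-channel reconstruction layer, and adds the global skip connection for residual learning. The 20-layer depth follows the choice reported in the experiments; the single-channel input and all other details are assumptions rather than the authors' exact Caffe configuration.

```python
import torch
import torch.nn as nn

class SDN(nn.Module):
    """VDSR-style single-directional network sketch: conv+ReLU pairs with
    64 filters of size 3x3, a final reconstruction conv, and a global skip
    connection so the network predicts the LR-to-HR residual."""

    def __init__(self, depth=20, channels=64):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]  # reconstruction layer
        self.body = nn.Sequential(*layers)

    def forward(self, lr):                 # lr: (N, 1, H, W) interpolated LR patch
        residual = self.body(lr)           # predicted HR - LR difference
        return lr + residual               # skip connection adds the LR input back
```

Zero padding in every layer keeps the output the same size as the input, matching the description above.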

Fig. 2: STAR Architecture.

The last layer of the SDN is the CTP image reconstruction layer. A reconstruction function y = φ(p, f^{(L)}) is responsible for constructing the HR outputs. In this function, p denotes the 2D LR patches, f^{(L)} is the result of the convolution layers, and φ is the reconstruction function that sums the predicted residuals and the LR inputs to generate the HR outputs. We also set a high learning rate and apply residual learning to accelerate convergence and reduce training time.

STAR Architecture. A Single Directional Network only extracts features from one of the directions: Y-T, X-T, D-T, or X-Y. Simply stacking the outputs from the various cross-sections into a spatio-temporal volume misses the sub-voxel information and the contextual cues from the different planes. We therefore enhance the SDNs by integrating the different cross-sections into a spatio-temporal volume through a conjoint layer and another CNN, with the goal of preserving the complementary inter-directional information. Fig. 2 visualizes the proposed STAR model. In this model, the left side shows the extraction of 2D patches along the four directions: Y-T, X-T, D-T, and X-Y. As indicated by the arrows, those patches are fed into single-directional networks S_i^{(L)}, where i = 1, 2, 3, 4 indexes the directional inputs and L is the number of convolution layers. After reconstructing the perfusion slices P_i through the single-directional networks S_i^{(L)}, we compute the mean M = mean(φ(P_i, S_i^{(L)})) of all directional outputs in the conjoint layer. Finally, another deep neural network performs the conjoint learning, and its result is the final set of HR CTP slices. The advantage of combining spatio-temporal features from different directions is two-fold. On one hand, the sub-voxel information and contextual cues from different planes of the brain CTP volume can be learned, alleviating the bias caused by learning from only one direction. On the other hand, the central anatomical structure of the brain is captured from every direction, so pixels in the same brain region are reinforced after the conjunction, providing complementary details that overcome the blurring caused by the bicubic interpolation used for LR patch generation.
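A minimal sketch of the conjoint stage, assuming the SDN class above: four directional SDNs produce reconstructions, the conjoint layer averages them after each has been mapped back to a common frame, and a second deep CNN refines the mean into the final HR slice. The reassembly of cross-section outputs into a common volume is abstracted away here by a hypothetical `align` callable.

```python
import torch
import torch.nn as nn

class STAR(nn.Module):
    """Conjoint-stage sketch: four direction-specific SDNs (X-Y, X-T, Y-T, D-T),
    a mean-based conjoint layer, and a second VDSR-style CNN for refinement."""

    def __init__(self, sdn_cls, depth=20):
        super().__init__()
        self.sdns = nn.ModuleList([sdn_cls(depth) for _ in range(4)])
        self.conjoint = sdn_cls(depth)    # cascaded deep CNN with the same settings

    def forward(self, inputs, align):
        # inputs: list of 4 tensors, one LR batch per direction (XY, XT, YT, DT)
        # align:  hypothetical callable mapping a direction index and its SDN output
        #         back to the common (N, 1, H, W) frame of the target slice
        recons = [align(i, sdn(x))
                  for i, (sdn, x) in enumerate(zip(self.sdns, inputs))]
        mean = torch.stack(recons, dim=0).mean(dim=0)   # conjoint layer: average
        return self.conjoint(mean)                      # final HR CTP slice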

3 Experiments and Results

Table 1: PSNR and SSIM comparison for perfusion maps (sSVD-based CBF and CBV) generated by different methods: Bicubic, Spat-SDN, Temp-SDN, and STAR.

Fig. 3: PSNR (dB) comparison between the 3-layer CNN and the 20-layer CNN in the single-directional SDNs (X-T, Y-T, and D-T), with bicubic interpolation as the baseline.

Our models are built on top of Caffe, the deep learning framework by BVLC [4], and trained on a GPU server containing an NVIDIA K40 GPU and 64 GB of RAM. The models are evaluated on 22 patients' 10,472 CTP slices, scanned at four brain regions of 5 mm thickness with a spatial resolution of 0.43 mm. The slices within each sequence are intensity-normalized and co-registered over time. We randomly split the slices into three subsets: 7,140 for training (15 patients), 1,428 for validation (3 patients), and 1,904 for testing (4 patients). Each sequence has size X × Y × T. To create more input images and train a robust model, we clip fixed-size patches with a stride of 21 pixels from the four directions, which yields 36,800, 36,800, 73,600, and 62,951 patches in the X-Y, Y-T, D-T, and X-T directions, respectively. We create the LR patches by bicubic re-sizing of the original 2D HR patches to 1/3 of their size.

Experiment on SDNs. We test two different CNN structures, and the results show that the basic single-directional network outperforms the shallow network of SRCNN (3 layers) [1] at all four cross-sections. This confirms that, for both spatial and temporal SR, a deeper network is better than a shallow one. In Fig. 3, the three single-directional SDNs that perform temporal SR are compared with bicubic interpolation and SRCNN, and the 20-layer model achieves better PSNR on average. Among these three single-directional networks, the Y-T direction gives the best PSNR with the 20-layer structure. The X-Y spatial direction has a lower PSNR than the temporal cross-sections, which in turn yield clear gains over both bicubic and SRCNN. Therefore, we choose this 20-layer deep CNN structure for our SDN model. To keep the output image the same size as the input, the kernel size, stride, and padding of our deep directional CNN are set to 3, 1, and 1, respectively. Except for the last layer, which outputs a single feature map, every convolution layer has 64 output channels. With residual learning, the loss function is determined by the estimated residual between the HR ground-truth image and the LR input. The base learning rate is set to 0.1, and weight decay is applied for faster convergence.
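The degradation and evaluation pipeline described in this section could be sketched as follows; scikit-image's cubic `resize` is used here as a stand-in for the bicubic re-sizing, and the data-range handling in the PSNR/SSIM calls is an assumption rather than the authors' exact protocol.

```python
import numpy as np
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def make_lr(hr_patch, scale=3):
    """Shrink an HR patch to 1/scale and interpolate back to the original grid,
    approximating the bicubic 1/3 degradation used to create LR training data."""
    h, w = hr_patch.shape
    small = resize(hr_patch, (h // scale, w // scale), order=3, anti_aliasing=True)
    return resize(small, (h, w), order=3)

def evaluate(hr, sr):
    """PSNR (dB) and SSIM, as used to compare bicubic, SDN, and STAR outputs."""
    rng = hr.max() - hr.min()
    return (peak_signal_noise_ratio(hr, sr, data_range=rng),
            structural_similarity(hr, sr, data_range=rng))
```

Applying `make_lr` along the temporal axis of a cross-section patch (rather than both axes) corresponds to the temporal-only down-sampling setting compared in Fig. 3.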

Fig. 4: The gray-scale images (first row) and cerebral hemodynamic parameter maps (second row: CBF, third row: CBV) at different stages: column a) LR input, column b) SDN spatial, column c) SDN temporal, column d) STAR spatio-temporal, column e) ground truth.

Furthermore, we compare the performance of the different patch representations in the basic model. The result of down-sampling in the temporal direction only is shown in Fig. 3. We also evaluate SDNs with Y-T, X-T, and D-T LR inputs that are scaled down in both the spatial and temporal directions; they yield about 2.13 dB, 2.98 dB, and 4.01 dB lower PSNR than down-sampling in the temporal direction only. However, these SDNs achieve a greater improvement over their baselines, on average about 0.9 dB more gain than the temporal-only SDNs achieve. This experiment indicates that the temporal pattern can be predicted more precisely than the spatial features with the proposed approach. In other words, our basic networks have considerable potential if spatial and temporal learning are combined in an appropriate way, which is exactly what the STAR cross-section learning approach does.

Experiment on STAR. The proposed STAR network combines multiple SDNs from different cross-sections and cascades them with another deep convolutional network in which convolution and ReLU layers occur in pairs. The cascaded layers use the same parameter settings as the SDN. In the first cascaded convolution layer, the filters are convolved with the mean of the outputs from the spatial SDN and the three temporal SDNs, so the sub-voxel information and the contextual cues can be learned across the different directions.

As can be seen in Fig. 4, the resolution of the images improves gradually from left to right across the columns. The first row of the figure shows the gray-scale CTP slices. The following two rows show the cerebral hemodynamic parameters, Cerebral Blood Flow (CBF) and Cerebral Blood Volume (CBV), calculated by the Perfusion Mismatch Analyzer (PMA [6]) from the corresponding CTP sequences in the first row. The LR images in column a) are down-scaled by a factor of 3 from the ground-truth gray-scale images in column e). Columns b) and c) are the intermediate results from the spatial-SR SDN, where we only perform spatial SR in the X-Y direction, and the temporal-SR SDN, where we only perform SR on the temporal cross-sections. The superior outcomes of the proposed STAR method are shown in column d). In the regions indicated by the arrows, the LR images miss details and their outlines are blurry. The spatial-SR images show more details than the LR images, but the boundaries are still not clear enough; the temporal-SR images are similar to the spatial-SR images, with drawbacks in different regions. By combining the spatial and temporal details, we obtain a much better SR result: the images appear clearer and contain additional details.

We also measure the PSNR and SSIM of the perfusion images. The highlighted row in Table 1 shows the best values, achieved by our method. The basic models, whether the spatial-only SDN (Spat-SDN) or the temporal-only SDN, provide better image quality than the bicubic method to varying degrees. Our final STAR model gives the highest PSNR on CBF, 6.86 dB higher than the baseline, and also improves on bicubic for CBV; in the SSIM comparison, STAR likewise achieves the best results for both maps. These experiments show that the STAR framework produces perfusion maps of comparable quality and accuracy at only 1/3 of the original scanning time. Our model accepts input acquired with 1/3 of the CTP scanning time and provides high-resolution outputs, which can potentially reduce the risk of radiation over-exposure.

4 Conclusion

In this paper, we have presented STAR, an end-to-end spatio-temporal super-resolution framework. The experimental results show that the proposed basic single-directional network improves both spatial and temporal resolution, while the multi-directional conjoint network further enhances the SR results, comparing favorably with temporal-only or spatial-only SR. By learning spatio-temporal features, our approach maintains the quality of brain CTP slices at one-third of the original scanning time. We believe that by reducing the scanning time, our approach will provide a practical solution for improving spatial and temporal resolution and help lower the possibility of excessive patient radiation exposure, while increasing the potential for assisted clinical diagnosis of cerebrovascular disease with high-quality perfusion images. Our plans include applying our pipeline to a variety of imaging modalities, including functional MRI and PET/CT, for functional super-resolution, and investigating the correlation between the temporal and spatial up-scaling ratios and super-resolution quality.

Acknowledgements

This work is partially supported by the National Science Foundation under Grant No. IIS, the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1TR000457, the National Key Research and Development Program of China (No. 2016YFC), and the National Natural Science Foundation of China.

References

1. Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision. Springer (2014)
2. de González, A.B., Mahesh, M., Kim, K.P., Bhargavan, M., Lewis, R., Mettler, F., Land, C.: Projected cancer risks from computed tomographic scans performed in the United States in 2007. Archives of Internal Medicine 169(22) (2009)
3. Hoeffner, E.G., Case, I., Jain, R., Gujar, S.K., Shah, G.V., et al.: Cerebral perfusion CT: technique and clinical applications. Radiology 231(3) (2004)
4. Jia, Y., Shelhamer, E., Donahue, J., et al.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint (2014)
5. Kim, J., Lee, J.K., Lee, K.M.: Accurate image super-resolution using very deep convolutional networks. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR, Oral) (June 2016)
6. Kudo, K.: Perfusion Mismatch Analyzer. ASIST-Japan web site, http://asist.umin.jp/index-e.htm, accessed December 15
7. Mettler Jr., F.A., Bhargavan, M., et al.: Radiologic and nuclear medicine studies in the United States and worldwide: frequency, radiation dose, and comparison with other radiation sources. Radiology 253(2) (2009)
8. Nelson, T.R.: Practical strategies to reduce pediatric CT radiation dose. Journal of the American College of Radiology 11(3) (2014)
9. Oktay, O., Bai, W., Lee, M., et al.: Multi-input cardiac image super-resolution using convolutional neural networks. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer (2016)
10. Shi, W., et al.: Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
11. Shrier, D.A., Tanaka, H., Numaguchi, Y., Konno, S., Patel, U., Shibata, D.: CT angiography in the evaluation of acute stroke. American Journal of Neuroradiology 18(6) (1997)
12. Szymanski, L., McCane, B.: Deep networks are effective encoders of periodicity. IEEE Transactions on Neural Networks and Learning Systems 25(10) (2014)
13. Wintermark, M., Lev, M.: FDA investigates the safety of brain perfusion CT. American Journal of Neuroradiology 31(1), 2-3 (2010)
