Technical Note: U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning

Shupeng Chen a) and An Qin
Department of Radiation Oncology, William Beaumont Hospital, 3601 W. 13 Mile Rd, Royal Oak, MI 48073, USA

Dingyi Zhou
Department of Oncology, Renmin Hospital of Wuhan University, Wuhan, Hubei, China

Di Yan
Department of Radiation Oncology, William Beaumont Hospital, 3601 W. 13 Mile Rd, Royal Oak, MI 48073, USA

(Received 12 July 2018; revised 30 August 2018; accepted for publication 9 October 2018; published 13 November 2018)

Purpose: Clinical implementation of magnetic resonance imaging (MRI)-only radiotherapy requires a method to derive synthetic CT images (S-CT) for dose calculation. This study investigated the feasibility of building a deep convolutional neural network for MRI-based S-CT generation and evaluated its dosimetric accuracy for prostate IMRT planning.

Methods: Paired CT and T2-weighted MR images were acquired from each of 51 prostate cancer patients. Fifteen pairs were randomly chosen as the test set and the remaining 36 pairs were used as the training set. The training subjects were augmented by applying artificial deformations and fed to a two-dimensional U-net that contains 23 convolutional layers and … million trainable parameters. The U-net represents a nonlinear function whose input is an MR slice and whose output is the corresponding S-CT slice. The mean absolute error (MAE) in Hounsfield units (HU) between the true CT and S-CT images was used to evaluate the HU estimation accuracy. IMRT plans with a dose of 79.2 Gy prescribed to the PTV were designed using the true CT images. The true CT images were then replaced by the S-CT images, the dose matrices were recalculated for the same plans, and the results were compared to those obtained on the true CT using gamma index analysis and absolute point dose discrepancy.

Results: The U-net was trained from scratch in … h using a GP100 GPU. The computation time for generating a new S-CT volume image was … s. Within body, the (mean ± SD) MAE was (… ± …) HU. The 1%/1 mm and 2%/2 mm gamma pass rates were over 98.03% and 99.36%, respectively. The DVH parameter discrepancy was less than 0.87%, and the maximum point dose discrepancy within the PTV was less than 1.01% with respect to the prescription.

Conclusion: The U-net can generate S-CT images from conventional MR images within seconds, with high dosimetric accuracy for prostate IMRT planning. © 2018 American Association of Physicists in Medicine

Key words: deep learning, IMRT, prostate, synthetic CT, U-net

1. INTRODUCTION

Magnetic resonance imaging (MRI) has been widely used in the radiotherapy clinic for organ delineation due to its superior soft tissue contrast over that of computed tomography (CT). Currently, the MR image is usually used as a secondary image modality for tumor/organ-of-interest delineation after being rigidly aligned to the simulation CT image. A radiotherapy workflow based on MRI simulation alone can eliminate the image registration uncertainties in contour propagation and reduce radiation exposure and the associated patient discomfort. Implementing MRI-guided radiotherapy in the clinic requires a method to derive CT-equivalent information, or an electron density distribution, from an MR image for radiation dose calculation. Such CT-equivalent data are usually referred to as pseudo-CT 1,2 or synthetic CT (S-CT) 3–5 images.
Various methods of S-CT image generation have been proposed, which can be categorized as (a) statistical modeling; 6–11 (b) multi-atlas deformable image registration alone 12 or combined with local weighting or patch fusion algorithms; 3,13–17 (c) random forest-based methods; 2,18–20 and (d) deep learning-based methods. Statistical modeling methods usually require special MRI sequences to provide additional information, such as using the ultrashort echo time (UTE) sequence to segment air and bony structures. 6,9 However, those sequences are not commonly used in the radiotherapy clinic. Multi-atlas-based methods inherently suffer from large uncertainties induced by intersubject registration due to considerable anatomical variations. Including local weighting or patch fusion algorithms improves HU estimation accuracy, especially in high-HU-contrast areas. 3,13 However, their implementations either require prelabeled regions of interest (ROIs) 3 or extensive computation for calculating local similarity metrics voxel by voxel between a tested subject and each of the atlases after applying interpatient deformable image registrations. 13,16 Random forest-based methods 2,18 do not require deformable image registration between a tested subject and the training subjects. However, their prediction accuracy largely depends on the quality of features extracted from a small image patch, which may not provide sufficient spatial information for CT number estimation.

The major advantages of deep learning-based methods include (a) fast model deployment, requiring only seconds to generate an S-CT volume image once the model is trained; (b) the capability to learn a nonlinear MR-to-CT intensity mapping from a large number of training samples; and (c) fully automatic operation, without requiring any tissue/organ segmentation during the training and testing stages. Different deep learning models have been applied to MRI-based S-CT image generation. Roy et al. 23 adopted an inception block structure to build a shallow convolutional neural network (ConvNet) with only 5–6 convolutional layers. Han 21 fine-tuned the pretrained VGG-16 network 27 to create a deep ConvNet with 27 convolutional layers for brain S-CT image generation. 28 Recently, the generative adversarial network (GAN) 29 has also been introduced for S-CT image generation, enabling the ConvNet to be trained with unpaired CT-MR images based on the principle of cycle consistency. 30 Although these studies have shown S-CT estimation accuracy comparable to state-of-the-art algorithms, few of them have so far been evaluated in clinical planning studies.

In this study, we investigated the feasibility of training a deep ConvNet from scratch for S-CT image generation from conventional T2-weighted MR images, using a limited number of paired CT-MR images obtained from prostate cancer patients. We adopted the standard U-net architecture 28 because of its capability for high-spatial-resolution prediction tasks. The U-net-based S-CT algorithm was compared with a previously published multi-atlas algorithm. 3 In addition, we evaluated the dosimetric accuracy of using the S-CT images for clinical prostate intensity-modulated radiation therapy (IMRT) planning.

2. MATERIALS AND METHODS

2.A. Data acquisition and preprocessing

A total of 51 paired MR-CT image sets of prostate cancer patients were used in this study. Patients were aged between 51 and 85 yr, with a median body weight of 87 (range 58–118) kg. No specific instructions were given to the patients regarding bladder and bowel preparation. The patients were imaged head-first supine with knee support for both CT and MR imaging, following the same setup procedure. The median interval between the CT and MR imaging sessions was 45.5 (range 31–116) min, and the total MR acquisition time was approximately 7 min. The MR scanner was equipped with a dedicated radiotherapy flat couch. All CT images were acquired using a Brilliance Big Bore scanner (Philips Healthcare) with a reconstructed voxel size of … mm³. MR images were acquired on a 3 T Ingenia high-field system (Philips Healthcare) using a spin-echo sequence (T2-TSE) with an echo time/repetition time (TE/TR) of 110/5000 ms, a flip angle of 90°, and a reconstructed voxel size of … mm³, with a field of view covering the entire pelvic region.

Multimodality (MR-CT) deformable image registration was performed for each pair. The resulting paired MR-CT images shared a resolution of … mm³, with … voxels per axial slice and a total of … slices per patient. The intensity of each MR volume image was normalized to zero mean and unit variance and then scaled to 255 intensity bins. A binary body mask was derived from each MR image and applied to remove the nonanatomical information outside the body from both the MR and CT images.
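
As a concrete illustration of the preprocessing described above, the following is a minimal NumPy/SciPy sketch, not the authors' code. The threshold-plus-morphology body mask, the -1000 HU background fill, and the function name are assumptions; the paper does not specify how the mask was derived.

```python
import numpy as np
from scipy import ndimage

def preprocess_pair(mr_vol, ct_vol, air_threshold=30):
    """Normalize an MR volume and mask out nonanatomical background in both images.

    mr_vol, ct_vol: 3D numpy arrays on the same grid (after MR-CT registration).
    air_threshold: illustrative MR intensity cutoff for the body mask (assumption).
    """
    # Zero-mean, unit-variance normalization of the MR intensities.
    mr = (mr_vol - mr_vol.mean()) / mr_vol.std()

    # Rescale to 255 intensity bins.
    mr = (mr - mr.min()) / (mr.max() - mr.min()) * 255.0

    # Body mask: threshold, keep the largest connected component, fill holes.
    mask = mr_vol > air_threshold
    labels, n = ndimage.label(mask)
    if n > 0:
        largest = np.argmax(ndimage.sum(mask, labels, range(1, n + 1))) + 1
        mask = labels == largest
    mask = ndimage.binary_fill_holes(mask)

    # Remove content outside the body; set the CT background to air (-1000 HU).
    mr[~mask] = 0.0
    ct = np.where(mask, ct_vol, -1000.0)
    return mr, ct, mask
```
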
2.B. Architecture

A schematic view of the U-net model is depicted in Fig. 1. Briefly, it contains 23 convolutional layers interleaved with max pooling and unpooling layers, and … million trainable parameters. The contracting path on the left consists of repeated applications of two convolutions, each followed by a rectified linear unit (ReLU), and a max pooling with stride two for downsampling. Each step in the expansive path on the right consists of an upsampling of the feature maps, followed by a convolution that halves the number of feature channels, a concatenation with the corresponding copied feature map from the contracting path, and two consecutive convolutions, each followed by a ReLU activation. Different from the original U-net, 28 we applied batch normalization (BN) right after each convolutional layer and before the activation, to accelerate the training process by reducing internal covariate shift. 31 In addition, we applied zero padding in each convolutional layer so that the size of its output feature maps is preserved after the convolution operation.

FIG. 1. The U-net architecture. Each box represents a multichannel feature map; the number of channels is denoted on top of the box, and the size of the feature map at its lower left. White boxes represent copied feature maps. (BN, batch normalization; ReLU, rectified linear unit activation function.) [Color figure can be viewed at wileyonlinelibrary.com]
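
To make the architecture description concrete, below is an illustrative Keras sketch of such a U-net, not the authors' implementation. The 3 × 3 kernels, base filter count, and depth of four pooling levels are assumptions (with these choices the builder happens to contain 23 convolutional layers, matching the count stated above); the BN-before-activation and zero-padding ("same") choices follow the text.

```python
from tensorflow.keras import layers, Model

def conv_bn_relu(x, filters):
    """Zero-padded convolution -> batch normalization -> ReLU, as described in Sec. 2.B.
    The 3x3 kernel size follows the original U-net and is an assumption here."""
    x = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

def build_unet(input_shape=(256, 256, 1), base_filters=64, depth=4):
    """Illustrative 2D U-net with a contracting path (max pooling, stride 2) and an
    expansive path (upsampling, channel-halving convolutions, skip concatenations)."""
    inputs = layers.Input(shape=input_shape)
    x, skips = inputs, []

    # Contracting path: two conv-BN-ReLU blocks per level, then max pooling.
    for level in range(depth):
        filters = base_filters * 2 ** level
        x = conv_bn_relu(x, filters)
        x = conv_bn_relu(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)

    # Bottleneck.
    x = conv_bn_relu(x, base_filters * 2 ** depth)
    x = conv_bn_relu(x, base_filters * 2 ** depth)

    # Expansive path: upsample, halve the channels, concatenate the skip, two conv blocks.
    for level in reversed(range(depth)):
        filters = base_filters * 2 ** level
        x = layers.UpSampling2D(size=2)(x)
        x = conv_bn_relu(x, filters)               # convolution that halves the channel count
        x = layers.Concatenate()([skips[level], x])
        x = conv_bn_relu(x, filters)
        x = conv_bn_relu(x, filters)

    # Single-channel output: the synthetic CT slice in HU (linear activation).
    outputs = layers.Conv2D(1, 1, activation="linear")(x)
    return Model(inputs, outputs)
```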

2.C. Implementation and training

Fifteen of the 51 CT-MR pairs were randomly selected as the tested subjects and the other 36 as the training subjects. We performed data augmentation prior to the training process by applying deformations to the training subjects to increase the number of training samples. Specifically, for a selected training subject, deformable image registration was performed between the selected training subject and each of the other training subjects. The resulting deformations were multiplied by a coefficient λ and applied to the selected training subject's CT-MR images to generate a set of deformed paired CT-MR images. The coefficient λ controlled the magnitude of the applied deformations. In this study, we experimentally chose a small λ = 1/3 to minimize the possibility of generating unrealistic deformed images with local distortions. After augmentation, the number of training subjects increased from 36 to 1296, comprising a total of 88,353 paired CT-MR slices. All image registrations in this study were performed using a commercial tool (ADMIRE, CMS Software; Elekta Inc., Stockholm, Sweden).

Training the U-net requires estimating the network parameters. This was achieved by minimizing a loss defined as the absolute difference between the true CT images and the S-CT images generated from the given MR images. The model was implemented using Keras 32 with TensorFlow 33 as the backend and was trained from scratch on a single NVIDIA GP100 GPU card with 16 GB of memory. A total of 88,353 paired MR-CT slices were used to train the model with a mini-batch 34 size of five samples (slices). The Adam optimization algorithm 35 was used with the following parameters: initial learning rate = 10⁻²; β₁ = 0.9; β₂ = 0.999; ε = 10⁻⁸; no dropout 36 was applied. The model was trained for a total of 22 epochs (… k iterations), and the learning rate was divided by 10 at the 16th epoch.
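
A minimal Keras sketch of this training configuration is given below. The optimizer settings, loss, batch size, epoch count, and learning-rate drop follow the text; the input slice size, the array names mr_slices/ct_slices, and the build_unet helper (from the sketch in Sec. 2.B) are assumptions.

```python
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import LearningRateScheduler

model = build_unet(input_shape=(256, 256, 1))       # illustrative helper from Sec. 2.B
model.compile(
    optimizer=Adam(learning_rate=1e-2, beta_1=0.9, beta_2=0.999, epsilon=1e-8),
    loss="mean_absolute_error",                      # absolute difference between true CT and S-CT
)

def lr_schedule(epoch, lr):
    # Divide the learning rate by 10 at the 16th epoch (0-based epoch index 15).
    return lr / 10.0 if epoch == 15 else lr

# mr_slices, ct_slices: the 88,353 augmented training slices, shaped (N, 256, 256, 1).
model.fit(
    mr_slices, ct_slices,
    batch_size=5,
    epochs=22,
    callbacks=[LearningRateScheduler(lr_schedule)],
)
```
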
2.D. Evaluation

For each tested subject, the MR image was used as input to the U-net to generate the corresponding S-CT image slice by slice. The corresponding paired CT image is referred to as the true CT image in this study. The mean error (ME) and mean absolute error (MAE) between the S-CT image and the true CT image were calculated and used to measure the CT number estimation accuracy:

\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N}\left[\mathrm{SCT}(v_i) - \mathrm{CT}(v_i)\right] \quad (1)

and

\mathrm{MAE} = \frac{1}{N}\sum_{i=1}^{N}\left|\mathrm{SCT}(v_i) - \mathrm{CT}(v_i)\right| \quad (2)

where v_i denotes a voxel within the body and N is the total number of voxels. The ME and MAE were calculated within body, bone (HU ≥ 150), and soft tissue (150 > HU ≥ −100), respectively. The S-CT images generated in this study were compared with those generated using our previously published multi-atlas method. 3

The dosimetric accuracy of using the S-CT images for treatment planning was analyzed using clinical IMRT plans with a dose of 79.2 Gy prescribed to the planning target volume (PTV), 6 MV photon beams, and 7–9 gantry angles/ports depending on the patient geometry. For each tested patient, a plan was designed and optimized on the true CT image. The true CT image was then replaced by the S-CT image, and the dose matrices were recalculated using the same beam parameters (Pinnacle, v9.7). The dose matrices covered all ROIs, including the PTV, bladder, rectum, and femoral region. The resulting dose matrices were compared using the global three-dimensional gamma index analysis 37 under the criteria of 1%/1 mm, 2%/2 mm, and 3%/3 mm (dose discrepancy/distance to agreement), suppressing areas that have a point dose less than 10% of the maximum dose. Within the ROIs, the average and maximum absolute point dose discrepancies with respect to the prescription dose were calculated. Dose volume histogram (DVH) parameters obtained from the dose matrices calculated on the true CT and S-CT images were compared.
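
For reference, the following is a minimal NumPy sketch of Eqs. (1) and (2) evaluated within the body and within the HU-defined tissue classes above. The function name is hypothetical, and defining the tissue classes on the true CT is an assumption.

```python
import numpy as np

def hu_errors(sct, ct, body_mask):
    """Mean error (Eq. 1) and mean absolute error (Eq. 2) between S-CT and true CT,
    reported within body, bone (HU >= 150), and soft tissue (-100 <= HU < 150)."""
    diff = sct.astype(np.float64) - ct.astype(np.float64)

    regions = {
        "body": body_mask,
        "bone": body_mask & (ct >= 150),
        "soft tissue": body_mask & (ct >= -100) & (ct < 150),
    }
    return {
        name: {"ME": float(diff[mask].mean()), "MAE": float(np.abs(diff[mask]).mean())}
        for name, mask in regions.items()
    }
```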

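The gamma analysis can be sketched in the same spirit. The routine below is a simplified global 3D gamma pass-rate computation (no sub-voxel interpolation), not the tool used in the study; offsets farther than the distance criterion cannot yield a gamma value of one or less, so they are skipped.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing, dose_crit=0.01, dist_crit_mm=1.0,
                    low_dose_cut=0.10):
    """Fraction (%) of voxels with gamma <= 1 under a global dose criterion
    (fraction of the maximum reference dose) and a distance-to-agreement criterion in mm.
    spacing: voxel spacing (dz, dy, dx) in mm; voxels below low_dose_cut * max dose are excluded."""
    d_max = dose_ref.max()
    dose_tol = dose_crit * d_max
    win = [int(np.ceil(dist_crit_mm / s)) for s in spacing]   # search window in voxels

    # Pad the evaluated dose so that shifted comparisons stay in bounds.
    dose_eval_p = np.pad(dose_eval, [(w, w) for w in win], mode="edge")

    gamma_sq = np.full(dose_ref.shape, np.inf)
    for dz in range(-win[0], win[0] + 1):
        for dy in range(-win[1], win[1] + 1):
            for dx in range(-win[2], win[2] + 1):
                dist_sq = (dz * spacing[0]) ** 2 + (dy * spacing[1]) ** 2 + (dx * spacing[2]) ** 2
                if dist_sq > dist_crit_mm ** 2:
                    continue
                shifted = dose_eval_p[
                    win[0] + dz: win[0] + dz + dose_ref.shape[0],
                    win[1] + dy: win[1] + dy + dose_ref.shape[1],
                    win[2] + dx: win[2] + dx + dose_ref.shape[2],
                ]
                cand = (shifted - dose_ref) ** 2 / dose_tol ** 2 + dist_sq / dist_crit_mm ** 2
                gamma_sq = np.minimum(gamma_sq, cand)

    evaluated = dose_ref >= low_dose_cut * d_max
    return 100.0 * np.mean(gamma_sq[evaluated] <= 1.0)
```
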
3. RESULTS

The model was trained in … h from scratch. Once the model is trained, it takes … s (… ms/slice) to generate an S-CT volume image from a new MR image. In comparison, the multi-atlas method 3 took 2–3 min on the same GPU, using nine atlases for deformable registration followed by a selection algorithm. For the 15 tested patients, the (mean ± SD) MEs between the S-CT and the true CT images within body were (… ± …) HU and (… ± …) HU for the U-net model and the multi-atlas-based method, respectively. The MAEs within body were (… ± …) HU and (… ± …) HU for the U-net model and the multi-atlas-based method, respectively (P = …, Wilcoxon rank sum test). The corresponding maximum MAEs were … and … HU, respectively. The details of the ME and MAE within different tissues are listed in Table I. Figure 2 shows an axial slice of the MR, true CT, and S-CT images generated using the U-net for a patient with average CT number estimation accuracy. Figure 3 shows one patient's S-CT images generated using the two algorithms.

Within body, the 1%/1 mm, 2%/2 mm, and 3%/3 mm gamma pass rates were over 98.03%, 99.36%, and 99.76% for the U-net-generated S-CT images, and over 97%, 99.05%, and 99.67% for the multi-atlas-generated S-CT images. Within the ROIs (PTV, bladder, rectum, and femoral region), the average absolute point dose discrepancy was less than 0.4% and 0.6% for the S-CT images generated by the U-net and the multi-atlas method, respectively. The details of the point dose discrepancies in each ROI are listed in Table II. The discrepancies of the DVH parameters are listed in Table III. Figure 4 shows the dosimetry results of the patient with the largest MAE.

TABLE I. Synthetic CT image Hounsfield unit (HU) discrepancies, mean ± SD (range), in HU.

                                   ME                                        MAE
                                   U-net              Multi-atlas            U-net               Multi-atlas
Body                               … (−0.96, 21.01)   … (−3.3, 16.66)        … (22.36, 43.26)    … (29.85, 48.75)
Soft tissue (150 > HU ≥ −100)      … (−1.54, 12.51)   … (0.01, 17.72)        … (16.24, 22.18)    … (21.98, 32.07)
Bone (HU ≥ 150)                    … (33.26, …)       … (10, 76.33)          … (103.63, …)       … (106.75, …)

ME, mean error; MAE, mean absolute error; SD, standard deviation.

FIG. 2. U-net-generated synthetic CT image (S-CT) of a patient with a mean absolute error MAE = 30.6 Hounsfield units (HU). Left to right: MR (a), true CT (b), S-CT (c), and HU discrepancy (d). The bottom row shows zoomed views of the regions within the yellow rectangles. [Color figure can be viewed at wileyonlinelibrary.com]

FIG. 3. Synthetic CT images (S-CT) of a patient with a mean absolute error MAE = … and 33.9 HU, generated using the U-net and the multi-atlas method, respectively. Left to right: true CT (a), S-CT generated by the U-net (b) and its corresponding HU discrepancy (c), and S-CT generated by the multi-atlas method (d) and its corresponding HU discrepancy (e). [Color figure can be viewed at wileyonlinelibrary.com]

TABLE II. Average and maximum absolute point dose discrepancies with respect to the prescription dose within the regions of interest (ROIs), mean ± SD (range), in %.

                      Average point dose discrepancy                 Maximum point dose discrepancy
ROI                   U-net             Multi-atlas         P        U-net             Multi-atlas         P
PTV                   … (0.07, 0.40)    … (0.09, 0.60)      …        … (0.25, 1.01)    … (0.28, 1.24)      …
Rectum                … (0.04, 0.34)    … (0.08, 0.46)      …        … (0.25, 2.12)    … (0.21, 2.11)      …
Bladder               … (0.02, 0.30)    … (0.04, 0.37)      …        … (0.27, 3.44)    … (0.31, 3.47)      …
Femoral region        … (0.03, 0.22)    … (0.02, 0.28)      …        … (0.25, 1.71)    … (0.22, 1.70)      …

SD, standard deviation; PTV, planning target volume. P values were calculated using the Wilcoxon rank sum test.

TABLE III. Absolute discrepancies of dose volume histogram (DVH) parameters with respect to those calculated on the true CT images, mean ± SD (range), in %.

DVH parameter          U-net              Multi-atlas          P
PTV-D…                 … (0.00, 0.49)     … (0.12, 0.65)       …
PTV-D…                 … (0.00, 0.38)     … (0.01, 0.59)       …
Bladder-D…             … (0.03, 0.87)     … (0.09, 0.89)       …
Rectum-D…              … (0.03, 0.49)     … (0.14, 0.93)       …
Rectum-D…              … (0.01, 0.40)     … (0.07, 0.75)       …
Femoral region-D…      … (0.02, 0.44)     … (0.06, 0.77)       …

SD, standard deviation; PTV, planning target volume. P values were calculated using the Wilcoxon rank sum test.

FIG. 4. Dosimetry results of the patient with the largest mean absolute error, MAE = … HU. Left to right: dose matrix calculated on the true CT (a) and on the U-net-generated synthetic CT (S-CT) (b), the 1%/1 mm (c) and 3%/3 mm (d) gamma matrices, and the dose-volume histogram (e) (solid lines: dose calculated on the true CT; dashed lines: dose calculated on the S-CT). PTV, planning target volume. [Color figure can be viewed at wileyonlinelibrary.com]

4. DISCUSSION

The U-net architecture created in this study achieved competitive CT number estimation accuracy for the prostate/pelvic region, with a (mean ± SD) MAE of (… ± …) HU within body. Compared with our previously published multi-atlas method, 3 the overall accuracy improved significantly. The improvements most likely come from the soft tissue (Table I). However, the HU estimation within bony structures was worse than that of the multi-atlas method. One possible reason could be the large interpatient variability of the CT intensity distribution within bony structures; the training samples from only 36 subjects may not be well representative. Therefore, improvements could be expected by including more training samples. Although both methods suffer from similar effects due to the limited training samples, the U-net seems to be more sensitive, as it determines the CT numbers within bony structures based on the MR intensities from different tissues of all training samples. In contrast, the multi-atlas method relies only on the selected atlases, using an ROI-based local weighting algorithm. 3 Therefore, adding anatomical information to the training data could further improve the CT number estimation accuracy of the U-net. For instance, a ConvNet can be built with multiple inputs, including the MR image and a manually delineated contour of the bony structures, and a single output, that is, the S-CT image. The input contours can be regarded as additional knowledge that helps the model predict CT numbers.
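
As a hedged illustration of this multi-input idea (not something implemented in the study), the extra anatomical prior could simply be supplied as a second input channel of the U-net sketched in Sec. 2.B; the array names and slice size below are hypothetical.

```python
import numpy as np

# Hypothetical two-channel variant: channel 0 holds the normalized MR slice,
# channel 1 a binary bone-contour mask. build_unet is the illustrative helper from Sec. 2.B.
model = build_unet(input_shape=(256, 256, 2))

mr_slice = np.zeros((256, 256), dtype=np.float32)          # placeholder MR slice
bone_mask = np.zeros((256, 256), dtype=np.float32)         # placeholder bone-contour mask
x = np.stack([mr_slice, bone_mask], axis=-1)[np.newaxis]   # shape (1, 256, 256, 2)
sct_slice = model.predict(x)[0, ..., 0]                    # predicted S-CT slice in HU
```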

Compared with other deep learning-based methods evaluated on prostate/pelvic data, 22,24,25,38 our results achieved a relatively smaller MAE, partly because we adopted extensive data augmentation by applying artificially created deformations to the training subjects to increase the number of training samples. Such a strategy increases the model robustness with respect to soft tissue organ deformations and alleviates model overfitting. 28,39 Without data augmentation, the (mean ± SD) MAE of the S-CT images for the 15 tested patients increased to (… ± …) HU. In addition, we did not apply bias field correction or MR intensity histogram normalization. Instead, we created a large number of samples and let the network learn the inherent MR image intensity variations. Therefore, the CT number estimation could be more robust to the intensity inhomogeneity of a newly acquired MR image.

The dosimetric accuracy of clinical prostate IMRT planning using the U-net-generated S-CT images was high and superior to that obtained using the S-CT images generated by the multi-atlas method (Tables II and III). However, we acknowledge that the accuracy of the S-CT images could be further improved by using more advanced network architectures, such as extending the network to a three-dimensional U-net 40 when more computational resources are available, or by including adversarial training to generate more realistic, less blurry images. 38 The bottleneck of the U-net was the misalignment between the MR and paired CT images of soft tissue organs, even though the images were acquired in the same position on the same day and CT-MR image registration was applied. Therefore, using a cycle GAN trained with unpaired samples 30,40 could potentially improve the dosimetric accuracy. Figure 4 shows the dosimetric results of the patient with the worst CT number estimation. Dose grid points that failed the 1%/1 mm gamma criterion were located in the beam entrance regions, near the skin, and outside the regions of interest. Most of them passed the 3%/3 mm gamma criterion and therefore should not have a substantial clinical impact.

Another weakness of deep learning models is the requirement for a large amount of training data, and the models may be sensitive to variations among MRI machines and imaging procedures/acquisitions. However, this could be partially addressed by transfer learning. 41 Specifically, a model trained with samples acquired on one machine can be used as a pretrained model for another machine. The pretrained model can then be retrained or reoptimized using a much smaller number of training samples acquired on the new machine.
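
A minimal sketch of this fine-tuning strategy is shown below, assuming the illustrative build_unet helper from Sec. 2.B; the weight file name, learning rate, epoch count, and array names are assumptions.

```python
from tensorflow.keras.optimizers import Adam

# Hypothetical fine-tuning of a previously trained U-net on data from a new MRI scanner.
model = build_unet(input_shape=(256, 256, 1))      # illustrative helper from Sec. 2.B
model.load_weights("unet_scanner_A.h5")            # weights from the original training run (assumed file)

# Re-optimize with a smaller learning rate on the much smaller new-scanner dataset.
# new_mr_slices, new_ct_slices: the small set of paired slices acquired on the new machine.
model.compile(optimizer=Adam(learning_rate=1e-4), loss="mean_absolute_error")
model.fit(new_mr_slices, new_ct_slices, batch_size=5, epochs=10)
```
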
In summary, the U-net architecture can be trained from scratch using a limited number of paired MR-CT images. It can generate accurate S-CT volume images from conventional MR sequence images fully automatically within seconds. Using the U-net-generated S-CT images for prostate IMRT dose calculation achieved a 1%/1 mm gamma pass rate over 98.03% within body. Further evaluation on a larger patient cohort is warranted.

ACKNOWLEDGMENTS

This work was partially supported by an Elekta R&D grant and a research planning software license provided by Philips.

a) Author to whom correspondence should be addressed. Electronic mail: shupeng.chen@beaumont.org.

REFERENCES

1. Degen J, Heinrich MP. Multi-atlas based pseudo-CT synthesis using multimodal image registration and local atlas fusion strategies. In: CVPR; 2016.
2. Largent A, Nunes J, Saint-Jalmes H, et al. Pseudo-CT generation by conditional inference random forest for MRI-based radiotherapy treatment planning. In: EUSIPCO; 2017.
3. Chen S, Quan H, Qin A, Yee S, Yan D. MR image-based synthetic CT for IMRT prostate treatment planning and CBCT image-guided localization. J Appl Clin Med Phys. 2016;17.
4. Bredfeldt JS, Liu L, Feng M, Cao Y. Synthetic CT for MRI-based liver stereotactic body radiotherapy treatment planning. Phys Med Biol. 2017;62.
5. Hsu S-H, Cao Y, Huang K, Feng M, Balter JM. Investigation of a method for generating synthetic CT models from MRI scans of the head and neck for radiation therapy. Phys Med Biol. 2013;58.
6. Johansson A, Karlsson M, Nyholm T. CT substitute derived from MRI sequences with ultrashort echo time. Med Phys. 2011;38.
7. Kapanen M, Tenhunen M. T1/T2*-weighted MRI provides clinically relevant pseudo-CT density data for the pelvic bones in MRI-only based radiotherapy treatment planning. Acta Oncol. 2013;52.
8. Korhonen J, Kapanen M, Keyriläinen J, Seppälä T, Tenhunen M. A dual model HU conversion from MRI intensity values within and outside of bone segment for MRI-based radiotherapy treatment planning of prostate cancer. Med Phys. 2014;41.
9. Edmund JM, Kjer HM, Van Leemput K, Hansen RH, Andersen JAL, Andreasen D. A voxel-based investigation for MRI-only radiotherapy of the brain using ultra short echo times. Phys Med Biol. 2014;59.
10. Zheng W, Kim JP, Kadbi M, Movsas B, Chetty IJ, Glide-Hurst CK. Magnetic resonance-based automatic air segmentation for generation of synthetic computed tomography scans in the head region. Int J Radiat Oncol Biol Phys. 2015;93.
11. Kim J, Glide-Hurst C, Doemer A, Wen N, Movsas B, Chetty IJ. Implementation of a novel algorithm for generating synthetic CT images from magnetic resonance imaging data sets for prostate cancer radiation therapy. Int J Radiat Oncol Biol Phys. 2015;91.
12. Dowling JA, Lambert J, Parker J, et al. An atlas-based electron density mapping method for magnetic resonance imaging (MRI)-alone treatment planning and adaptive MRI-based prostate radiation therapy. Int J Radiat Oncol Biol Phys. 2012;83:e5.
13. Uh J, Merchant TE, Li Y, Li X, Hua C. MRI-based treatment planning with pseudo CT generated through atlas registration. Med Phys. 2014;41.
14. Siversson C, Nordström F, Nilsson T, et al. Technical Note: MRI only prostate radiotherapy planning using the statistical decomposition algorithm. Med Phys. 2015;42.
15. Dowling JA, Sun J, Pichler P, et al. Automatic substitute computed tomography generation and contouring for magnetic resonance imaging (MRI)-alone external beam radiation therapy from standard MRI sequences. Int J Radiat Oncol Biol Phys. 2015;93.
16. Farjam R, Tyagi N, Veeraraghavan H, et al. Multiatlas approach with local registration goodness weighting for MRI-based electron density mapping of head and neck anatomy. Med Phys. 2017;44.
17. Ren S, Hara W, Wang L, et al. Robust estimation of electron density from anatomic magnetic resonance imaging of the brain using a unifying multi-atlas approach. Int J Radiat Oncol Biol Phys. 2017;97.
18. Huynh T, Gao Y, Kang J, et al. Estimating CT image from MRI data using structured random forest and auto-context model. IEEE Trans Med Imaging. 2016;35.
19. Andreasen D, Edmund JM, Zografos V, Menze BH, Van Leemput K. Computed tomography synthesis from magnetic resonance images in the pelvis using multiple random forests and auto-context features. In: SPIE; 2016.
20. Zhao C, Carass A, Lee J, Jog A, Prince JL. A supervoxel based random forest synthesis framework for bidirectional MR/CT synthesis. In: SASHIMI; 2017:33–40.

21. Han X. MR-based synthetic CT generation using a deep convolutional neural network method. Med Phys. 2017;44.
22. Ravishankar H, Sudhakar P, Venkataramani R, et al. Estimating CT image from MRI data using 3D fully convolutional networks. In: LABELS; 2016.
23. Roy S, Butman JA, Pham DL. Synthesizing CT from ultrashort echo-time MR images via convolutional neural networks. In: SASHIMI; 2017.
24. Xiang L, Wang Q, Nie D, et al. Deep embedding convolutional neural network for synthesizing CT image from T1-weighted MR image. Med Image Anal. 2018;47.
25. Maspero M, Savenije MHF, Dinkla AM, et al. Dose evaluation of fast synthetic-CT generation using a generative adversarial network for general pelvis MR-only radiotherapy. Phys Med Biol. 2018;63.
26. Emami H, Dong M. Generating synthetic CTs from magnetic resonance images using generative adversarial networks. Med Phys. 2018;45.
27. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. In: ICLR; 2015.
28. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: MICCAI; 2015.
29. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. In: NIPS; 2014.
30. Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, van den Berg CAT, Išgum I. Deep MR to CT synthesis using unpaired data. In: MICCAI; 2017.
31. Ioffe S, Szegedy C. Batch normalization: accelerating deep network training by reducing internal covariate shift. In: ICML; 2015.
32. Chollet F. Keras: deep learning library for Theano and TensorFlow. https://keras.io/
33. Abadi M, Barham P, Chen J, et al. TensorFlow: a system for large-scale machine learning. In: OSDI; 2016.
34. Li M, Zhang T, Chen Y, Smola AJ. Efficient mini-batch training for stochastic optimization. In: SIGKDD; 2014.
35. Kingma DP, Ba JL. Adam: a method for stochastic optimization. In: ICLR; 2015.
36. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. JMLR. 2014;15.
37. Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25.
38. Nie D, Trullo R, Lian J, et al. Medical image synthesis with context-aware generative adversarial networks. IEEE Trans Biomed Eng. 2017.
39. Manca G, Vanzi E, Rubello D, et al. 18F-FDG PET/CT quantification in head and neck squamous cell cancer: principles, technical issues and clinical applications. Eur J Nucl Med Mol Imaging. 2016;43.
40. Çiçek Ö, Abdulkadir A, Lienkamp SS, Brox T, Ronneberger O. 3D U-net: learning dense volumetric segmentation from sparse annotation. In: MICCAI; 2016.
41. Zhu J, Park T, Isola P, Efros AA. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: ICCV; 2017.
42. Yang Q. A survey on transfer learning. IEEE Trans Knowl Data Eng. 2010;1:1–15.


Motion artifact detection in four-dimensional computed tomography images Motion artifact detection in four-dimensional computed tomography images G Bouilhol 1,, M Ayadi, R Pinho, S Rit 1, and D Sarrut 1, 1 University of Lyon, CREATIS; CNRS UMR 5; Inserm U144; INSA-Lyon; University

More information

Image Co-Registration II: TG132 Quality Assurance for Image Registration. Image Co-Registration II: TG132 Quality Assurance for Image Registration

Image Co-Registration II: TG132 Quality Assurance for Image Registration. Image Co-Registration II: TG132 Quality Assurance for Image Registration Image Co-Registration II: TG132 Quality Assurance for Image Registration Preliminary Recommendations from TG 132* Kristy Brock, Sasa Mutic, Todd McNutt, Hua Li, and Marc Kessler *Recommendations are NOT

More information

Shading correction algorithm for cone-beam CT in radiotherapy: Extensive clinical validation of image quality improvement

Shading correction algorithm for cone-beam CT in radiotherapy: Extensive clinical validation of image quality improvement Shading correction algorithm for cone-beam CT in radiotherapy: Extensive clinical validation of image quality improvement K D Joshi a, T E Marchant b, a, and C J Moore b, a a Christie Medical Physics and

More information

Iterative fully convolutional neural networks for automatic vertebra segmentation

Iterative fully convolutional neural networks for automatic vertebra segmentation Iterative fully convolutional neural networks for automatic vertebra segmentation Nikolas Lessmann Image Sciences Institute University Medical Center Utrecht Pim A. de Jong Department of Radiology University

More information

Deep Learning. Vladimir Golkov Technical University of Munich Computer Vision Group

Deep Learning. Vladimir Golkov Technical University of Munich Computer Vision Group Deep Learning Vladimir Golkov Technical University of Munich Computer Vision Group 1D Input, 1D Output target input 2 2D Input, 1D Output: Data Distribution Complexity Imagine many dimensions (data occupies

More information

HENet: A Highly Efficient Convolutional Neural. Networks Optimized for Accuracy, Speed and Storage

HENet: A Highly Efficient Convolutional Neural. Networks Optimized for Accuracy, Speed and Storage HENet: A Highly Efficient Convolutional Neural Networks Optimized for Accuracy, Speed and Storage Qiuyu Zhu Shanghai University zhuqiuyu@staff.shu.edu.cn Ruixin Zhang Shanghai University chriszhang96@shu.edu.cn

More information

Position accuracy analysis of the stereotactic reference defined by the CBCT on Leksell Gamma Knife Icon

Position accuracy analysis of the stereotactic reference defined by the CBCT on Leksell Gamma Knife Icon Position accuracy analysis of the stereotactic reference defined by the CBCT on Leksell Gamma Knife Icon WHITE PAPER Introduction An image guidance system based on Cone Beam CT (CBCT) is included in Leksell

More information

arxiv: v1 [cs.lg] 6 Jan 2019

arxiv: v1 [cs.lg] 6 Jan 2019 Efforts estimation of doctors annotating medical image Yang Deng 1,2, Yao Sun 1,2#, Yongpei Zhu 1,2, Yue Xu 1,2, Qianxi Yang 1,2, Shuo Zhang 2, Mingwang Zhu 3, Jirang Sun 3, Weiling Zhao 4, Xiaobo Zhou

More information

The MSKCC Approach to IMRT. Outline

The MSKCC Approach to IMRT. Outline The MSKCC Approach to IMRT Spiridon V. Spirou, PhD Department of Medical Physics Memorial Sloan-Kettering Cancer Center New York, NY Outline Optimization Field splitting Delivery Independent verification

More information

Whole Body MRI Intensity Standardization

Whole Body MRI Intensity Standardization Whole Body MRI Intensity Standardization Florian Jäger 1, László Nyúl 1, Bernd Frericks 2, Frank Wacker 2 and Joachim Hornegger 1 1 Institute of Pattern Recognition, University of Erlangen, {jaeger,nyul,hornegger}@informatik.uni-erlangen.de

More information

7/31/2011. Learning Objective. Video Positioning. 3D Surface Imaging by VisionRT

7/31/2011. Learning Objective. Video Positioning. 3D Surface Imaging by VisionRT CLINICAL COMMISSIONING AND ACCEPTANCE TESTING OF A 3D SURFACE MATCHING SYSTEM Hania Al-Hallaq, Ph.D. Assistant Professor Radiation Oncology The University of Chicago Learning Objective Describe acceptance

More information

arxiv: v2 [cs.cv] 25 Apr 2018

arxiv: v2 [cs.cv] 25 Apr 2018 Fork me on GitHub TOMAAT: volumetric medical image analysis as a cloud service Fausto Milletari 1, Johann Frei 2, Seyed-Ahmad Ahmadi 3 arxiv:1803.06784v2 [cs.cv] 25 Apr 2018 1 NVIDIA 2 Technische Universität

More information

arxiv: v1 [cs.lg] 12 Jul 2018

arxiv: v1 [cs.lg] 12 Jul 2018 arxiv:1807.04585v1 [cs.lg] 12 Jul 2018 Deep Learning for Imbalance Data Classification using Class Expert Generative Adversarial Network Fanny a, Tjeng Wawan Cenggoro a,b a Computer Science Department,

More information

Brilliance CT Big Bore.

Brilliance CT Big Bore. 1 2 2 There are two methods of RCCT acquisition in widespread clinical use: cine axial and helical. In RCCT with cine axial acquisition, repeat CT images are taken each couch position while recording respiration.

More information

Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation

Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation Weakly Supervised Fully Convolutional Network for PET Lesion Segmentation S. Afshari a, A. BenTaieb a, Z. Mirikharaji a, and G. Hamarneh a a Medical Image Analysis Lab, School of Computing Science, Simon

More information

Monte Carlo methods in proton beam radiation therapy. Harald Paganetti

Monte Carlo methods in proton beam radiation therapy. Harald Paganetti Monte Carlo methods in proton beam radiation therapy Harald Paganetti Introduction: Proton Physics Electromagnetic energy loss of protons Distal distribution Dose [%] 120 100 80 60 40 p e p Ionization

More information

Ch. 4 Physical Principles of CT

Ch. 4 Physical Principles of CT Ch. 4 Physical Principles of CT CLRS 408: Intro to CT Department of Radiation Sciences Review: Why CT? Solution for radiography/tomography limitations Superimposition of structures Distinguishing between

More information

arxiv: v2 [cs.cv] 3 Sep 2018

arxiv: v2 [cs.cv] 3 Sep 2018 Deep to MR Synthesis using Paired and Unpaired Data Cheng-Bin Jin School of Information and Communication Engineering, INHA University, Incheon 2222, South Korea Hakil Kim arxiv:805.0790v2 [cs.cv] 3 Sep

More information

Comprehensive treatment planning for brachytherapy. Advanced planning made easy

Comprehensive treatment planning for brachytherapy. Advanced planning made easy Comprehensive treatment planning for brachytherapy Advanced planning made easy Oncentra Brachy offers a variety of smart tools that facilitate many of the repetitive tasks for you. In contemporary brachytherapy,

More information

Adaptive Local Multi-Atlas Segmentation: Application to Heart Segmentation in Chest CT Scans

Adaptive Local Multi-Atlas Segmentation: Application to Heart Segmentation in Chest CT Scans Adaptive Local Multi-Atlas Segmentation: Application to Heart Segmentation in Chest CT Scans Eva M. van Rikxoort, Ivana Isgum, Marius Staring, Stefan Klein and Bram van Ginneken Image Sciences Institute,

More information