Depth image super-resolution via multi-frame registration and deep learning


Depth image super-resolution via multi-frame registration and deep learning

Ching Wei Tseng 1, Hong-Ren Su 1, Shang-Hong Lai 1 * and JenChi Liu 2
1 National Tsing Hua University, Hsinchu, Taiwan
2 Intelligent Vision System Division, Electronic and Optoelectronic System Research Lab, Industrial Technology Research Institute, Taiwan
* lai@cs.nthu.edu.tw

Abstract

In this paper, we develop an algorithm for depth image super-resolution from RGB-D images acquired under different imaging conditions, so that they can be combined through precise 3D registration to improve image quality. We focus on how to increase the resolution and quality of depth images by combining multiple RGB-D images and using a deep learning technique. In the proposed solution, we fuse multiple RGB-D images through a 3D alignment computed from 3D feature point correspondences, apply the guided filter, and feed the result to SRCNN to obtain the up-sampled depth images. Experimental results on public-domain RGB-D datasets show that the proposed algorithm improves the quality of the up-sampled depth maps over traditional methods.

I. INTRODUCTION

RGB-D images are becoming increasingly common and have been applied in many fields, such as entertainment, factory automation, robotics, and human-computer interaction. A recent application is robot vision for navigation and manipulation guidance; autonomous driving, for example, depends heavily on precise distance estimation and object identification. High accuracy is therefore a key factor for several applications to be usable in practice. However, RGB-D images still suffer from several problems, including heavy noise in the depth image and missing depth information near object boundaries, which degrade their quality. A high-quality, high-precision RGB-D image, on the other hand, is very helpful for automated analysis in computer vision, such as object recognition or pose estimation. Hence, super-resolution is important for RGB-D imaging applications.

Image super-resolution is an important topic in computer vision and has been studied for decades. Freeman et al. [1] and Xue et al. [2] formulated super-resolution as a multi-class MRF model whose hidden layer represents the high-resolution information, and proposed a cost function to estimate it. Yang et al. [3] and Zeyde et al. [4] proposed super-resolution based on dictionaries and sparse representation. Yang et al. [5] used a novel self-learning super-resolution method based on support vector regression (SVR). Dong et al. [6] proposed single-image super-resolution based on a convolutional network (SRCNN); large, high-resolution, high-quality images are needed for training to reduce blurry or ringing artifacts. Zontak et al. [7] and Glasner et al. [8] exploited self-similarity to increase image resolution: these approaches search for similar image patches across different downscaled versions of the same image to reconstruct high-resolution patches. Sun et al. [9] and Tai et al. [10] used multi-scale gradient information with a tensor-voting strategy to reconstruct the high-resolution image.

Fig. 1 Illustration of our proposed depth super-resolution method

Most super-resolution work has focused on color images. For RGB-D images, super-resolution of the depth image is a separate issue that matters for accurate 3D information and differs from the color case; hence we focus on the super-resolution of depth images. A direct idea is to fuse multiple low-resolution depth images into one high-resolution depth image [11]. However, depth images contain many broken areas and heavy noise, which makes multi-frame depth alignment error-prone and degrades the high-resolution result. A feasible remedy [11] is to preprocess the depth image to alleviate the noise and missing-data problems, but accurate multi-frame alignment remains the key issue for improving the quality of the high-resolution depth image, and the depth image alone does not provide enough information for precise alignment. Another idea [12][13] is to use the color image for depth alignment and its edge information for predicting high-resolution depth pixels. A nonlocal means (NLM) filter [14] was also proposed to reconstruct the high-resolution depth image. The above depth super-resolution methods are based on multiple images; few papers have addressed single-image depth super-resolution. Methods based on self-similarity and sparse representation [15][16][17][18] have been proposed for single depth images, but they still suffer from blurred edges and artifacts.

In this work, we propose a depth super-resolution method that merges multi-frame registration with depth refinement algorithms, namely guided image filtering [19] and the SRCNN technique [6]. Depth images usually have high noise and missing-data rates, so we first repair broken areas and reduce noise by fusing multiple frames. Guided image filtering is then applied to refine the repaired depth image based on the color image. The SRCNN framework [6] was originally proposed for super-resolution of color images; in the proposed algorithm, we extend it to enhance the depth image and finally obtain a high-resolution depth image. Fig. 1 depicts the overall flow of the proposed depth super-resolution algorithm.

Fig. 2 System flow of Depth Repair

II. PROPOSED METHOD

In this section, we first describe how to reconstruct a depth image by utilizing multiple RGB-D images captured at different locations. We then show how to refine the reconstructed depth image with guided image filtering and a learning-based method, and finally recover a high-resolution depth image.

A. Depth Repair

Depth data acquired from a depth sensor often contain heavy noise and missing values around object boundaries, especially for sensors based on structured light. To reconstruct a high-quality depth image from these low-quality depth images, we apply a fast 3D alignment to fuse multiple depth images captured at different locations in a time series. First, the color and depth images captured at time t are picked as the reference out of several consecutive frames acquired from an RGB-D sensor. Then a sequence of procedures is applied to align the neighboring depth images to the reference. The SURF feature detector [20] is used to extract features from every color image, and we match the features of the other color images to the reference using Euclidean distance as the matching metric, with a search range of 60% of the image size.
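As an illustration of this matching step, the sketch below detects SURF keypoints in the reference and a neighboring color frame and matches their descriptors by Euclidean (L2) distance. It assumes OpenCV built with the contrib modules (cv2.xfeatures2d); the Hessian threshold and the interpretation of the 60% search range as a pixel-displacement check are our assumptions, not the authors' exact implementation, and the crossCheck flag only approximates the duplicate-match removal discussed next.

```python
# Sketch of SURF feature matching between the reference and a neighboring
# color frame (requires opencv-contrib-python for cv2.xfeatures2d).
import cv2
import numpy as np

def match_surf(ref_gray, other_gray, search_frac=0.6):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_ref, des_ref = surf.detectAndCompute(ref_gray, None)
    kp_oth, des_oth = surf.detectAndCompute(other_gray, None)

    # Brute-force matching with L2 (Euclidean) distance; crossCheck keeps only
    # mutual best matches, which drops most duplicated correspondences.
    bf = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = bf.match(des_ref, des_oth)

    # Keep matches whose image displacement is within 60% of the image size
    # (our reading of the paper's search-range restriction).
    h, w = ref_gray.shape[:2]
    max_disp = search_frac * np.hypot(h, w)
    good = []
    for m in matches:
        p1 = np.array(kp_ref[m.queryIdx].pt)
        p2 = np.array(kp_oth[m.trainIdx].pt)
        if np.linalg.norm(p1 - p2) <= max_disp:
            good.append((p1, p2))
    return good
```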
However, the matching result may contain many incorrect feature correspondences between images. Therefore, we apply some techniques to ensure correct matches. For instance, duplicated matches, in which one feature point matches to multiple feature points, are eliminated. We align all the other RGB-D images to the reference image through a 3D similarity alignment based on 3D feature points. To acquire the 3D feature points, we extract 2D SURF feature points from the color image together with their corresponding depth values, since the color and depth image coordinates are aligned. We then back-project the 2D feature points with their depth values through the perspective projection model to obtain 3D feature points in the camera coordinate system. The 3D transformation includes scaling, rotation and translation, i.e. a 3D similarity transformation, and is written as:

p' = s R p + t    (1)

where p denotes the 3D coordinates of a feature point from the original 3D point cloud and p' the aligned coordinates; s, R and t denote the scaling factor, rotation matrix and translation vector, respectively. Eq. (1) can also be written in matrix form:

[x'; y'; z'] = s [R11 R12 R13; R21 R22 R23; R31 R32 R33] [x; y; z] + [tx; ty; tz]    (2)

In our formulation, both p and p' are known 3D feature points acquired at different locations, and our main goal is to estimate the transformation matrix. We therefore rewrite eq. (2) as a linear system

A M = b    (3)

where A is built from the coordinates of p, b stacks the coordinates of p', and the vector M collects all the transformation parameters (the entries of sR and the translation t). Since M contains all the transformation parameters, it can be computed by the least-squares solution:

M = (A^T A)^(-1) A^T b    (4)

By minimizing the distances between the reference and the other 3D feature points, the 3D similarity transformation can thus be determined. Despite removing all the duplicated matching feature points, some incorrect matches may still exist. Thus, we apply the clique RANSAC algorithm [22] both to improve the matching results and to accelerate outlier removal. Different from estimating a 2D affine transformation with RANSAC on image feature points, the clique RANSAC algorithm here uses the 3D similarity transform to generate the best inliers and the best 3D transformation from the 3D feature points. Once the 3D transformations between images are calculated, we can align all the 3D points re-projected from the RGB-D images to the reference frame. Finally, we project the fused point cloud back under orthogonal projection. However, many points still do not fall on grid points. In this situation, we spread the value of every point to its four neighboring grid points, weighted bilinearly by the coordinate distances to them. Furthermore, one grid point may receive spread values from many points in the fused point cloud, so a weighted average is applied at each grid point to combine the multiple spread values. As shown in Fig. 3, we call this operation forward bilinear mapping.

Fig. 3 Forward Bilinear Mapping

A repaired depth map can now be generated by projecting the combined 3D points back to the image plane. The repaired depth map may still contain some missing-depth regions, so we simply apply a flood-fill operation to fill them.

Fig. 4 Proposed deep learning model of revised SRCNN for depth image super-resolution.
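Returning to the alignment step, the sketch below is a minimal rendering of the linear least-squares estimation in eqs. (2)-(4): each 3D correspondence (p, p') contributes three rows to A and b, the entries of sR and t are stacked into a 12-vector M, and M is recovered by least squares. The function name and layout are our reading of the formulation above, not the authors' code; in practice this solve would be wrapped inside the clique RANSAC loop.

```python
# Sketch: estimate the 3D similarity transform (sR, t) of eqs. (1)-(4)
# from matched 3D feature points by linear least squares.
import numpy as np

def estimate_similarity(P, P_prime):
    """P, P_prime: (N, 3) arrays of corresponding 3D points (N >= 4)."""
    N = P.shape[0]
    A = np.zeros((3 * N, 12))
    b = P_prime.reshape(-1)
    for i, (x, y, z) in enumerate(P):
        # Row for x': [x y z 0 0 0 0 0 0 1 0 0] . M, and similarly for y', z'.
        A[3 * i,     0:3] = (x, y, z); A[3 * i,     9]  = 1.0
        A[3 * i + 1, 3:6] = (x, y, z); A[3 * i + 1, 10] = 1.0
        A[3 * i + 2, 6:9] = (x, y, z); A[3 * i + 2, 11] = 1.0
    # M = (A^T A)^(-1) A^T b, eq. (4); lstsq is the numerically stable form.
    M, *_ = np.linalg.lstsq(A, b, rcond=None)
    sR = M[:9].reshape(3, 3)   # scaled rotation
    t = M[9:]                  # translation
    return sR, t

# Applying it: p_aligned = sR @ p + t for every 3D point of the other frame.
```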

B. Depth Super-Resolution

We propose a depth super-resolution method that combines guided image filtering [19] and SRCNN [6]. The combination of these two methods takes advantage of the corresponding color image as well as high-quality depth priors learned from a large set of high-resolution training images. We describe our algorithm in the following.

Guided Image Upsampling. Since we have the reference color image and its corresponding repaired depth image, the color image can be used as the guidance image to filter the repaired depth image, after both the color and the repaired depth image have been bilinearly upsampled. Guided image filtering [19] provides an edge-preserving effect on the depth image. However, the guidance image, which is upsampled from the low-resolution color image, can only provide limited improvement. Besides taking the low-resolution color image as guidance, a learning-based method is therefore applied to refine the guided upsampling result.

Revised SRCNN. Following the deep-learning approach to image super-resolution, SRCNN [6] is trained on a large number of color images and generates learned filters for the super-resolution operation. In fact, the learned filters can to some extent be treated as guided filters whose coefficients are learned from the training color images, which thus play the role of guidance images; SRCNN can therefore be interpreted as massive guided image filtering applied to the depth super-resolution problem. In the SRCNN framework [6], sub-images are cropped from the luminance channel of color images with a sliding window to generate numerous samples, because the content variation within a color image can be large and provides sufficient information for the training stage. Training inputs are obtained by blurring the sub-images with a Gaussian kernel, downsampling them by the upscale factor, and upsampling them back by the same factor with bicubic interpolation. We use the same training strategy as [6] to generate our training images. Similarly to [6], we upscale the single depth image to the desired size by bicubic upsampling before the learning process. Whereas the original model in [6] used three convolutional layers with Rectified Linear Units (ReLU) after the first two, we propose an end-to-end deep network with four convolutional layers and ReLU activations, as depicted in Fig. 4, and choose x4 as the upscale factor in training. This setting strengthens the non-linearity of the mapping from the low-resolution to the high-resolution image; similar ideas are presented in [23][24]. The filter numbers are also increased to ensure that useful filters can be learned. Let Y be the bicubic-upsampled depth image. The operation of each layer is:

F_i(Y) = max(0, W_i * F_{i-1}(Y) + B_i),  i = 1, 2, 3    (5)

where F_i is the mapping function of the i-th layer and F_0(Y) = Y. W_i and B_i denote the filters and biases, respectively, and * denotes the convolution operation. The last layer, used for reconstruction, is:

F(Y) = W_4 * F_3(Y) + B_4    (6)

In our network setting, the filter sizes are (f_1 = 7, f_2 = f_3 = 1, f_4 = 5) and the filter numbers are (n_1 = n_2 = n_3 = 64, n_4 = 1). With these parameters, we use the Mean Squared Error (MSE) as the loss function to minimize the difference between the reconstructed image F(Y_i; Θ) and the corresponding ground-truth image X_i:

L(Θ) = (1/n) Σ_i || F(Y_i; Θ) - X_i ||^2    (7)

where Θ denotes the network parameters {W_i, B_i} and n is the number of training samples. After the depth repair and depth super-resolution processes, a high-quality depth image is obtained.
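The sketch below is a minimal PyTorch rendering of the four-layer network described above (filter sizes 7-1-1-5, filter counts 64-64-64-1, ReLU after the first three layers, MSE loss on the bicubic-upsampled depth input). The original model was trained in Caffe, so this is an assumed re-implementation rather than the authors' code, and the padding and learning rate are illustrative choices.

```python
# Sketch of the revised SRCNN (four conv layers, ReLU after the first three),
# assuming a PyTorch re-implementation of the Caffe model described above.
import torch
import torch.nn as nn

class RevisedSRCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=1),           nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=1),           nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=5, padding=2),  # linear reconstruction layer, eq. (6)
        )

    def forward(self, y):
        # y: bicubic-upsampled depth image, shape (batch, 1, H, W)
        return self.net(y)

model = RevisedSRCNN()
criterion = nn.MSELoss()                      # eq. (7)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

def train_step(y_lowres_up, x_groundtruth):
    optimizer.zero_grad()
    loss = criterion(model(y_lowres_up), x_groundtruth)
    loss.backward()
    optimizer.step()
    return loss.item()
```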
The following experiments compare the model's performance to the original framework of [6] and evaluate our method against other SR methods on different datasets.

III. EXPERIMENTAL RESULTS

A. Implementation Details

We develop the depth repair process in Matlab on a PC equipped with an AMD Phenom II X4 965 processor running at 3.4 GHz with 16 GB of memory. Our deep learning model is trained with the Caffe platform [25] in GPU mode, and 91 high-quality depth images are used to train our modified SRCNN model. All other algorithms are evaluated under the same machine configuration.

B. Simulation of real depth images

As mentioned before, depth images acquired from a depth sensor often contain missing depth data around object boundaries due to the limitations of IR emission and sensing. Therefore, to evaluate the depth quality improvement, we simulate degraded depth images from the ground truth by manually cropping some regions of an image as missing-depth regions, with high-frequency depth regions cropped with higher probability. The percentage of depth loss in a depth image is set in the range of 4-14% of the whole image. The simulated depth images are processed by the other methods in the following evaluation.

C. Evaluation

The following experiments contain an overall comparison of different methods: nearest neighbor, bicubic, bilinear, Guided Image Upsampling, SRCNN, and our proposed method, which combines Guided Image Upsampling and the revised SRCNN. The testing datasets, ICL-NUIM [26] and MPI-Sintel [27], are used to evaluate the performance at upscaling factors 2, 3 and 4. All methods are compared on four different scenes.
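The simulated degradation described in Section III-B can be reproduced, for example, with the following sketch: random square regions are dropped, with higher sampling probability where the depth gradient is large, until the target fraction of pixels is removed. The paper describes the cropping as manual, so the function and parameter names here (simulate_depth_loss, target_frac, patch) are purely illustrative assumptions.

```python
# Sketch: simulate missing-depth regions covering roughly `target_frac` of the
# image, preferring high-gradient (high-frequency) areas. The paper's regions
# were cropped manually; this is only an assumed approximation.
import numpy as np

def simulate_depth_loss(depth, target_frac=0.05, patch=15, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    gy, gx = np.gradient(depth.astype(np.float64))
    weight = np.abs(gx) + np.abs(gy) + 1e-3        # favor high-frequency areas
    prob = (weight / weight.sum()).ravel()

    mask = np.zeros(depth.shape, dtype=bool)
    h, w = depth.shape
    while mask.mean() < target_frac:
        idx = rng.choice(prob.size, p=prob)        # sample a seed pixel
        r, c = divmod(idx, w)
        r0, c0 = max(0, r - patch // 2), max(0, c - patch // 2)
        mask[r0:r0 + patch, c0:c0 + patch] = True  # drop a square around it

    corrupted = depth.copy()
    corrupted[mask] = 0                            # 0 marks missing depth
    return corrupted, mask
```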

TABLE I. Quantitative results on the ICL-NUIM (living room, office) and MPI-Sintel (mountain, sleeping) scenes for upscale factors x2, x3 and x4, comparing nearest neighbor, bicubic, bilinear, Guided Image Upsampling, SRCNN, and Guided Image Upsampling + revised SRCNN; the last row of each upscale factor reports our improvement, and the bold numbers denote the best performance.

Moreover, we compare the performance of all these methods under different depth-loss percentages. In our experiments, all methods are also applied to Kinect raw depth images. Apart from the super-resolution methods above, the other methods (nearest neighbor, bicubic and bilinear upsampling) handle the depth-loss situation only by a flood-fill operation. In the depth repair algorithm, we choose three consecutive depth frames and align them all in the process. Table I shows the quantitative results of our method in comparison to the other methods under both evaluation metrics, and the performance gain of our proposed method over the best of the other methods is shown in the last row of every upscale factor. The comparison uses simulated depth images with 5% depth loss relative to the whole image. The combination of Guided Image Upsampling and SRCNN outperforms the other methods on both metrics as the upscale factor grows. We observe that SRCNN alone performs worse than the other methods in some situations: SRCNN still sharpens the high-frequency parts of the image when misalignment occurs, which produces jagged edges in the repaired depth image. Consequently, Guided Image Upsampling is needed first to smooth the jagged parts, and the SRCNN is then applied to sharpen the image and obtain a high-quality depth image. Fig. 5 shows the depth super-resolution results. Fig. 6 shows the evaluation of the different methods on the living room scene with upscaling factor 4, with the simulated depth-loss region ranging from 4% to 14% of the entire image. The results show that our proposed method still outperforms the other methods even when the depth loss increases.
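As a concrete illustration of the ordering discussed above (guided filtering first, then the learned network), the sketch below applies guided image filtering to the bicubic-upsampled repaired depth map, using the upsampled color frame as the guide, and only then runs the learned model. It assumes OpenCV's contrib guidedFilter (cv2.ximgproc) and the RevisedSRCNN sketch from Section II.B; the radius and eps values are illustrative, not the paper's settings.

```python
# Sketch of the combined upsampling pipeline: bicubic upscale, guided image
# filtering [19] with the color frame as guide, then the revised SRCNN.
# Requires opencv-contrib-python (cv2.ximgproc); parameters are illustrative.
import cv2
import numpy as np
import torch

def upsample_depth(depth_lr, color_lr, model, scale=4, radius=8, eps=1e-3):
    h, w = depth_lr.shape
    depth_up = cv2.resize(depth_lr.astype(np.float32), (w * scale, h * scale),
                          interpolation=cv2.INTER_CUBIC)
    guide_up = cv2.resize(color_lr, (w * scale, h * scale),
                          interpolation=cv2.INTER_LINEAR)

    # Edge-preserving smoothing of the repaired depth, guided by the color image.
    depth_gf = cv2.ximgproc.guidedFilter(guide_up, depth_up, radius, eps)

    # Sharpen the high-frequency part with the learned network.
    with torch.no_grad():
        x = torch.from_numpy(depth_gf)[None, None]    # shape (1, 1, H, W)
        depth_sr = model(x).squeeze().numpy()
    return depth_sr
```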

Fig. 5 Comparison of Guided Image Upsampling, SRCNN and the combined method on the living room scene with upscaling factor 4.

Fig. 6 Comparison of the evaluation metrics for the different methods under different depth-loss percentages (%).

Additionally, the results also show that the original SRCNN framework cannot achieve better scores than Guided Image Upsampling and the proposed method, because guided image filtering [19] refines the flaws contained in the repaired depth image and smooths the jagged edges; once the flaws are smoothed and refined, the revised SRCNN can be used to enhance the high-frequency parts. We also test all the methods on Kinect raw depth data. Due to the lack of ground-truth depth for quantitative comparison, we show an example comparing all the methods with downsampled raw Kinect depth as input and the upscale factor set to 4. Fig. 7 shows the comparison results. Guided Image Upsampling may still blur the depth edges because it takes the LR color image as guidance; SRCNN, on the other hand, sharpens the edges too much, which emphasizes the jagged parts in the depth result. The proposed method smooths the jagged edges and then enhances them. In the future, the real depth image quality could be further improved with better alignment that fuses multiple images more accurately.

IV. CONCLUSIONS

In this paper, we proposed a depth super-resolution method that utilizes multiple depth images, guided color images and learned filters from the revised SRCNN.

Fig. 7 Comparing different depth upsampling methods on a Kinect raw depth image.

All of these procedures are combined to improve the depth super-resolution results for different scenes. Recently, deep learning methods have been widely used for super-resolution and different kinds of network structures are being developed; in particular, very deep network structures [24][28] have shown great improvements on color image super-resolution. In the future, we aim to develop a very deep network not only for depth image super-resolution, but also for depth-loss repair, to generate better depth image quality without collecting and fusing multiple depth frames.

REFERENCES

[1] W. T. Freeman, T. R. Jones, and E. C. Pasztor, "Example-based super-resolution," IEEE Comput. Graph. Appl., vol. 22, no. 2, Mar./Apr. 2002.
[2] Y. Li, T. Xue, L. Sun, and J. Liu, "Joint example-based depth map super-resolution," in Proc. IEEE Int. Conf. Multimedia Expo (ICME), Jul. 2012.
[3] J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE Trans. Image Process., vol. 19, no. 11, Nov. 2010.
[4] R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in Proc. 7th Int. Conf. Curves Surf., 2010.
[5] M.-C. Yang and Y.-C. F. Wang, "A self-learning approach to single image super-resolution," IEEE Trans. Multimedia, vol. 15, no. 3, Apr. 2013.
[6] C. Dong, C. C. Loy, K. He, and X. Tang, "Learning a deep convolutional network for image super-resolution," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2014.
[7] M. Zontak and M. Irani, "Internal statistics of a single natural image," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2011.
[8] D. Glasner, S. Bagon, and M. Irani, "Super-resolution from a single image," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Sep./Oct. 2009.
[9] J. Sun, J. Zhu, and M. F. Tappen, "Context-constrained hallucination for image super-resolution," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2010.
[10] Y.-W. Tai, W.-S. Tong, and C.-K. Tang, "Perceptually-inspired and edge-directed color image super-resolution," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2006.
[11] S. Izadi et al., "KinectFusion: Real-time 3D reconstruction and interaction using a moving depth camera," in Proc. 24th Annu. ACM Symp. User Interface Softw. Technol., 2011.
[12] K.-H. Lo, Y.-C. Wang, and K.-L. Hua, "Joint trilateral filtering for depth map super-resolution," in Proc. Vis. Commun. Image Process. (VCIP), Nov. 2013.
[13] D. Ferstl, C. Reinbacher, R. Ranftl, M. Ruether, and H. Bischof, "Image guided depth upsampling using anisotropic total generalized variation," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Dec. 2013.
[14] J. Park, H. Kim, Y.-W. Tai, M. S. Brown, and I. Kweon, "High quality depth map upsampling for 3D-TOF cameras," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Nov. 2011.
[15] J. Xie, C.-C. Chou, R. Feris, and M.-T. Sun, "Single depth image super resolution and denoising via coupled dictionary learning with local constraints and shock filtering," in Proc. IEEE Int. Conf. Multimedia Expo (ICME), Jul. 2014.
[16] J. Xie, R. Feris, S.-S. Yu, and M.-T. Sun, "Joint super resolution and denoising from a single depth image," IEEE Trans. Multimedia, vol. 17, no. 9, Sep. 2015.
[17] O. M. Aodha, N. D. F. Campbell, A. Nair, and G. J. Brostow, "Patch based synthesis for single depth image super-resolution," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2012.
[18] M. Hornacek, C. Rhemann, M. Gelautz, and C. Rother, "Depth super resolution by rigid body self-similarity in 3D," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Jun. 2013.

[19] K. He, J. Sun, and X. Tang, "Guided image filtering," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2010.
[20] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2006.
[21] L. Yinan, Y. Lingyu, and S. Gongzhang, "A robust non-iterative method for similarity transform estimation," Machine Vision and Applications, vol. 24, no. 3, 2013.
[22] E. Johns and G.-Z. Yang, "RANSAC with 2D geometric cliques for image retrieval and place recognition," 2015.
[23] Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep networks for image super-resolution with sparse prior," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015.
[24] J. Kim, J. K. Lee, and K. M. Lee, "Accurate image super-resolution using very deep convolutional networks," arXiv preprint, 2015.
[25] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," arXiv preprint, 2014.
[26] A. Handa, T. Whelan, J. B. McDonald, and A. J. Davison, "A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM," in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), 2014.
[27] D. J. Butler, J. Wulff, G. B. Stanley, and M. J. Black, "A naturalistic open source movie for optical flow evaluation," in Proc. Eur. Conf. Comput. Vis. (ECCV), Part IV, LNCS 7577, Springer-Verlag, Oct. 2012.
[28] J. Kim, J. K. Lee, and K. M. Lee, "Deeply-recursive convolutional network for image super-resolution," arXiv preprint, 2015.
