Texture Sensitive Image Inpainting after Object Morphing


Yin Chieh Liu and Yi-Leh Wu
Department of Computer Science and Information Engineering
National Taiwan University of Science and Technology, Taiwan
e-mail: ywu@csie.ntust.edu.tw

Abstract - This paper develops an object morphing algorithm and an image inpainting algorithm. Modern cameras offer many image editing functions. This paper proposes an object morphing method that increases the apparent height of a person in an image. The morphing method is based on minimizing the change in gradients when rows are added to the object, so that texture detail is preserved. To fill the pixels left missing after object morphing, an efficient and accurate inpainting method is presented. It relies on a new patch classification that determines which edge direction passes through a patch, and it recovers the missing pixel according to that direction. A hybrid inpainting algorithm driven by automatic texture complexity detection is also presented. Experiments demonstrate that the proposed texture sensitive inpainting method and the hybrid inpainting method perform better than previous inpainting methods.

Keywords: image inpainting; texture complexity; object morphing

1. Introduction

Image inpainting fills missing pixels using the effective information remaining in the image. It is widely applied to repair medical images, remove scratches from images, change image foregrounds, and so on. Many image inpainting methods have been proposed in recent years. The first automatic image inpainting method for still images was proposed by Bertalmio et al. [1] and is known as the BSCB method, after the initials of the authors. Different kinds of models followed. Guo et al. [2] proposed a structure-based inpainting algorithm built on the Fast Marching Method (FMM) [3]. Since the FMM-based algorithm is iterative, they introduced a weight calculation method to reduce the time complexity. Farid et al. [4] presented an inpainting method using dynamic weighted kernels. This method uses traditional blur kernels of variable sizes and weights; edge pixels in the neighborhood of a missing pixel are weighted more than non-edge pixels to preserve the edges in the missing region. However, the blur kernel is only applicable to restoring small missing regions. Sun et al. [5] introduced an inpainting algorithm based on a multi-scale Markov Random Field (MRF) model. The image to be inpainted is divided into multiple scales, the coarsest scale is inpainted with the MRF model, and the final result is propagated from the coarsest resolution to the finest one with the belief propagation (BP) algorithm [6]. Xu et al. [7] presented an inpainting algorithm that investigates the sparsity of the similarities between an image patch and its neighboring patches; the patch to be inpainted is repaired by a linear combination of candidate patches in the source region, iteratively, until no missing pixels are left. Most of the previous methods are executed iteratively and thus incur substantial computational overhead. To repair the missing region rapidly, Huang et al. [8] proposed an efficient inpainting approach that keeps the structure consistent between the source region and the target region through the priority of the filling order of the target region. The above-mentioned inpainting methods all operate on a single image. Wu et al. [9] proposed using 3D information obtained from a sequence of images for image inpainting.
They introduced homography and image rectification as geometric constraints to reduce the guesswork in image inpainting. After object morphing, some pixels are covered by the new object while others lose their original values, which leaves missing pixels in the image. This paper aims at applying an efficient and effective image inpainting algorithm after object morphing. The proposed inpainting method is based on patch priority and a new patch classification, and the missing pixels are inpainted according to the edge direction decided by the patch classification. Since our inpainting method focuses on edge points, it produces better inpainting results on images with complex backgrounds.

The paper is organized as follows. Section 2 presents the object morphing algorithm. Section 3 details the proposed inpainting method. Section 4 compares the individual inpainting performance. Section 5 presents a hybrid inpainting method. Finally, experiments with the hybrid method are shown in Section 6.

2. Object morphing

We now present the technical details of object morphing. The idea is similar to seam carving: seams are vertical or horizontal chains of pixels that are successively removed from or added to an image to change its width or height. The seam-carving algorithm introduced by Grundmann et al. [10] aims at minimizing the change in gradients when chains of pixels are added. Following this idea, we propose an object morphing method that minimizes the change in gradients to preserve texture detail. Figure 1 shows the flow chart of the morphing procedure.

Objects to be edited are cut out from the original image as the input of the system. We regard the middle row of the object as the beginning of the morphing process and the last row as the end. Next, a Sobel mask is applied to each pixel from the beginning row to the end row to measure the gradients along each chain of pixels. The Sobel operator is widely used in edge detection; it convolves the image with small, separable, integer-valued filters in the horizontal and vertical directions and is therefore computationally inexpensive. After applying the Sobel mask, the row with the minimum sum of gradients in every group of n rows is chosen as the optimal seam, to preserve texture detail. All rows above the optimal seam are then shifted upward, and the empty row is filled with the average color of the optimal seam and its previous row. Here we use a 3 x 3 Sobel mask and set n to 5 according to experimental experience. This yields the morphing result. Aligning the last row of the object to its original position in the original image and pasting the object back, we obtain an image with the new object but with some background information missing. Therefore, we present an image inpainting method in the next section.

Figure 1. Flow chart of object morphing.
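As a concrete illustration of the row-insertion step described above, the following Python sketch picks, in every group of n rows below the middle row, the row with the smallest summed Sobel gradient and inserts the average of that row and its previous row next to it. It assumes a grayscale object image stored as a 2-D NumPy array; the helper names (sobel_gradient_magnitude, heighten_object) are illustrative and not taken from the paper's implementation, and color handling and the exact shift-and-fill bookkeeping are simplified.

import numpy as np

def sobel_gradient_magnitude(img):
    """Per-pixel gradient magnitude using 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            win = pad[di:di + h, dj:dj + w]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    return np.hypot(gx, gy)

def heighten_object(obj, n=5):
    """Grow the object vertically: in every group of n rows below the middle
    row, pick the row with the smallest summed gradient (the optimal seam)
    and insert the average of that row and its previous row next to it."""
    grad = sobel_gradient_magnitude(obj)
    rows = [obj[i] for i in range(obj.shape[0])]
    inserted = 0
    for g0 in range(obj.shape[0] // 2, obj.shape[0], n):
        group = range(g0, min(g0 + n, obj.shape[0]))
        seam = min(group, key=lambda r: grad[r].sum())
        avg = (obj[seam].astype(float) + obj[max(seam - 1, 0)].astype(float)) / 2.0
        rows.insert(seam + inserted, avg.astype(obj.dtype))
        inserted += 1
    return np.stack(rows)

Building the taller output by inserting averaged rows is equivalent to the shift-rows-upward-and-fill description in the text; only the bookkeeping differs.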

3. The proposed image inpainting method

First, we define I to be the original image, which includes a target region to be inpainted. To attain an efficient and accurate image inpainting approach, we adopt the priority function of Huang et al. [8] to keep the structure consistent between the target region and the source region, and we introduce a new patch classification for the inpainting process. The patch classification is designed to determine which edge direction passes through the patch. We define the horizontal variation $Var_h(p)$ and the vertical variation $Var_v(p)$ as in formulas (1) and (2), where $I(s, t)$ denotes the middle point of the patch rather than its starting point. The corresponding diagram of the patch classification is shown in Figure 2: Figure 2(a) illustrates the estimation of $Var_h(p)$ and Figure 2(b) the estimation of $Var_v(p)$.

$$Var_h(p) = \sum_{i=-1}^{1} \sum_{j=-1}^{0} \left| I(i+s, j+t) - I(i+s, j+t+1) \right| \qquad (1)$$

$$Var_v(p) = \sum_{j=-1}^{1} \sum_{i=-1}^{0} \left| I(i+s, j+t) - I(i+s+1, j+t) \right| \qquad (2)$$

Figure 2. Patch classification: (a) estimation of $Var_h(p)$, (b) estimation of $Var_v(p)$.

Under the order of patch priority, we inpaint the missing points according to formula (3), where $\alpha$ is a positive constant whose value is set in the experiments:

$$\text{patch type} = \begin{cases} \text{horizontal line}, & \text{if } Var_h(p) > Var_v(p) + \alpha \\ \text{vertical line}, & \text{if } Var_v(p) > Var_h(p) + \alpha \\ \text{smooth region}, & \text{otherwise} \end{cases} \qquad (3)$$

If the horizontal variation $Var_h(p)$ is larger than the sum of the vertical variation $Var_v(p)$ and $\alpha$, the horizontal line is much more significant than the vertical line in the patch; in this case we fill the missing point $I(s, t)$ with the average of $I(s-1, t)$ and $I(s+1, t)$. If the vertical variation $Var_v(p)$ is larger than the sum of the horizontal variation $Var_h(p)$ and $\alpha$, the vertical line is much more significant than the horizontal line; in this case we fill the missing point $I(s, t)$ with the average of $I(s, t-1)$ and $I(s, t+1)$. Otherwise we regard the patch as a smooth region and fill the missing point with a weighted combination of the pixels in the patch. The weight $w_{ij}$ of each pixel $I(i, j)$ is defined by equation (4), where $I(s, t)$ is the middle point of the patch and $Z$ is the sum of all weights in the patch, used for normalization. As an example, the weights for a 3 x 3 patch are shown in Figure 3.

$$w_{ij} = \frac{1}{Z} \cdot \frac{1}{1 + (i-s)^2 + (j-t)^2} \qquad (4)$$

Figure 3. Weights for a 3 x 3 patch.

The flow chart of the proposed algorithm is shown in Figure 4. Our algorithm considers the different edge directions of the patch to be inpainted and fills the missing pixels according to the patch classification; this is the key difference from the previous method. In the next section, we show the advantages of the proposed method.

Figure 4. Flow chart of object morphing and image inpainting.
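To make the classification and fill rule concrete, here is a minimal Python sketch. It assumes a grayscale image I stored as a NumPy array indexed as I[s, t] with (s, t) the patch center, and uses the value alpha = 25 from the experiments; the function name classify_and_fill is illustrative, and for brevity the sketch computes the variations over the whole patch rather than over the known pixels only.

import numpy as np

ALPHA = 25  # edge-direction threshold alpha used in the experiments

def classify_and_fill(I, s, t, alpha=ALPHA):
    """Classify the 3x3 patch centred at the missing pixel (s, t) and return
    the fill value according to formula (3)."""
    patch = I[s - 1:s + 2, t - 1:t + 2].astype(float)

    # Horizontal / vertical variation: summed absolute differences between
    # horizontally (resp. vertically) adjacent pixels in the patch,
    # cf. formulas (1) and (2).
    var_h = np.abs(patch[:, :-1] - patch[:, 1:]).sum()
    var_v = np.abs(patch[:-1, :] - patch[1:, :]).sum()

    if var_h > var_v + alpha:            # dominant horizontal line
        return (float(I[s - 1, t]) + float(I[s + 1, t])) / 2.0
    if var_v > var_h + alpha:            # dominant vertical line
        return (float(I[s, t - 1]) + float(I[s, t + 1])) / 2.0

    # Smooth region: inverse-distance weighted average, cf. formula (4).
    acc, total = 0.0, 0.0
    for i in range(s - 1, s + 2):
        for j in range(t - 1, t + 2):
            if i == s and j == t:
                continue                 # skip the missing pixel itself
            w = 1.0 / (1.0 + (i - s) ** 2 + (j - t) ** 2)
            acc += w * float(I[i, j])
            total += w
    return acc / total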

4. Individual inpainting performance comparison

In this section, we apply the proposed method to a range of full-color photos with smooth and complex backgrounds and compare it with two previous inpainting algorithms on these test photos. According to experimental experience, the patch size is set to 3 x 3 and $\alpha$, which determines the edge direction of a patch, is set to 25 for the following test photos. The implementation environment was an Intel Pentium 4 CPU at 3.40 GHz with 1.24 GB of RAM. In our implementation, the objects to be edited are cut out with the GNU Image Manipulation Program (GIMP).

Figure 5 presents an example of object morphing, including the original photo, the object cut out with GIMP, and the morphing result; the edited object clearly keeps the texture of the jeans in Figure 5(c). Figure 6 presents seven 520 x 390 test photos for object morphing and background pixel inpainting. Figure 7(a-g) shows the objects to be edited in the seven test photos, and Figure 7(h-n) shows the morphing results.

Figure 5. Example of object morphing: (a) original photo, (b) object cut out with GIMP, (c) morphing result.
Figure 6. Test photos (a-g).
Figure 7. Objects to be edited (a-g) and their morphing results (h-n).
Figure 8. Backgrounds of the test photos (a-g).

We now compare the proposed inpainting method with Huang et al.'s [8] and Xu et al.'s exemplar-based [7] inpainting methods. For Huang et al.'s algorithm [8], the patch size is set to 3 x 3. For Xu et al.'s algorithm [7], the patch size and the neighborhood for computing patch similarity are set separately for each test photo to obtain the highest image quality. The peak signal-to-noise ratio (PSNR) [11] between the inpainted images and the original images is measured for comparison, since it is the widely accepted and commonly used standard for quantitatively measuring image quality. To measure this PSNR, we also require the backgrounds of the seven test photos, shown in Figure 8, which differ from the original photos only in that the foreground is removed. In this way, we are able to measure the PSNR on the inpainted pixels.
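For reference, PSNR restricted to the inpainted pixels can be computed as in the following short sketch. This is the standard PSNR definition for 8-bit images; the mask-based restriction and the name psnr_on_mask are our illustrative assumptions, not code from the paper.

import numpy as np

def psnr_on_mask(original, inpainted, mask, peak=255.0):
    """PSNR (dB) computed only over the pixels where mask is True."""
    diff = original.astype(float)[mask] - inpainted.astype(float)[mask]
    mse = float(np.mean(diff ** 2))
    return float("inf") if mse == 0.0 else 10.0 * np.log10(peak ** 2 / mse)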

The pixels to be inpainted are colored green in Figure 9(a), and Figure 9(b-d) shows the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Since we consider the edge direction of each patch to be inpainted, our method yields better visual quality at edge points. Conversely, because the backgrounds in Figure 6(a-b) are much smoother than in the other test photos, our inpainting algorithm has less of an advantage on these photos. As observed in Table 1, the more complex the background is, the higher the PSNR of our inpainting results compared with the other two methods.

Figure 9. Individual inpainting performance comparison: (a) pixels to be inpainted (green), (b) Huang et al.'s method, (c) Xu et al.'s method, (d) our method.

Figure 10 shows magnified views of the inpainting results of the seven test photos, which allow us to observe the texture detail after inpainting. The background photo is shown in Figure 10(a), and Figure 10(b-d) shows the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Figure 10(d) clearly keeps better texture than Figure 10(b-c), since we pay closer attention to the edge direction; we therefore preserve more precise details than the two compared methods on complex backgrounds.

Figure 10. Magnified views of the inpainting results: (a) background photo, (b) Huang et al.'s method, (c) Xu et al.'s method, (d) our method.

In addition to visual quality, we are also concerned with computational overhead. Here, we consider the execution time of the morphing process and of the missing-pixel inpainting process. As shown in Table 2, our proposed method is approximately equal to Huang et al.'s method [8] and is far faster than Xu et al.'s method [7] for the different numbers of missing pixels in the seven test photos.

Besides object morphing, the proposed inpainting method can also be applied to other image processing tasks. Figure 11 shows an application example that replaces the foreground in Figure 6(g) with another object. After the replacement, the missing pixels are colored green in Figure 11(a), and Figure 11(b-d) shows the inpainting results of Huang et al.'s, Xu et al.'s, and our method. Table 3 shows that our inpainting approach performs much better than the previous methods.

Figure 11. Object replacement: (a) missing pixels (green), (b) Huang et al.'s method, (c) Xu et al.'s method, (d) our method.

Table 1. PSNR (dB).

                 Huang et al. [8]   Xu et al. [7]   Our method
  Figure 6(a)         18.57             15.56          17.94
  Figure 6(b)         12.06             11.80          11.72
  Figure 6(c)         21.47             19.99          22.68
  Figure 6(d)         22.40             19.66          23.32
  Figure 6(e)         22.68             19.32          23.40
  Figure 6(f)         15.60             14.34          16.17
  Figure 6(g)         16.38             12.83          18.79

Table 2. Execution time (s).

                                      Huang et al. [8]   Xu et al. [7]   Our method
  Figure 6(a): 2564 missing pixels          0.04              2.81           0.04
  Figure 6(b): 2194 missing pixels          0.04              1.44           0.04
  Figure 6(c): 1293 missing pixels          0.03              8.89           0.03
  Figure 6(d): 4281 missing pixels          0.05             51.47           0.05
  Figure 6(e): 4349 missing pixels          0.04              9.58           0.05
  Figure 6(f): 4860 missing pixels          0.07              5.76           0.07
  Figure 6(g): 3676 missing pixels          0.08              6.85           0.08

Table 3. Inpainting comparison for the object replacement application.

                      PSNR (dB)   Execution time (s)
  Huang et al. [8]      16.60            0.07
  Xu et al. [7]         14.35           10.71
  Our method            17.42            0.08

5. Hybrid inpainting method

As shown in Table 1, our method produces the highest PSNR for images with complex backgrounds, while Huang et al.'s method produces the highest PSNR for images with smooth backgrounds. We therefore present a hybrid inpainting method that uses a texture complexity detection step to select the more appropriate inpainting method.

For the texture complexity detection, we place a 5 x 5 mask on the source region around the missing pixels of the image to be inpainted and estimate the variance of the pixels inside the mask. The variance is calculated from the intensities of the pixels inside the mask around the pixel, as in formula (5), where $N$ is the number of pixels inside the area, $x_i$ is the intensity of pixel $i$, and $\bar{x}$ is the average intensity in the mask:

$$\text{variance} = \frac{1}{N} \sum_{i=1}^{N} (x_i - \bar{x})^2 \qquad (5)$$

A threshold of 150, chosen according to experiments, is then used to sieve out the edge points, i.e., the pixels whose variance is larger than the threshold. Table 4 shows that, except for Figure 6(c), whose missing pixels are too few, the proposed variance-based texture complexity detection can quite precisely indicate a complex background by a percentage of edge points above 20% and a smooth background by a percentage of edge points below 20%. The flow chart of the proposed hybrid inpainting method is shown in Figure 12.

Table 4. Texture complexity detection by variance.

               Percentage of edge points
  Figure 6(a)         14.55%
  Figure 6(b)         19.07%
  Figure 6(c)          9.40%
  Figure 6(d)         28.23%
  Figure 6(e)         27.67%
  Figure 6(f)         24.49%
  Figure 6(g)         24.61%

Figure 12. Flow chart of the hybrid inpainting method.
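A minimal sketch of this detection step is given below, assuming a grayscale NumPy image I and a boolean mask missing of pixels to be inpainted. The helper names (local_variance, background_is_complex) and the choice of examining the 4-neighbours of each missing pixel are illustrative assumptions; the variance threshold of 150 and the 20% edge-point threshold follow the paper.

import numpy as np

VAR_THRESHOLD = 150.0        # variance threshold for an "edge point"
EDGE_RATIO_THRESHOLD = 0.20  # above 20% edge points => complex background

def local_variance(I, y, x, half=2):
    """Variance of the intensities in the 5x5 window around (y, x),
    cf. formula (5); the window is clipped at the image border."""
    win = I[max(y - half, 0):y + half + 1,
            max(x - half, 0):x + half + 1].astype(float)
    return float(win.var())

def background_is_complex(I, missing):
    """Return True when more than 20% of the source pixels bordering the
    missing region are edge points."""
    edge_points, samples = 0, 0
    ys, xs = np.where(missing)
    for y, x in zip(ys, xs):
        # examine the 4-neighbours that lie in the source region
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < I.shape[0] and 0 <= nx < I.shape[1] and not missing[ny, nx]:
                samples += 1
                if local_variance(I, ny, nx) > VAR_THRESHOLD:
                    edge_points += 1
    return samples > 0 and edge_points / samples > EDGE_RATIO_THRESHOLD

The hybrid method would then dispatch to Huang et al.'s filling when background_is_complex returns False and to the texture-sensitive filling of Section 3 otherwise.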

6. Experiments

In this section, we compare the proposed hybrid inpainting method with the previous image inpainting methods. Figure 13 shows test photos with different texture complexities around the missing area; the texture complexities are listed in Table 5, which shows that the pixels around the missing area in Figure 13(b) belong to a smooth background and those in Figure 13(c) to a complex background. Figure 13(d) therefore contains both background conditions. According to the texture complexity, the proposed hybrid inpainting algorithm chooses the better method between Huang et al.'s and ours: it inpaints the smooth areas with Huang et al.'s method and the complex areas with our method.

Figure 13. Test photos with different texture complexities.

Table 5. Texture complexity detection by variance.

                Percentage of edge points
  Figure 13(b)          9.79%
  Figure 13(c)         21.17%

The comparison of inpainting results is shown in Figure 14. Figure 14(a) shows the missing pixels after object morphing, colored green, and Figure 14(b-d) shows the inpainting results of Huang et al.'s method, our method, and the proposed hybrid method. Table 6 shows that the hybrid inpainting method indeed produces the best inpainting result for both high and low texture complexities. For Figure 13(b), the hybrid method chooses Huang et al.'s method to fill the missing area, since the percentage of edge points is below 20%; hence its result equals that of Huang et al.'s method. For Figure 13(c), the hybrid method chooses our method, since the percentage of edge points is above 20%; hence its result equals that of our method. For Figure 13(d), the hybrid method fills the left missing area with Huang et al.'s method and the right missing area with our method. Because the hybrid method selects the inpainting algorithm separately for each missing area according to its background complexity, it achieves the highest PSNR.

Figure 14. Comparison of inpainting results: (a) missing pixels (green), (b) Huang et al.'s method, (c) our method, (d) hybrid method.

Table 6. Comparison of hybrid inpainting results (PSNR, dB).

                 Huang et al. [8]   Our method   Hybrid method
  Figure 13(b)        14.64            14.21          14.64
  Figure 13(c)        17.17            17.56          17.56
  Figure 13(d)        15.41            15.16          15.45

7. Conclusions

In this study, texture sensitive image inpainting after object morphing is proposed. The object is edited by adding rows to it; to preserve texture detail, the approach minimizes the change in gradients caused by the added rows, so that the objects visually keep their texture detail after morphing. The image inpainting is based on patch priority and a new patch classification, and the damaged patches are recovered according to the edge direction. A hybrid inpainting algorithm driven by automatic texture complexity detection is also presented. Experiments demonstrate that the proposed texture sensitive inpainting method and the hybrid inpainting method not only produce better repair results but also have an advantage in speed.

References

[1] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proceedings of SIGGRAPH 2000, New Orleans, LA, 2000.
[2] B. Guo, C. B. Xian, Q. C. Sun, L. Liu, and F. Su, "A fast image inpainting algorithm based on structure," 2009 Fourth International Conference on Innovative Computing, Information and Control, pp. 310-314, 2009.
[3] A. Telea, "An image inpainting technique based on the fast marching method," Journal of Graphics Tools, vol. 9, no. 1, pp. 25-36, 2004.
[4] M. S. Farid and H. Khan, "Image inpainting using dynamic weighted kernels," 2010 3rd IEEE International Conference on Computer Science and Information Technology, vol. 8, pp. 252-255, 2010.
[5] J. X. Sun, D. F. Hao, L. F. Hao, H. M. Yang, and D. B. Gu, "A digital image inpainting method based on multiscale Markov random field," 2010 IEEE International Conference on Information and Automation, pp. 1118-1122, 2010.
[6] P. F. Felzenszwalb and D. P. Huttenlocher, "Efficient belief propagation for early vision," 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I-261-I-268, 2004.
[7] Z. B. Xu and J. Sun, "Image inpainting by patch propagation using patch sparsity," IEEE Transactions on Image Processing, vol. 19, no. 5, pp. 1153-1165, 2010.
[8] H. Y. Huang and C. N. Hsiao, "An image inpainting technique based on illumination variation and structure consistency," 2010 3rd International Conference on Information Sciences and Interaction Sciences, pp. 415, 2010.
[9] Y. L. Wu, C. Y. Tang, M. K. Hor, and C. T. Liu, "Automatic image interpolation using homography," EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 307546, 12 pages, 2010.
[10] M. Grundmann, V. Kwatra, M. Han, and I. Essa, "Discontinuous seam-carving for video retargeting," 2010 IEEE Conference on Computer Vision and Pattern Recognition, pp. 569-576, 2010.
[11] S. K. Mitra and G. L. Sicuranza, Nonlinear Image Processing, San Diego, CA: Academic Press, 2001.