Intent-aware image cloning


Vis Comput, DOI /s, ORIGINAL ARTICLE

Xiaohui Bie · Wencheng Wang · Hanqiu Sun · Haoda Huang · Minying Zhang

© Springer-Verlag Berlin Heidelberg 2013

Abstract Currently, gradient domain methods are popular for seamlessly cloning a source image patch into a target image. However, structure conflicts between the source image patch and the target image may generate artifacts that prevent general use. In this paper, we tackle this challenge by incorporating the user's intent in outlining the source patch, where the boundary drawn generally has a different appearance from the objects of interest. We first show that artifacts arise in the over-included region, the region outside the objects of interest in the source patch. We then use this difference from the boundary to approximately distinguish the objects from the over-included region, and design a new algorithm that lets the target image adaptively take effect in blending. Structure conflicts can thus be efficiently suppressed, removing the artifacts around the objects of interest in the composite result. Moreover, we develop an interpolation measure to composite the final image rather than solving a Poisson equation, and speed up the interpolation by treating pixels in clusters and using hierarchical sampling techniques. Our method is simple to use for instant, high-quality image cloning: users only need to outline a region containing the objects of interest. Our experimental results demonstrate the effectiveness of our cloning method.

X. Bie · W. Wang · M. Zhang
State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing, China
xiaohui@ios.ac.cn, whn@ios.ac.cn

H. Sun
Department of Computer Science and Engineering, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China

H. Huang
Google Inc., Mountain View, CA, 94043, USA

X. Bie · M. Zhang
University of Chinese Academy of Sciences, Beijing, China
Keywords Intent representation · Image cloning · Image composition

1 Introduction

Image composition has been widely applied in image generation and editing; it creates new images by pasting an object or region from a source image into a target image. For such needs, gradient domain methods [7, 10, 15] are well known for their user-friendly interaction: the user roughly sketches a boundary to select a region of interest in the source image. These methods work by solving a Poisson equation with Dirichlet boundary conditions. In effect, they construct a harmonic interpolant (a smooth membrane) that diffuses the intensity discrepancies between the source and target images along the patch boundary across the entire cloned region. The intensity discrepancies on the boundary can thus be reduced considerably, achieving a seamless blending of the source image patch and the target image. However, artifacts may result when the source patch has structures different from those of the target image. This is because a smooth membrane cannot change the structure of the source image patch, so the structure differences remain noticeable in the composite, showing up as artifacts. Though techniques have been proposed to optimize the conditions on the boundary for improving Poisson image cloning, such as the drag-and-drop pasting system [10] and mean-value coordinates [7], they cannot efficiently remove the

structure conflicts inside the patch, due to the smooth membrane produced for image composition. Our work aims at solving the structure conflicts in image cloning that could not be efficiently handled by existing techniques. We observe that structure conflicts in the source patch occur in the region outside the objects of interest, which we call the over-included region. In image composition, we expect the objects of interest to be pasted into the target image without the other contents. Existing gradient domain techniques let the source patch preserve its structures in the composite, by forming a smooth membrane that has the pixels inside the patch alter their intensities to similar extents. When the structure of the over-included region differs from that of the target image, the difference can be clearly noticeable and cause visual artifacts in the final image. Our work is motivated to eliminate such artifacts of the over-included region in cloning images, so that structure conflicts do not appear around the objects of interest in the final composite. Unlike matting methods, which produce a matte for image composition, we propose a novel cloning method that produces natural, intent-aware cloning effects in composited images. It blends the source patch and the target image seamlessly, with the colors of the source patch corrected by the target image. At the same time, we save the laborious labeling work that is required for obtaining a matte. All the pixels of the target image covered by the source patch are allowed to take effect in image composition, unlike existing gradient domain methods, which consider only the pixels of the target image along the boundary of the source patch.
Inspired by [7], which interpolates the value at each interior pixel for fast image composition, we develop an interpolant for fast image cloning by evaluating the interpolant over clusters of pixels and using hierarchical sampling techniques for further acceleration. In summary, our specific contributions are:
– An efficient method for cloning images with different structures, which extends current methods to more general cases in selecting the source/target images for seamless cloning, making image composition more flexible and widely applicable.
– An easy-to-use approach for image composition, where the user only needs to simply outline the region of interest in the source image, without being aware of any structure conflicts between the source and target images. The approach is simple to use, fast, and also easy to develop for parallel computing.
For the rest of the paper, we briefly review the related work on matting, gradient domain, and image composition techniques in Sect. 2. The essential ideas of our intent-aware image cloning are presented in Sect. 3, including distinguishing pixels and blending images; efficient processing by pixel clustering and hierarchical sampling is given in Sect. 4. Image cloning examples and experimental performance are described in Sect. 5, together with a discussion of limitations. Finally, a summary is given in Sect. 6.

2 Related work

Matting methods are commonly used for image composition. They work by linearly interpolating the source and target images using weights specified by the alpha matte, and their effectiveness depends on the accuracy of the matte. Thus, some methods require the user to provide additional constraints using a trimap [8, 17] or a set of brush strokes [12, 13] to carefully obtain the matte. Others [14, 16] employ a controlled environment or a special device to reduce the inherent ambiguity of the matting problem. To leverage the knowledge of the target background to create a more successful matte, Wang et al.
[20] proposed to combine matting and compositing into a single optimization process. However, matting methods cannot have the colors of the objects of interest corrected by the target image, which may lead to an unnatural look, and it is always very laborious to obtain a quality matte for complex objects, which prevents their interactive use. Gradient domain compositing methods are well known for their ease of use and effectiveness for seamless composition. By solving the Poisson equation with the user-specified boundary condition, Poisson image cloning [15] can achieve seamless composition without a visible seam along the boundary. To make Poisson image cloning more practical, many improvements have been made. In one direction, many acceleration techniques [1, 7, 9, 11, 19] have been proposed, and some [7, 21] have even been extended to seamless video composition. In another direction, many approaches improve the composition quality, such as optimizing the boundary conditions in the drag-and-drop pasting system [10], or incorporating alpha matting to obtain hybrid solutions for the texture smudging and color dis-matching problems [3, 5, 6, 22]. Our method also solves the texture smudging and color dis-matching problems, but compared with the hybrid solutions [3, 5, 6, 22], it needs no user interaction to approximate the objects of interest, and can easily treat multiple objects of interest and complex structures in a source image patch, and so is more efficient for practical use. Recently, Sunkavalli et al. [18] proposed to harmonize the visual appearance of the source and target images before composition, to treat the inconsistency inside the composition region for a quality composite, where harmonization is achieved by transferring the appearance of one image to another. Since harmonizing visual appearance cannot effectively deal with structure conflicts between the source and

the target images, such a treatment may still produce evident artifacts around the objects of interest in the output image. With regard to this, Darabi et al. [4] suggested melding the source and target images using a synthesized transition region between the two, so that inconsistent color, texture, and structural properties all change gradually from one source to the other for a smooth blending. However, the generated contents in the transition region may differ from the contents in the source and target images, causing artifacts that impair the composite, though in some cases such transitions are valuable, as when treating the image of a squirrel in a tree hole in [4]. Besides, these methods are quite time consuming, preventing their practical use.

3 Intent-aware image cloning

We assume that the source patch Ω with boundary ∂Ω, in which there are some objects of interest Ω_obj, is selected to paste into the target image, as illustrated in Fig. 1. As stated earlier, the artifacts resulting from structure conflicts always occur in the over-included region. If the objects of interest could be segmented precisely and pasted onto the target image, no structure conflicts would appear to impair the final composite. However, precise segmentation of objects is often expensive and sometimes even impossible. Thus, a simple outline of the source image patch is certain to contain an over-included region. To efficiently remove the artifacts in the over-included region, we propose a novel gradient domain method that can suppress structure conflicts there. It is a probability-based method that approximates the over-included region of the source image patch to guide image composition, and uses an interpolation computation to blend the source patch and the target image.
In our approach, the surroundings of the objects of interest have their appearances determined mainly by the target image, so structure conflicts are suppressed to achieve natural, high-quality final composites.

Fig. 1 For a source image patch, Ω\Ω_obj is the over-included region that may have structure conflicts impairing the final composite quality

3.1 Distinguishing pixels

When users outline a source patch for image composition, the objects of interest are always inside the patch, and the boundary usually has a quite different appearance from the objects of interest. Based on this observation, we can estimate the probability of a pixel in the source image patch being in the over-included region. To estimate this probability for a pixel i, we first measure the affinity between pixel i and each pixel j on the boundary, using the intensity difference and the spatial distance between them, and then obtain the probability by summing all the affinity values for this pixel. If the pixel is farther from the boundary and has a larger appearance difference from the boundary, it is more likely to belong to the objects of interest; otherwise, it is more likely in the over-included region. As suggested in [2], the affinity z(i, j) between two pixels i, j is computed as

z(i, j) = exp(−‖f_i − f_j‖²)   (1)

where f_i and f_j are the feature vectors at pixels i and j, respectively, comprising their positions p_i and p_j (e.g., in x and y coordinates) and their appearances c_i and c_j (e.g., color in the Lab space) weighted by parameters σ_p and σ_c, expressed as f_i = (p_i/σ_p, c_i/σ_c) and f_j = (p_j/σ_p, c_j/σ_c). The parameters σ_p and σ_c adjust the influence of the position difference and the appearance difference in distinguishing pixels, taking values between 0.0 and 1.0. In the tests in this paper, we generally use σ_p = 1.0 and σ_c = 0.2, which gave good experimental results.
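The affinity of Eq. (1) is straightforward to sketch. A minimal illustration in Python (NumPy assumed; the function and argument names are our own, not from the paper):

```python
import numpy as np

def affinity(p_i, c_i, p_j, c_j, sigma_p=1.0, sigma_c=0.2):
    """Affinity z(i, j) of Eq. (1): each pixel's feature vector stacks its
    position (scaled by sigma_p) and its Lab color (scaled by sigma_c)."""
    f_i = np.concatenate([np.asarray(p_i, float) / sigma_p,
                          np.asarray(c_i, float) / sigma_c])
    f_j = np.concatenate([np.asarray(p_j, float) / sigma_p,
                          np.asarray(c_j, float) / sigma_c])
    return float(np.exp(-np.sum((f_i - f_j) ** 2)))
```

Identical pixels give z = 1; pixels far apart in position or appearance give an affinity near 0, which is why z(i, j) summed over the boundary separates the over-included region from the objects of interest.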
Finally, the probability S(i) for pixel i is computed as

S(i) = Σ_{j∈∂Ω} γ(i, j) z(i, j)   (2)

where γ(i, j) is a spatial weighting, as suggested in [7], to enhance the effect of distances between pixels. It is computed as the mean-value coordinate of pixel i with respect to boundary pixel j, as follows:

γ(i, j) = w_j / Σ_{j∈∂Ω} w_j   (3)

where

w_j = (tan(α_{j−1}/2) + tan(α_j/2)) / ‖p_j − p_i‖   (4)

and α_j is the angle ∠p_j p_i p_{j+1}, as illustrated in Fig. 2. If the probability value at a pixel is smaller, the pixel is more likely to belong to an object of interest; otherwise, it is more likely in the over-included region. In testing the images in our experiments, we found this measure very effective for approximating the over-included region in the source patch. Figure 3 illustrates three over-included region examples, which lay a solid basis for further developing our intent-aware image cloning method.
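Equations (2)-(4) above can be sketched for a closed polygonal boundary. This is a hedged illustration, not the paper's implementation: the function names and the array-based boundary representation (vertices in order, indices taken cyclically) are our own assumptions.

```python
import numpy as np

def mean_value_weights(p_i, boundary):
    """Normalized mean-value coordinates gamma(i, j) of Eqs. (3)-(4) for an
    interior point p_i against a closed polygonal boundary (N x 2 array)."""
    d = boundary - np.asarray(p_i, float)   # vectors p_j - p_i
    r = np.linalg.norm(d, axis=1)           # distances ||p_j - p_i||
    d_next = np.roll(d, -1, axis=0)         # vectors p_{j+1} - p_i
    # alpha_j: angle at p_i between p_j and p_{j+1}
    cos_a = np.einsum('ij,ij->i', d, d_next) / (r * np.roll(r, -1))
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))
    w = (np.tan(np.roll(alpha, 1) / 2.0) + np.tan(alpha / 2.0)) / r
    return w / w.sum()

def probability(p_i, boundary, z_i):
    """S(i) of Eq. (2): mean-value-weighted sum of the affinities z(i, j)."""
    return float(np.sum(mean_value_weights(p_i, boundary) * z_i))
```

For a point at the center of a square boundary, symmetry makes all four weights equal, so the weights behave as a proper partition of unity over the boundary samples.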

Fig. 2 Angle definitions for mean-value coordinates

Fig. 3 Separating the objects of interest from the over-included region: the top row shows the source image patches, and the bottom row shows the separation results

3.2 Blending images

To blend the source image patch and the target image, Poisson image editing [15] interpolates the intensity function g of the source image and the intensity function f* of the target image by taking into account the boundary condition, under the guidance of the gradient field ∇g of the source image patch. It computes a function f by solving the Poisson equation:

Δf = div ∇g   with   f|_∂Ω = f*|_∂Ω   (5)

This is equivalent to solving the Laplace equation:

Δf̃ = 0   with   f̃|_∂Ω = (f* − g)|_∂Ω   (6)

and the final outcome of cloning can then simply be obtained as

f = g + f̃   (7)

This means that a smooth membrane f̃ is constructed to diffuse the intensity difference f* − g between the target and source images on the boundary of the source image patch across the entire region Ω. As introduced earlier, this formulation may introduce structure conflicts in the over-included region. To suppress structure conflicts in that region, we propose an intent-aware interpolation membrane r, instead of f̃, that treats the pixels in the over-included region and those of the objects of interest differently. Consider a pixel i ∈ Ω with boundary ∂Ω. The value of our membrane at this pixel, r(i), is computed as

r(i) = (1/N) Σ_{j∈∂Ω} [ z(i, j)(f* − g)(i) + λ(i, j)(f* − g)(j) ]   (8)

where N is the number of boundary pixels, and λ(i, j) = exp(−s·z(i, j)) controls how the intensity difference on the boundary between the source image patch and the target image takes effect in cloning; its coefficient s adjusts the controlling strength. In most cases, s is set to 2.0 to allow the objects of interest to attain intensity diffusion similar to that of Poisson image editing. However, such intensity diffusion on the objects is not desirable in all cases.
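The per-pixel membrane of Eq. (8) can be sketched as follows; a minimal illustration, where the array names and the default s = 2.0 follow the description above but the function itself is our own assumption, not the paper's code:

```python
import numpy as np

def membrane_value(z, diff_i, diff_boundary, s=2.0):
    """r(i) of Eq. (8) for one interior pixel i.
    z             : affinities z(i, j) to the N boundary pixels
    diff_i        : (f* - g)(i), target-minus-source difference at i
    diff_boundary : (f* - g)(j) at each boundary pixel j
    The final composite is then f = g + r, per Eq. (9)."""
    lam = np.exp(-s * z)                     # lambda(i, j) = exp(-s z(i, j))
    return float(np.mean(z * diff_i + lam * diff_boundary))
```

The two regimes discussed in the text fall out directly: for an object pixel (z ≈ 0), λ ≈ 1 and r(i) reduces to the mean boundary difference, i.e., Poisson-like diffusion; for an over-included pixel (z ≈ 1) with a large s, r(i) ≈ (f* − g)(i), so the target image takes over.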
With regard to this, we permit the user to adjust the value of s to achieve satisfactory composites; more discussion is given with the experimental results. With Eq. (8), we do not need to explicitly extract the over-included region, but instead construct a continuous membrane that adaptively treats the pixels in the over-included region and those of the objects of interest. Here, the probability for distinguishing pixel i is implicitly considered for cloning through the affinity between the pixel and each pixel j on the boundary, and such a uniform treatment is convenient for computing the membrane. When pixel i is inside the over-included region, its z(i, j) values are likely to be high, so the appearance of pixel i in the target image, represented as (f* − g)(i), takes effect, and λ(i, j) is very small, reducing or almost removing the effect of the intensity difference on the boundary, (f* − g)(j), in computing the value at pixel i in the composite. On the contrary, when pixel i belongs to an object of interest, its z(i, j) values are very low and λ(i, j) is very high, so the appearance of pixel i in the target image takes very little effect in image composition, and the appearance difference on the boundary plays an important role, letting the objects of interest retain their structures from the source image. Using our membrane r computed in Eq. (8), the final result is computed by

f = g + r   (9)

With Eq. (9), it is clear that the target image can imprint its structures on the region covered by the over-included region of the source image patch, while the objects of interest in the source patch are still pasted into the target image seamlessly.

4 Efficient processing

Using Eqs. (8) and (9) for image cloning, the computational complexity is O(nm), where n is the number of pixels in the cloned region and m is the number of boundary pixels. The process is rather time-consuming. To speed up the

processing, we present an efficient approach that groups pixels into clusters and computes the affinity values of the interior pixels with hierarchical sampling.

4.1 Grouping pixels in clusters

In the source image patch, pixels with similar appearances will have their appearances changed similarly in the composite. Thus, we can group the pixels into clusters to speed up the cloning composition. In our treatment, k-means clustering is applied in the 5-dimensional affinity space, where each pixel i is classified according to its 5-dimensional feature vector f_i, and ANN searching is used for acceleration. We then use the clusters to quickly compute the membrane for image cloning. In computing the membrane, we first approximate the membrane values of the clusters by their center pixels, and then propagate these values to the individual pixels, according to the affinities between each pixel and the cluster centers and the probability for distinguishing the pixel, to obtain the final membrane. The membrane value for a cluster is approximated by modifying Eq. (8) as

R(k) = (1/N) Σ_{j∈∂Ω} [ z(k, j)(f* − g)(k) + λ(k, j)(f* − g)(j) ]   (10)

where k is the center pixel of the cluster. In computing the membrane value at a pixel, we select several clusters whose center pixels are most similar to the pixel for the approximation. As clustering is a probability-based computation, using several clusters with very similar appearances helps reduce the disturbance from clusters with very different appearances. In general, the three clusters with the most similar appearances are sufficient to get very good composites, according to our tests. With the selected clusters, we first approximate the probability for distinguishing pixel i according to the affinity values between the cluster centers and the pixels on the boundary.
It is computed in the following two equations:

S(k) = Σ_{j∈∂Ω} γ(k, j) z(k, j)   (11)

and

A(i) = Σ_{k∈C} z(i, k) S(k) / Σ_{k∈C} z(i, k)   (12)

where A(i) is the approximated probability for pixel i, and C is the set of selected clusters. Afterwards, with the approximated probability, we first compute how the membrane values of the selected clusters are propagated to pixel i in Eq. (13), and then obtain the membrane value at pixel i in Eq. (14), as expressed below:

r̃(i) = Σ_{k∈C} z(i, k) R(k) / Σ_{k∈C} z(i, k)   (13)

r(i) = A(i)(f* − g)(i) + exp(−s·A(i)) r̃(i)   (14)

Finally, the cloning composite is obtained by

f = g + r   (15)

Fig. 4 Hierarchical sampling: the nearer a portion of the boundary is to pixel A, the more pixels are selected from that portion for distinguishing pixel A

4.2 Hierarchical sampling

Using Eq. (2) to compute the probability for distinguishing a pixel, all the pixels on the boundary must be examined, which is very expensive. Motivated by the work [7], we use hierarchical sampling to select some pixels on the boundary to approximate the probability. The number of selected pixels on a portion of the boundary is inversely proportional to the distance from that portion to the pixel under investigation. With hierarchical sampling, considerable acceleration can be obtained, as discussed in [7]. An additional benefit is that small discontinuous regions intersecting the boundary can be excluded from the objects of interest, which further helps the user to simply outline the source image patch for image cloning. As illustrated in Fig. 4, the pixels in the green region can easily be determined to be in the over-included region. In Fig. 5, an example shows that small regions intersecting the boundary can be efficiently excluded from the composite using hierarchical sampling, whereas uniform sampling would result in artifacts.
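The cluster-to-pixel propagation of Eqs. (12)-(14) above can be sketched compactly. A hedged illustration under our own naming assumptions (the paper does not publish code):

```python
import numpy as np

def propagate_from_clusters(z_ik, S_k, R_k, diff_i, s=2.0):
    """Cluster-to-pixel propagation of Eqs. (12)-(14).
    z_ik   : affinities z(i, k) to the selected cluster centers (set C)
    S_k    : cluster probabilities S(k) from Eq. (11)
    R_k    : cluster membrane values R(k) from Eq. (10)
    diff_i : (f* - g)(i) at pixel i
    Returns the membrane value r(i); the composite is f = g + r (Eq. (15))."""
    w = z_ik / np.sum(z_ik)                  # normalized affinity weights
    A_i = float(np.sum(w * S_k))             # Eq. (12): approximated probability
    r_tilde = float(np.sum(w * R_k))         # Eq. (13): propagated membrane
    return A_i * diff_i + np.exp(-s * A_i) * r_tilde   # Eq. (14)
```

Note the design choice this encodes: only the few (typically three) most similar clusters enter C, so each pixel costs O(|C|) instead of O(m) boundary evaluations, which is where the speedup over Eq. (8) comes from.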
5 Comparison and discussion

We have applied our method to a variety of source and target images, and compared the results with Poisson image editing [15], alpha matting [12], drag-and-drop pasting [10], and image melding [4].

Fig. 5 Hierarchical sampling is more helpful than uniform sampling for detecting the over-included region. (a) The source and target images; (b) by Poisson cloning; (c) with uniform sampling, note the artifacts above the feet in the cloning region; (d) with hierarchical sampling, such artifacts are well removed in the composite

Fig. 6 When the source image patches are pasted onto the target image (a), Poisson image editing causes blurring artifacts around the objects of interest in the composites (b), matting compositing brings the colors of the source image evidently into the composites (c), but our results are of high quality without these artifacts (d)

Compared with Poisson image cloning. Our method can efficiently remove the artifacts resulting from structure conflicts. As shown in Figs. 6 and 7, Poisson image cloning preserves the structures of the source image patch, so that evident artifacts appear around the objects of interest in the final composite, while our method produces better results with a natural look. Besides, our method allows the user to freely adjust the tone of the objects of interest until a satisfactory result is attained. As shown in Fig. 9, the color bleeding effect makes the results of Poisson image cloning look unnatural, while with our method the user can progressively optimize the result by modifying the parameter s (Fig. 9(d-f)).

Compared with alpha matting. Our method has two main advantages. First, alpha matting relies on the user's careful initialization work to extract an accurate alpha matte, which requires experience and usually involves intensive interaction. Sometimes it is difficult or even impossible to attain an accurate alpha matte, as shown in the warship and fireworks examples in Fig. 6, where obvious artifacts resulting from unclean alpha mattes make the composites far from realistic (Fig. 6(c)).
With our method, however, the interaction is much more intuitive and simple, and our results are much better. Second, alpha blending directly copies the color of the foreground objects from the source image, so that tone mismatch between the objects of interest and the target scene may appear, impairing the realism of the composite, as shown in the footprints example in Fig. 7(d). In our result, by contrast, the color of the source patch is corrected by the target image, so our results look much more natural, as shown in Fig. 7(e).

Fig. 7 When the source image patches are pasted onto the target image (a), Poisson image editing generates unnatural structure properties around the objects of interest (b), drag-and-drop pasting cannot paste multiple footprints simultaneously into the desert and exclude the over-included region by a boundary optimization method (c), matting compositing cannot change the colors of the objects of interest for a natural look (d), but our resulting images are very good without these shortcomings (e)

Compared with drag-and-drop pasting. Our method outperforms it in two aspects. First, the drag-and-drop pasting system can avoid salient structure conflicts by optimizing the boundary; however, it is not suitable for treating multiple objects, as shown in the footprints example in Fig. 7(c), and when applied to a source image patch with complex structures, it cannot optimize the boundary efficiently and will lead to artifacts, as shown in the insect example in Fig. 7(c). Second, the drag-and-drop pasting system requires the user to provide additional inputs to extract the objects of interest using a GrabCut method, while our method does not need any additional inputs.

Compared with image melding. Our method brings the user two benefits. First, it saves the user much selection work. As shown in Fig. 8, in which the results of image melding are copied from [4], with image melding the user must carefully select the objects of interest and the blending region, as in Fig. 8(b). With our method, we only need a simple outline of the source image patch to paste it into the target image, as shown in Fig. 8(c), which is intuitive and easier to use. Second, as discussed in [4], the texture interpolation between very different textures in a large background margin around the object may result in artifacts, as shown in Fig. 8(d), where some appearances of the source image in the blending region may be preserved in the final result, smudging the final composite.
With our method, however, the wave properties of the desert are kept fairly well around the child, as shown in Fig. 8(e). Besides, our method is much faster than image melding, so it is more suitable for interactive image composition.

5.1 Performance

In our system, intent-aware image cloning runs in two stages. Once a source patch is selected, the pixels are grouped into clusters. For each pixel, we search for its three nearest clusters and compute the affinity values between the pixel and those clusters. These are computed in the preprocessing stage and reused repeatedly in later image cloning. When the source patch is pasted into a target image, the blending stage starts. Here, we compute the discrepancies f* − g on the boundary, the membrane values for interpolation, and the final composite using Eqs. (10)-(15). In Table 1, we list the performance statistics of our intent-aware image cloning method (preprocessing, compositing), not including disk I/O time. The experimental tests were performed on a personal computer with an Intel Q9400 CPU and 4 GB RAM. The statistics show that our cloning method can perform image composition instantly. Clearly, for a source image patch used repeatedly, we can execute image composition in real time. The timing performance shows the feasibility of using our method for online and interactive media applications.

Limitations. Our method is based on the assumption that the objects of interest have different appearances from the

overall appearance of the boundary, so that the over-included region can be approximated to reduce its effect in the composite, by which structure conflicts can be suppressed to improve image cloning. This works very well on a wide variety of source and target images. However, the assumption may not always hold. When the objects of interest have some parts with appearances similar to the boundary, our method will fail to separate those parts from the over-included region. As illustrated in Fig. 10, the dog has colors quite similar to the turbid water, so our method may mistake some parts of the dog as belonging to the over-included region and have them disappear in the composite (b). This problem needs further study. Some possible remedies may be investigated, such as having the user draw some strokes to refine the obtained objects of interest, or incorporating advanced saliency detection techniques to improve the extraction of the objects of interest.

Fig. 8 Source and target image (a). The initialization work of image melding (b), where the user must carefully identify the kid and mark the blending region in magenta. The inputs of our method (c), a simple outline of the source patch and the target image. The result by image melding has the desert around the kid in a mess (d). Our result has a very realistic look (e)

Fig. 9 When the source patches are pasted onto the target image (a), Poisson image editing generates artifacts around the objects of interest (b), and our results have no structural artifacts and can have the tone adjusted freely with different parameters s for a good look (d-f)

Table 1 Performance statistics for intent-aware image cloning

Examples      | Cloned pixels | Boundary pixels | Preprocessing time (ms) | Run time (ms)
warship       | 107,231       | 1,…             | …                       | …
insect        | 139,309       | 1,…             | …                       | …
fireworks     | 418,080       | 2,…             | …                       | …
footprints    | 136,024       | 1,…             | …                       | …
kid (Fig. 8)  | 94,441        | 1,…             | …                       | …
kid2 (Fig. 9) | 73,…          | …               | …                       | …
swimming      | 85,651        | 1,…             | …                       | …
Another problematic case is when some objects of non-interest have very different appearances from the boundary. These objects may be mistaken as objects of interest and lead to structure conflicts. As such objects have very different appearances from the boundary, they are easy to recognize, and so can be removed using image inpainting tools once they are identified as non-interest. However, how to efficiently distinguish these objects from the objects of interest needs further study.

6 Summary

Gradient domain techniques are well known for user-friendly and seamless blending of images in image composition. However, existing techniques cannot handle well the structure conflicts between the source and target images, leading to artifacts in cloning composites and thus preventing

practical use. In this paper, we present a novel intent-aware cloning method to address the challenge of structure conflicts in image cloning. Based on the observation that the structure conflicts which result in artifacts are mostly in the over-included region, we design the cloning method to suppress such structure conflicts there and remove artifacts from the composite, efficiently using the user's intent in outlining the source image patch. We then develop an interpolant for blending images, which incorporates the target image to efficiently suppress structure conflicts in the cloning composite. Furthermore, our processes are accelerated by grouping pixels into clusters and using hierarchical sampling techniques. Experiments show that our intent-aware method can produce image cloning in real time, without structure conflicts, with a natural and high-quality look. In the near future, we will extend our cloning method to the latest GPUs and to fast video processing for online/mobile applications.

Fig. 10 In the source image patch, the swimming dog has some parts with colors very similar to the turbid water. In this case, our method may fail to distinguish these parts from the over-included region, making them rendered as transparent (b)

Acknowledgements The work is supported by the Knowledge Innovation Program of the Chinese Academy of Sciences, the National Social Science Foundation of China (Project No. 12AZD118), RGC research grants (ref , ), and a UGC direct grant for research (no ).

References

1. Agarwala, A.: Efficient gradient-domain compositing using quadtrees. In: SIGGRAPH '07: ACM SIGGRAPH, p. 94. ACM, New York (2007)
2. An, X., Pellacini, F.: AppProp: all-pairs appearance-space edit propagation. In: SIGGRAPH '08: ACM SIGGRAPH. ACM, New York, NY, USA (2008)
3. Chen, T., Cheng, M.M., Tan, P., Shamir, A., Hu, S.M.: Sketch2Photo: Internet image montage.
ACM Trans. Graph. 28(5), 124:1 124:10 (2009) 4. Darabi, S., Shechtman, E., Barnes, C., Goldman, D.B., Sen, P.: Image melding: combining inconsistent images using patch-based synthesis. ACM Trans. Graph. 31(4), 82:1 82:10 (2012) (Proceedings of SIGGRAPH 2012) 5. Ding, M., Tong, R.F.: Content-aware copying and pasting in images. Vis. Comput. 26(6 8), (2010) 6. Du, H., Jin, X.: Object cloning using constrained mean value interpolation. Vis. Comput. 29(3), (2013) 7. Farbman, Z., Hoffer, G., Lipman, Y., Cohen-Or, D., Lischinski, D.: Coordinates for instant image cloning. ACM Trans. Graph. 28(3), 1 9 (2009) 8. Gastal, E.S.L., Oliveira, M.M.: Shared sampling for real-time alpha matting. Comput. Graph. Forum 29(2), (2010) 9. Jeschke, S., Cline, D., Wonka, P.: A GPU Laplacian solver for diffusion curves and Poisson image editing. In: SIGGRAPH Asia 09: ACM SIGGRAPH Asia, pp ACM, New York (2009) 10. Jia, J., Sun, J., Tang, C.K., Shum, H.Y.: Drag-and-drop pasting. In: SIGGRAPH 06: ACM SIGGRAPH, pp ACM, New York (2006) 11. Kazhdan, M., Hoppe, H.: Streaming multigrid for gradientdomain operations on large images. ACM Trans. Graph. 27(3), 1 10 (2008) 12. Levin, A., Lischinski, D., Weiss, Y.: A closed-form solution to natural image matting. IEEE Trans. Pattern Anal. Mach. Intell. 30(2), (2008) 13. Levin, A., Rav-Acha, A., Lischinski, D.: Spectral matting. IEEE Trans. Pattern Anal. Mach. Intell. 30(10), (2008) 14. McGuire, M., Matusik, W., Pfister, H., Hughes, J.F., Durand, F.: Defocus video matting. In: SIGGRAPH 05: ACM SIGGRAPH, pp ACM, New York (2005) 15. Pérez, P., Gangnet, M., Blake, A.: Poisson image editing. ACM Trans. Graph. 22(3), (2003) 16. Smith, A.R., Blinn, J.F.: Blue screen matting. In: SIGGRAPH 96: Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, pp ACM, New York (1996) 17. Sun, J., Jia, J., Tang, C.K., Shum, H.Y.: Poisson matting. In: SIG- GRAPH 04: ACM SIGGRAPH, pp ACM, New York (2004) 18. 
Sunkavalli, K., Johnson, M.K., Matusik, W., Pfister, H.: Multiscale image harmonization. ACM Trans. Graph. 29(4), 125:1 125:10 (2010) (Proc. ACM SIGGRAPH) 19. Szeliski, R.: Locally adapted hierarchical basis preconditioning. In: SIGGRAPH 06: ACM SIGGRAPH, pp ACM, New York (2006) 20. Wang, J., Cohen, M.F.: Simultaneous matting and compositing. In: ACM SIGGRAPH 2006 Sketches, SIGGRAPH 06. ACM, New York (2006) 21. Xie, Z.F., Shen, Y., Ma, L.Z., Chen, Z.H.: Seamless video composition using optimized mean-value cloning. Vis. Comput. 26(6 8), (2010) 22. Zhang, Y., Tong, R.: Environment-sensitive cloning in images. Vis. Comput. 27(6 8), (2011) Xiaohui Bie received his bachelor degree in engineering mechanics form the Huazhong University of Science and Technology. He is currently a Ph.D. student at the Institute of Software, Chinese Academy of Sciences. His main research interests are computer vision and interactive image/video editing.

10 X. Bie et al. Wencheng Wang received the Ph.D. degree from the Institute of Software, Chinese Academy of Sciences in 1998, and been working as a professor at this institute from His research interests include computer graphics, visual analytics, and expressive rendering and editing. Haoda Huang received his Master degree from the Institute of Software, Chinese Academy of Sciences in After that, he joined Microsoft Research Asia and work there for about 5 years. He is currently a software engineer in Google Mountain View. His research interests include facial performance capture, hand deformation, image analysis and editing. Hanqiu Sun received the M.S. degree in electrical engineering from the University of British Columbia and the Ph.D. degree in computer science from the University of Alberta, Canada. She is now an associate professor in the Department of Computer Science and Engineering, Chinese University of Hong Kong (CUHK). Her current research interests include virtual and augmented reality, interactive graphics/animation, hypermedia, telemedicine, mobile image/video processing and navigation and realistic haptics simulation. Minying Zhang received his B.E. degree from Jilin University, P.R. China, in He is currently working toward the Ph.D. degree in computer science at the State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences. His research interests include image and video processing and computer graphics.
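The interpolation mentioned in the summary replaces the Poisson solve with a direct interpolation of boundary differences, in the spirit of mean-value coordinates for instant cloning. The sketch below shows plain mean-value cloning on a grayscale image only; the paper's actual interpolant additionally adapts to the target image in the over-included region, which is omitted here. Function names and the pixel-list inputs are illustrative, not taken from the paper.

```python
import numpy as np

def mvc_weights(x, boundary):
    """Mean-value coordinates of point x w.r.t. a closed polygon
    'boundary' (an (n, 2) array of ordered boundary positions)."""
    d = boundary - x                        # vectors from x to boundary points
    r = np.linalg.norm(d, axis=1)           # distances to boundary points
    ang = np.arctan2(d[:, 1], d[:, 0])      # direction of each boundary point
    a = np.diff(np.append(ang, ang[0]))     # alpha_i between p_i and p_{i+1}
    a = (a + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
    t = np.tan(a / 2.0)
    w = (np.roll(t, 1) + t) / r             # (tan(a_{i-1}/2)+tan(a_i/2)) / r_i
    return w / w.sum()

def mvc_clone(source, target, boundary, interior):
    """Blend 'source' into 'target' (grayscale float images): each interior
    pixel gets the source value plus the mean-value interpolation of the
    boundary differences target - source, giving a seamless composite."""
    bf = boundary.astype(float)
    diffs = np.array([target[py, px] - source[py, px] for px, py in boundary])
    out = target.copy()
    for px, py in interior:
        lam = mvc_weights(np.array([px, py], float), bf)
        out[py, px] = source[py, px] + lam @ diffs
    return out
```

For speed, the paper groups pixels into clusters and samples the boundary hierarchically; the O(|interior| x |boundary|) loop above is the naive version that such schemes accelerate.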


More information

Silhouette Extraction with Random Pattern Backgrounds for the Volume Intersection Method

Silhouette Extraction with Random Pattern Backgrounds for the Volume Intersection Method Silhouette Extraction with Random Pattern Backgrounds for the Volume Intersection Method Masahiro Toyoura Graduate School of Informatics Kyoto University Masaaki Iiyama Koh Kakusho Michihiko Minoh Academic

More information

An Adaptive Threshold LBP Algorithm for Face Recognition

An Adaptive Threshold LBP Algorithm for Face Recognition An Adaptive Threshold LBP Algorithm for Face Recognition Xiaoping Jiang 1, Chuyu Guo 1,*, Hua Zhang 1, and Chenghua Li 1 1 College of Electronics and Information Engineering, Hubei Key Laboratory of Intelligent

More information

Color retargeting: interactive time-varying color image composition from time-lapse sequences

Color retargeting: interactive time-varying color image composition from time-lapse sequences Computational Visual Media DOI 10.1007/s41095-xxx-xxxx-x Vol. x, No. x, month year, xx xx Research Article Color retargeting: interactive time-varying color image composition from time-lapse sequences

More information

Interaction of Fluid Simulation Based on PhysX Physics Engine. Huibai Wang, Jianfei Wan, Fengquan Zhang

Interaction of Fluid Simulation Based on PhysX Physics Engine. Huibai Wang, Jianfei Wan, Fengquan Zhang 4th International Conference on Sensors, Measurement and Intelligent Materials (ICSMIM 2015) Interaction of Fluid Simulation Based on PhysX Physics Engine Huibai Wang, Jianfei Wan, Fengquan Zhang College

More information

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation

Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial Region Segmentation IJCSNS International Journal of Computer Science and Network Security, VOL.13 No.11, November 2013 1 Moving Object Segmentation Method Based on Motion Information Classification by X-means and Spatial

More information

A Feature Point Matching Based Approach for Video Objects Segmentation

A Feature Point Matching Based Approach for Video Objects Segmentation A Feature Point Matching Based Approach for Video Objects Segmentation Yan Zhang, Zhong Zhou, Wei Wu State Key Laboratory of Virtual Reality Technology and Systems, Beijing, P.R. China School of Computer

More information

Image Inpainting. Seunghoon Park Microsoft Research Asia Visual Computing 06/30/2011

Image Inpainting. Seunghoon Park Microsoft Research Asia Visual Computing 06/30/2011 Image Inpainting Seunghoon Park Microsoft Research Asia Visual Computing 06/30/2011 Contents Background Previous works Two papers Space-Time Completion of Video (PAMI 07)*1+ PatchMatch: A Randomized Correspondence

More information

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness

Visible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness Visible and Long-Wave Infrared Image Fusion Schemes for Situational Awareness Multi-Dimensional Digital Signal Processing Literature Survey Nathaniel Walker The University of Texas at Austin nathaniel.walker@baesystems.com

More information

Tomorrow s Photoshop Effects

Tomorrow s Photoshop Effects Tomorrow s Photoshop Effects Johannes Borodajkewycz TU Wien Figure 1: Examples for two of the techniques presented in this paper: Interactive image completion with perspective correction is able to fill

More information

Image Stitching using Watersheds and Graph Cuts

Image Stitching using Watersheds and Graph Cuts Image Stitching using Watersheds and Graph Cuts Patrik Nyman Centre for Mathematical Sciences, Lund University, Sweden patnym@maths.lth.se 1. Introduction Image stitching is commonly used in many different

More information

Tiled Texture Synthesis

Tiled Texture Synthesis International Journal of Information & Computation Technology. ISSN 0974-2239 Volume 4, Number 16 (2014), pp. 1667-1672 International Research Publications House http://www. irphouse.com Tiled Texture

More information

Multi-View Stereo for Static and Dynamic Scenes

Multi-View Stereo for Static and Dynamic Scenes Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.

More information

Image Classification Using Wavelet Coefficients in Low-pass Bands

Image Classification Using Wavelet Coefficients in Low-pass Bands Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August -7, 007 Image Classification Using Wavelet Coefficients in Low-pass Bands Weibao Zou, Member, IEEE, and Yan

More information

An Algorithm for Seamless Image Stitching and Its Application

An Algorithm for Seamless Image Stitching and Its Application An Algorithm for Seamless Image Stitching and Its Application Jing Xing, Zhenjiang Miao, and Jing Chen Institute of Information Science, Beijing JiaoTong University, Beijing 100044, P.R. China Abstract.

More information

CONTENT ADAPTIVE SCREEN IMAGE SCALING

CONTENT ADAPTIVE SCREEN IMAGE SCALING CONTENT ADAPTIVE SCREEN IMAGE SCALING Yao Zhai (*), Qifei Wang, Yan Lu, Shipeng Li University of Science and Technology of China, Hefei, Anhui, 37, China Microsoft Research, Beijing, 8, China ABSTRACT

More information

Contour-Based Large Scale Image Retrieval

Contour-Based Large Scale Image Retrieval Contour-Based Large Scale Image Retrieval Rong Zhou, and Liqing Zhang MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems, Department of Computer Science and Engineering, Shanghai

More information

Learning based face hallucination techniques: A survey

Learning based face hallucination techniques: A survey Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)

More information