Interactive Image Matting for Multiple Layers

Size: px
Start display at page:

Download "Interactive Image Matting for Multiple Layers"

Transcription

1 Interactive Image Matting for Multiple Layers Dheeraj Singaraju René Vidal Center for Imaging Science, Johns Hopkins University, Baltimore, MD 118, USA Abstract Image matting deals with finding the probability that each pixel in an image belongs to a user specified object or to the remaining background. Most existing methods estimate the mattes for two groups only. Moreover, most of these methods estimate the mattes with a particular bias towards the object and hence the resulting mattes do not sum up to 1 across the different groups. In this work, we propose a general framework to estimate the alpha mattes for multiple image layers. The mattes are estimated as the solution to the Dirichlet problem on a combinatorial graph with boundary conditions. We consider the constrained optimization problem that enforces the alpha mattes to take values in [0, 1] and sum up to 1 at each pixel. We also analyze the properties of the solution obtained by relaxing either of the two constraints. Experiments demonstrate that our proposed method can be used to extract accurate mattes of multiple objects with little user interaction. 1. Introduction Image matting refers to the problem of assigning to each pixel in an image, a probabilistic measure of whether it belongs to a desired object or not. This problem finds numerous applications in image editing, where the user is interested only in the pixels corresponding to a particular object, rather than in the whole image. In such cases, one prefers assigning soft values to the pixels rather than a hard classification. This is because there can be ambiguous areas where one cannot make clear cut decisions about the pixels membership. Matting therefore deals with assigning a partial opacity value α [0, 1] to each pixel, such that pixels that definitely belong to the object or background are assigned a value α = 1 or α = 0 respectively. 
More specifically, the matting problem tries to estimate the value α i at each pixel i, such that its intensity I i can be expressed in terms of the true foreground and background intensities F i and B i as I i = α i F i + (1 α i )B i. (1) Note that in (1), the number of unknowns is more than the number of independent equations, thereby making the matting problem ill posed. Therefore, matting algorithms typically require some user interaction that specifies the object of interest and the background, thereby embedding Figure 1. Typical example of the image matting problem. Left: Given image with a toy object. Right: Desired matte of the toy. some constraints in the image. User interaction is provided in the form of a trimap that specifies the regions that definitely belong to the object and the background, and an intermediate region where the mattes need to be estimated. Early methods use the trimap to learn intensity models for the object as well as the background [, 11, 3]. These learned intensity models are subsequently used to predict the alpha mattes in the intermediate region. Also, Sun et al. [1] use the image gradients to estimate the mattes as the solution to the Dirichlet problem with boundary conditions. A common criticism of these methods is that they typically require an accurate trimap with a fairly narrow intermediate region to produce good results. Moreover, they give erroneous results if the background and object have similar intensity distributions or if the image intensities do not vary smoothly in the intermediate region. These issues have motivated the development of methods that extract the mattes for images with complex intensity variations. One line of work employs popular segmentation techniques for the problem of image matting [10, 5, 1]. However, these methods enforce the regions in the image to be connected. This is a major limitation, as these methods cannot deal with images such as Figure 1, where the actual matte has holes. 
More recent work addresses the aforementioned issues and lets the user extract accurate mattes for challenging images with low levels of interaction [14, 8, 6, 16, 15, 13]. However, a fundamental limitation of these methods is that they extract mattes for two groups only. The only existing work dealing with multiple groups is the Spectral Matting algorithm proposed by Levin et al. [9]. They propose to use the eigenvectors of the so-called Matting Laplacian matrix to obtain the mattes for several layers. In reality, the algorithm uses multiple eigenvectors to estimate the matte for a single object. Moreover, the experiments are restricted to extracting the mattes for two layers only.

2 Two additional areas of concern with the aforementioned methods are the following. First, some methods do not enforce the estimated mattes to take values in [0, 1]. If the matte at a pixel does not lie in [0, 1], its original interpretation as a probabilistic measure breaks down. Second, most existing methods exhibit a bias towards the object. In particular, consider repeating the estimation of mattes after switching the labels of the object and background in the trimap. Then, the mattes estimated at a pixel do not necessarily sum up to 1 across both trials. Consequently, one needs to postprocess the estimated mattes in order to enforce these constraints. One would like to devise a method such that these constraints are automatically satisfied, because postprocessing can degrade the quality of the mattes. Paper contributions: In this work, we present what is to the best of our knowledge, the first method that can be used to extract accurate mattes of multiple layers in an image. We demonstrate that the algorithms of [8] and [15] that are designed for layers, admit an equivalent electrical network construction that provides a unifying framework for analyzing them. We show how this framework can be used to generalize the algorithms to estimate the mattes for multiple image layers. We pose the matting problem as a constrained optimization that enforces that the alpha mattes at each pixel (a) take values in [0, 1], and (b) sum up to 1 across the different image layers. Subsequently, we discuss two cases where either of these two constraints can be relaxed and we analyze the properties of the resulting solutions. Finally, we present quantitative and qualitative evaluations of our algorithms performance on a database of nearly 5 images.. Image Matting for Two Groups In this section, we discuss the previous algorithms of Levin et al. [8] and Wang et al. [15] that approach the matting problem via optimization on a combinatorial graph. 
We describe the construction of equivalent electrical networks, the physics of which exhibit the same optimization scheme as in these methods. In this process, we provide a unifying framework for studying these algorithms and appreciating the fine differences between them. Before delving into the details, we need to introduce some notation. A weighted graph G consists of a pair G = (V, E) with nodes v i V and undirected edges e ij E. The nodes on the graph typically correspond to pixels in the image. An edge that spans two vertices v i and v j is denoted by e ij. The neighborhood of a node v i is given by all the nodes v j that share an edge with v i, and is denoted by N (v i ). Each edge is assigned a value w ij that is referred to as its weight. Since the edges are undirected, we have w ij = w ji. These edge weights are used to define the degree d i of a node v i as d i = v j N (v i) w ij. One can use these definitions to construct a Laplacian matrix L for the graph as L = D W, where D = diag(d 1, d,..., d V ) and W is a V V matrix whose (i, j) entry is given by the edge weight w ij. By construction, the Laplacian matrix has the constant vector of 1s in its null space. In fact, when the graph is connected, the vector of 1s is the only null vector. The algorithms in [8] and [15] require the user to mark representative seed nodes for the different image layers. These seeds embed hard constraints on the mattes and are subsequently used to predict the mattes of the remaining unmarked nodes. The set S V contains the locations of the nodes marked as seeds and the set U V contains the locations of the unmarked nodes. By construction S U = and S U = V. We further split the set S into the sets O S and B S that contain the locations of the seeds for the foreground object and the background, respectively. By construction, we have O B = and O B = S. The alpha matte at the pixel v i is denoted as α i and represents the probability that v i belongs to the object. 
The mattes of all pixels in the image can be stacked into a vector as α = [α U α S ], where α U and α S correspond to the mattes of the unmarked and marked pixels, respectively..1. Closed Form Solution for Image Matting Levin et al. [8] assume that the image intensities satisfy the line color model. According to this model, the intensities of the object and background vary linearly in small patches of the image. More specifically, let (Ij R, IB j, IG j ) denote the red, blue and green components of the intensity I j R 3 at the pixel v j. The line color model assumes that there exist constants (a R i, ab i, ag i, b i) for every pixel v i V, such that the mattes α j of the pixels v j in a small window W(v i ) around v i can be expressed as α j = a R i I R j + a B i I B j + a G i I G j + b i, v j W i. () The algorithm of [8] then proceeds by minimizing the fitting error with respect to this model in every window, while enforcing the additional constraint that the model parameters vary smoothly across the image. In particular, for some ɛ 0, the algorithm aims to minimize the cost function J(α, a, b) = [ αj v i V j W(v i)( a c ii c ) +ɛ ] j b i a c i. c c (3) The parameter ɛ 0 prevents the algorithm from estimating the trivial model (a R i, ab i, ag i ) = (0, 0, 0). Note that the quadratic cost function J(α, a, b) involves two sets of unknowns, namely, the mattes and the parameters of the line color model. If one assumes that the mattes α are known, the model parameters {(a R i, ab i, ag i, b i)} N i=1 can be linearly estimated in terms of the image intensities and the mattes by minimizing (3). Substituting these model parameters back in to (3) makes the cost J a quadratic function of the mattes. In particular, the cost function can be expressed as J(α) = α Lα, where L is a V V Laplacian matrix. This Laplacian is referred to as the Matting Lapla-

3 cian and essentially captures the local statistics of the intensity variations in a small window around each pixel [8]. We shall now introduce some terms needed for defining the Matting Laplacian. Consider a window W(v k ) around a pixel v k V and let the number of pixels in this window be nk. Let the mean and the covariance of the intensities of the pixels in W(v k ) be denoted as µk R3 and Σk R3 3. Denoting an m m identity matrix as Im, [8] showed that the Matting Laplacian can be constructed from the edge weights w ij that are defined as n 1 1+(I i µk ) (Σk+ I3 ) 1 (I j µk ). (4) k nk k vi,v j W(v k ) Note that these weights can be positive or negative. The neighborhood structure however ensures that the graph is connected. Recall that the user marks some pixels in the image as seeds that are representative of the object and the background. To this effect, we define a vector s R S, such that an entry of s is set to 1 or 0 depending on whether it corresponds to the seed of an object or background, respectively. Also, for some λ 0, we define a matrix Λ =λi S. The mattes in the image are then estimated as α = argmin wij (αi αj ) + λ(αi si ) α eij E i S = argmin α Lα + λ(αs s) (αs s) (5) α αu B 0 LU = argmin αu αs s B LS + Λ Λ αs. α s 0 Λ Λ Noticing that the expression in (3) is non-negative, [8] showed that the Matting Laplacian is positive semi-definite. This makes the cost function in (5) convex and therefore the optimization has a unique minimizer that can be linearly estimated in closed form as αs = [A + Λ] 1 Λs, αu = L 1 U B αs and 1 = L 1 Λs, U B [A + Λ] (6) where A = LS BL 1 U B. The algorithm of Levin et al. [8] employs the above optimization by choosing a large finite valued λ, while the algorithm of Wang et al. [15] works in the limiting case by choosing λ =. We note that [15] employs a modification of the Matting Laplacian. However, it suffices for the purpose of our analysis that the modified matrix is also a positive semi-definite Laplacian matrix... 
Electrical Networks for Image Matting It is interesting to note that there exists an equivalent electrical network that solves the optimization problem in (5). In particular, one constructs a network as shown in Figure, such that each node in the graph associated with the image is equivalent to a node on the network. O O i j i j B wij B Figure. Equivalent construction of combinatorial graphs and electrical networks. The edge weights correspond to the conductance values of resistors connected between neighboring nodes, i.e. 1 Rij = wij. Since the edge weights wij are not all positive, one can argue that this system might not be physically realizable. However, we note that the network is constrained to dissipate positive energy due to the positive semi-definiteness of the Laplacian. Therefore, we can think of the image as a real resistive load that dissipates energy. The seeded nodes are connected to the network ground and unit voltage sources by resistive impedances of value λ1. In particular, the background seeds are connected to the ground and the object seeds are connected to the unit voltage sources. Hence, we have s = 1V at the voltage sources and s = 0V at the ground, where all measurements are with respect to the network s ground. From network theory, we know that the potentials x at the nodes in the network distribute themselves such that they minimize the energy dissipated by the network. The potentials can therefore be estimated as 1 x = argmin (xi xj ) + λ(xi si ). (7) Rij x eij E i S We note that this is exactly the same expression as in (5). Therefore, if one were to construct the equivalent network and measure the potentials at the nodes they would give us the required alpha mattes. In what follows, we shall show that as λ, the fraction (η) of work done by the electrical sources that is used to drive the load is maximized. 
Note that one can use the positive definiteness of L and the connectedness of the graph to show that the matrix A is positive definite. Now, consider the singular value decomposition of the matrix A. We have A = U ΣA U, where U = [u1... u S ] and ΣA = diag(σ1,..., σ S ), σ1 σ σ S 0. We can then explicitly evaluate η as x Lx Eload = (8) η= Etotal x Lx + (xs s) Λ(xS s) = S σi (u i s) σi ( + 1) i=1 λ (s Λ) [A + Λ] 1 A[A + Λ] 1 (Λs) =. S (s Λ) Λ 1 [A + Λ] 1 (Λs) σi (u s) i σi ( + 1) λ i=1

4 λ Since λ 0 and i = 1,..., S, σ i 0, we have that 1 + σi λ 1. Given this observation, it can be verified that λ (0, ), η < 1 and lim η = 1. The fractional energy delivered to the load is therefore maximized by setting λ =. In the network, this limiting case corresponds to setting the values of impedances between the sources and the load to zero. In terms of the image, this forces the mattes at the pixels marked by the scribbles to be 1 for the foreground object and 0 for the background. The algorithm of [8] solves the optimization problem in (5) by setting λ to be a large finite valued number. Since there is always a finite potential drop across the resistors connecting the voltage source and the grounds to the image, we note that the mattes (potentials) at the seeds are close to the desired values but not equal. The algorithm of Wang et al. [15] corresponds to the limiting case of λ = and forces the mattes at the seeds to be equal to the desired values. The numerical framework of the latter is referred to as the solution to the combinatorial Dirichlet problem with boundary conditions. Based on the argument given above, we favor the optimization of [15] over that of [8] because it helps in utilizing the scribbles to the maximum extent to provide knowledge about the unknown mattes. 3. Image Matting for Multiple Groups In this section, we show how to solve the matting problem for n image layers, by using generalizations of the methods discussed in Section. Essentially, we assume that the intensity of each pixel in the query image can be expressed as the convex linear combination of the intensities at the same pixel location in the n image layers. We are then interested in estimating partial opacity values at each pixel as the probability of belonging to one of the image layers. In what follows, we denote the intensity of a pixel v i V in the query image by I i. Similarly, the intensity of a pixel v i V in the j th image layer is denoted by F j i, where 1 j n. 
The alpha matte at a pixel v i V with respect to the j th image layer is denoted by α j i. Given these definitions, the matting problem requires us to estimate alpha mattes {α j i }n and the intensities {F j i }n of the n image layers at each pixel v i V, such that we have n I i = α j i F j i, s.t. α j i =1, and {αj i }n [0, 1]. (9) We shall pose this matting problem as an optimization problem on combinatorial graphs. While we retain most of the notations from Section, we need to introduce new notation for the sets that contain the seeds locations. In particular, we split the set S that contains the locations of all the seeds into the sets R 1, R,..., R n, where R i contains the seed locations for the i th layer. By construction, we have n i=1 R i = S and 1 i < j n, R i R j =. Algorithm 1 (Image matting for n image layers) 1: Given an image, construct the matting Laplacian L for the image as described in Section. : For each image layer j {1,, n}, fix the mattes at the seeds as α j i = 1 if v i R j and α j i = 0 if v i S \ R j. 3: Estimate the alpha mattes {α j U }n for the unmarked pixels with respect to the n image layers as [ {α j U }n =argmin α j Lα j] (10) such that {α j U }n (a) j {1,..., n}, 0 α j U 1, and (b) = 1. (11) α j U We propose to solve this problem by minimizing the sum of the cost functions associated with the estimation of mattes for each image layer. Unlike [8] and [15], we propose to impose constraints that the mattes sum up to 1 at each pixel, and take values in [0, 1]. In particular, we propose that the matting problem can be solved using Algorithm 1. Notice that this is a standard optimization of minimizing a quadratic function subject to linear constraints. Recall from Section that the cost function in (10), is a convex function of the alpha mattes. Also, notice that the set of feasible solutions of the mattes {α j U }n is compact and convex. Therefore, the optimization problem posed in Algorithm 1 is guaranteed to have a unique solution. 
However, this optimization can be computationally cumbersome due to the large number of unknown variables. In what follows, we shall discuss relaxing either one of the constraints and analyze its effect on the solution to the optimization problem Image matting for multiple layers, without constraining the sum of the mattes at a pixel We analyze the properties of the mattes obtained by solving the optimization problem of (10) without enforcing the mattes to take values between 0 and 1. In particular, Theorem 1 states an important consequence of this case. Theorem 1 The alpha mattes obtained as the solution to Algorithm 1 without imposing the constraint that they take values between 0 and 1, are naturally constrained to sum up to 1 at every pixel. Proof. Notice that the set of feasible solutions for the mattes is convex, even when we do not impose constraint (a). The optimization problem is hence guaranteed to have a unique solution. Now, we know that this solution must satisfy the Karusch-Kuhn-Tucker (KKT) conditions [7]. In particular, the KKT conditions guarantee the existence of a vector Γ R U such that the solution α U satisfies j {1,..., n}, L U α j U + B α j S + Γ = 0. (1)

5 The entries of Γ are the Lagrange multipliers for the constraints that the mattes sum up to one at each pixel. We now assume that the mattes sum up to one, and show that Γ= 0. This is equivalent to proving that the constraint (b) in (11) is automatically satisfied without explicitly enforcing it. Assume that n = 1. Also, recall that by construction, n αj S αj U = 1. Now summing up the KKT conditions in (1) across all the image layers gives us [L U α j U + B α j S + Γ] = L U 1 + B 1 + nγ = 0 (13) Recall that the vector of 1s lies in the null space of L, and hence we have L U 1 + B 1 = 0 Therefore, we can conclude from (13) that Γ = 0. This implies that the solution automatically satisfies the constraint that the mattes sum up to 1 at each unmarked pixel. In fact, if one does not enforce constraint (a) in (11), then the solution is the same, irrespective of whether constraint (b) is enforced or not. This is an important result which follows intuitively from the well known Superposition Theorem in electrical network theory [4]. It is important to notice that the estimation of mattes for the i th image layer is posed as the solution to the Dirichlet problem with boundary conditions. In fact, this is a simple extension of [15], where the i th image layer is treated as the foreground and the rest of the layers are treated as the background. Theorem 1 guarantees that the estimation of the mattes for any n 1 of the n image layers automatically determines the mattes for the remaining layer. Wang et al. [15] claim that this optimization scheme finds analogies in the theory of random walks. Therefore, a standard result states that the mattes are constrained to lie between 0 and 1. However, this result is derived specifically for graphs with positive edge weights. This is not true for the graphs constructed for the matting problem since the graphs edge weights are both positive and negative. 
In fact, Figure 3 gives an example where the system has a positive semi-definite Laplacian matrix, but the obtained potentials (mattes) do not all lie in [0, 1]. Note that x c takes the value 4 3 on switching the locations of the voltage source and the ground. An easy fix employed by [8] and [15] to resolve this issue, is to clip all the alpha values such that they take values in [0, 1]. This scheme might give results that are visually pleasing, but they might not obey the KKT conditions. Moreover, the postprocessed mattes at each pixel might not sum up to 1 across all the layers anymore. 3.. Image matting for multiple layers, without constraining the limits of the mattes In this section, we discuss the properties of the solution of Algorithm 1, when we do not enforce the constraint that the mattes sum up to 1 at each pixel. Before we discuss the general scheme for n layers, we consider the simple case of estimating the mattes for n = layers. In this case, the mattes at the unmarked pixels are estimated as the solution of the quadratic programming problem given by [ ] α j U = argmin α j U LU α j U + αj U BαS + α S L S α S, α j U subject to 0 α j U 1, for j = 1,. (14) Note that the set of feasible solutions for the mattes of the unmarked pixels is given by [0, 1] U, which is a compact convex set. Also, the objective function being minimized in (14) is a convex function of the alpha mattes. Hence, the optimization problem is guaranteed to have a unique minimizer. In this case, the KKT conditions guarantee the existence of U U diagonal matrices {Λ j 0 } and {Λj 1 }, with non-positive entries such that the solution α j U for the j th image layer satisfies L U α U + B α S + Λ 0 Λ 1 = 0, and Λ j 0 αj U = 0, Λj i (1 αj U ) = 0, j = 1,. (15) In what follows, we denote the (i, i) diagonal entry of {Λ j k }, k=0,1 as λj ki 0. 
Notice that if the matte αj i at the i th unmarked pixel with respect to the j th image layer lies between 0 and 1, then the KKT conditions state that the associated slack variables λ j 0i and λj 1i are equal to zero. Consequently, the i th row of the first equation in (15) gives us the result that α j i = P v k N (v k ) w ikα j k Pv k N (v i ) w ik. In particular, this implies that α i can be expressed as the weighted average of the mattes of the neighboring pixels. But, when α j i = 0 or α j i = 1, we see that αj i is not the weighted average of the mattes at the neighboring pixels and this is accounted for by the slack variables λ j 0i and λj 1i. Recall from Section 3.1 that the methods of [8] and [15] clip the estimated mattes to take values between 0 and 1, when necessary. This step is somewhat equivalent to using such slack variables. However, one also needs to re-adjust the alpha mattes at the unmarked pixels that lie in the neighborhoods of the pixels whose mattes are clipped. In particular, they need to be adjusted so that they still satisfy the KKT conditions. However, this step is neglected in the methods of [8] and [15]. L = Figure 3. Example where all the potentials do not lie in [0, 1].

6 We now consider the problem of solving this constrained optimization for n image layers. In particular, we want to analyze whether the sum of the mattes at each pixel is constrained to be 1 or not. Theorem states an important property of this optimization that the sum of mattes is enforced to be 1, only when n = and not otherwise. Theorem The alpha mattes obtained as the solution to Algorithm 1 without imposing the constraint that they sum up to 1 at each pixel, are in fact naturally constrained to sum up to 1 at each pixel when n =, but not when n. Proof. Consider the following optimization problem for estimating the mattes at the unmarked nodes. {α j U }n = argmin {α j U }n s.t. 0 {α j U }n 1, and [ ] α j U LU α j U + αj U Bα j S, α j U = 1. (16) As discussed earlier, the KKT conditions guarantee the existence of diagonal matrices {Λ j 0 }n i=1 and {Λj 1 }n i=1, and a vector Γ, such that the solution {α j U }n satisfies 1 j n, L U α j U + B α j S + Λj 0 Λj 1 + Γ = 0, and 1 j n, Λ j 0 αj u = 0, Λ j i (1 αj U ) = 0. (17) Γ acts as a Lagrange multiplier for the constraint that the mattes sum up to 1. We now assume that the mattes sum up to 1 at each pixel and inspect the value of Γ. Essentially, if Γ = 0, it means that the estimated mattes are naturally constrained to sum up to 1 at each pixel. Assuming that the mattes sum up to 1, we sum up the first expression in (17) across all the image layers to conclude that n Λ j 0 Λ j 1 + nγ = 0 (18) This relationship is derived using the fact that L U 1 + B 1 = 0. We now inspect the mattes at a pixel v i U and analyze all the possible solutions. Firstly, consider the case when the mattes {α j i }k of the first k < n 1 image layers are identically equal to 0 and the remaining mattes {α j i }n j=k+1 take values in (0, 1). We can always consider such reordering of the image layers without any loss of generality. We see that the variables {λ j 1i }n and {λj 0i }n j=k+1 are equal to zero. 
Substituting these values in (18), gives γ i + k λj 0i = 0. Therefore, γ i is non-negative, and is equal to zero if and only if the variables {λ j 0i }k are identically equal to zero. Since these variables are not constrained to be zero valued, we see that γ i is not equal to zero. It is precisely due to this case exactly that the mattes are not constrained to sum up to 1, when n. However, this case cannot arise when n =. In fact for n =, we see that both the mattes take values in either (0, 1) or in {0, 1}. In the first case, we see that λ 1 0i = λ 0i = λ1 1i = λ1 1i = 0. For the second case, assume without loss of generality that αi 1 = 0 and αi = 1. We then have λ 1 1i = λ 0i = 0. Moreover, we notice that setting λ 1 0i = λ 1i satisfies the KKT conditions. Hence, we can conclude that γ i = 0 in both cases. Consequently, the mattes are naturally constrained to sum up to 1 at each pixel, for n =. 4. Experiments We denote the variants of Algorithm 1 discussed in Sections 3.1 and 3. as Algorithm 1a and Algorithm 1b respectively. We now evaluate these algorithms performance on the database proposed by Levin et al. [9]. The database consists of images of three different toys, namely a lion, a monkey and a monster. Each toy is imaged against 8 different backgrounds and so the database has 4 distinct images. The database provides the ground truth mattes for the toys. All the images are of size and are normalized to take values between 0 and 1. In our experiments, the Matting Laplacian for each image is estimated as described in Section.1, using windows of size 3 3 and with ɛ = Quantitative Evaluation for n = Layers We present a quantitative evaluation of our algorithms on the described database. We follow the same procedure as in [9] and define the trimap by considering the pixels whose true mattes lie between 0.05 and 0.95 and dilating this region by 4 pixels. 
The algorithms performance is then evaluated by considering the SSD (sum of squared differences) error between the estimated mattes and the ground truth. Table 1 shows the statistics of performance. We see that the performance of both variants is on par. Recall that the numerical framework of Algorithm 1a is very similar to that of the closed form matting solution proposed by Levin et al. [8]. Consequently, both these algorithms would have similar performances. The analysis in [9] shows that the algorithm of [8] performs better than the matting algorithms of [1, 5, 14] and is outperformed only by the Spectral Matting algorithm. Therefore, we conclude that both variants of Algorithm 1 potentially match up to the state of the art algorithms. Algorithm 1a Algorithm 1b Toy Mean Median Toy Mean Median Lion Lion Monkey Monster Monkey Monster Table 1. Statistics of SSD errors in estimating mattes of layers.

7 4.. Qualitative Evaluation for n Layers We now present results of using Algorithm 1a to extract mattes for n image layers with low levels of user interaction. Since there is no ground truth data available for multiple layers, we validate the performance visually. In particular, we first estimate the matte for the multiple layers using Algorithm 1a. We then use the scheme outlined in [8] to reconstruct the intensities of each image layer. Finally, we show the contribution of each image layer to the query image. Figures 4 and 5 show the results for the extraction of mattes for 3 and 5 image layers, respectively. The scribbles are color coded to demarcate between the different image layers. We note that the results visually appear to be quite good. The results highlight a disadvantage of our method that the mattes can be erroneous if the intensities near the boundary of adjoining image layers are similar. This problem can be fixed by adding more seeds, and hence at an expense of increased levels of user interaction. 5. Conclusions We have proposed a constrained optimization problem to extract accurate mattes for multiple image layers with low levels of user interaction. We discussed two variants of the method where we relaxed the constraints of the problem and presented a theoretical analysis of the properties of the estimated mattes. Experimental evaluation of both these variants shows that they provide visually pleasing results. Acknowledgments. Work supported by JHU startup funds, by grants NSF CAREER ISS , NSF EHS , and ONR N , and by contract APL [3] Y.-Y. Chuang, B. Curless, D. Salesin, and R. Szeliski. A Bayesian Approach to Digital Matting. In CVPR (), pages 64 71, 001. [4] P. Doyle and L. Snell. Random walks and electric networks. Number in The Carus Mathematical Monographs. The Mathematical Society of America, [5] L. Grady, T. Schiwietz, S. Aharon, and R. Westermann. Random Walks for Interactive Alpha-Matting. In ICVIIP, pages 43 49, September 005. [6] Y. 
Guan, W. Chen,. Liang, Z. Ding, and Q. Peng. Easy Matting: A Stroke Based Approach for Continuous Image Matting. In Eurographics 006, volume 5, pages , 006. [7] H. W. Kuhn and A. W. Tucker. Nonlinear Programming. In nd Berkeley Symposium, pages , [8] A. Levin, D. Lischinski, and Y. Weiss. A Closed Form Solution to Natural Image Matting. IEEE Trans. on PAMI, 30(): 8 4, 008 [9] A. Levin, A. Rav-Acha, and D. Lischinski. Spectral Matting. In CVPR, Pages 1 8, 007. [10] C. Rother, V. Kolmogorov, and A. Blake. Grabcut : Interactive Foreground Extraction Using Iterated Graph Cuts. ACM Trans. Graph., 3(3): , 004. [11] M. A. Ruzon and C. Tomasi. Alpha Estimation in Natural Images. In CVPR, pages , 000. [1] J. Sun, J. Jia, C.-K. Tang, and H.-Y. Shum. Poisson Matting. ACM Trans. Graph., 3(3):315 31, 004. [13] J. Wang, M. Agrawala, and M. F. Cohen. Soft Scissors: An Interactive Tool for Realtime High Quality Matting. ACM Trans. Graph., 6(3):9, 007. [14] J. Wang and M. F. Cohen. An Iterative Optimization Approach for Unified Image Segmentation and Matting. In ICCV, pages , 005. [15] J. Wang and M. F. Cohen. Optimized Color Sampling for Robust Matting. In CVPR, Pages 1 8, 007. [16] J. Wang and M. F. Cohen. Simultaneous Matting and Compositing. In CVPR, Pages 1 8, 007. References [1]. Bai and G. Sapiro. A Geodesic Framework for Fast Interactive Image and Video Segmentation and Matting. In ICCV, pages 1 8, 007. [] A. Berman, A. Dadourian, and P. Vlahos. Method for Removing from an Image the Background Surrounding a Selected Object. US Patent no. 6,134,346, October 000. (a) Scribbles for marking seeds (b) αmonster Fmonster (c) αsky Fsky (d) αwater Fwater Figure 4. Extracting 3 image layers using Algorithm 1a. (a) Scribbles for marking seeds (b) αlion Flion (c) αsky Fsky (d) αhill Fhill (e) αsand Fsand (f) αwater Fwater Figure 5. Extracting 5 image layers using Algorithm 1a.