Vision, Modeling, and Visualization (2015) D. Bommes, T. Ritschel and T. Schultz (Eds.)

A Convex Clustering-based Regularizer for Image Segmentation

Benjamin Hell (TU Braunschweig), Marcus Magnor (TU Braunschweig)

Figure 1: From left to right: Input image, k-means clustering result, result from our clustering-based regularizer and our clustering-based regularizer incorporated in a sophisticated image segmentation framework.

Abstract

In this paper we present a novel way of combining the process of k-means clustering with image segmentation by introducing a convex regularizer for segmentation-based optimization problems. Instead of separating the clustering process from the core image segmentation algorithm, this regularizer allows the direct incorporation of clustering information in many segmentation algorithms. Besides introducing the model of the regularizer, we present a numerical algorithm to efficiently solve the occurring optimization problem while maintaining complete compatibility with any other gradient descent based optimization method. As a side-product, this algorithm also introduces a new way to solve the rather elaborate relaxed k-means clustering problem, which has been established as a convex alternative to the non-convex k-means problem.

Categories and Subject Descriptors (according to ACM CCS): I.4.6 [IMAGE PROCESSING AND COMPUTER VISION]: Segmentation—Relaxation

1. Introduction

Clustering can be seen as some kind of segmentation process. Many popular clustering algorithms, like k-means clustering [XJ10] or mean-shift [FS10], have been successfully used for image segmentation tasks. More sophisticated and highly specialized image segmentation algorithms, however, rely on much more information of the image than just the distance of color information with respect to a certain metric like the Euclidean distance. Nevertheless, this clustering information can be very useful for the segmentation process.
It is therefore common to use clustering algorithms like k-means to obtain initial solutions for iterative numerical algorithms or to pre-compute clusters, which serve as reference points throughout the whole segmentation process [HKB 15]. Unfortunately the k-means problem is non-convex, so the solution obtained by numerical algorithms quite frequently does not correspond to the global optimum of the k-means problem. To avoid this issue, convex clustering problems like the relaxed k-means problem [ABC 14] or maximum likelihood models [LG07] have been investigated with great success. Building upon the ideas of the convex relaxed k-means problem we introduce a novel convex regularizer, which follows a different philosophy than other approaches: Instead of separating the clustering process from the core segmentation procedure we combine both methods and fuse them into a single optimization problem. We therefore introduce a convex regularizer, which can be incorporated in many existing optimization pipelines. The optimization problem lying underneath this regularizer is essentially the relaxed k-means clustering problem, but mathematically modeled in a profoundly different way. As a side-product it also leads to a new method for solving the relaxed k-means clustering problem, which is a non-trivial task (cf. [ABC 14]).
A common practice to include clustering information in an optimization process is to pre-compute the clusters and use a deviation term from those pre-computed cluster centers. This keeps the cluster centers fixed throughout optimization, which might have a negative influence on the outcome of the optimization process, especially when a drastically different choice of cluster centers would lead to a lower overall objective function value while barely influencing the clustering energy term value. Our approach implicitly allows adjustment of the cluster centers and hence does not have this disadvantage.

Paper structure: Section 2 gives an overview of the well known k-means clustering optimization problem and a corresponding (not so well known) relaxed counterpart, which has the property of being convex. Section 3 bridges the gap between discrete and continuous considerations for the theory of this paper. Continuing this line of thought, Section 4 introduces a mathematical model for representing segmentations in a layer structure and introduces our novel clustering-based segmentation model afterwards. To solve the problem we propose a numeric gradient descent scheme, which is presented in Section 5. Section 6 shows results using our model and algorithm and Section 7 concludes the paper.

1.1. Contributions

This paper contains the following contributions:

We show a way of combining the process of image segmentation with clustering algorithms in form of a novel k-means clustering-based segmentation problem, which has a multitude of applications. For example, the presented approach can be used as a regularizer in several image segmentation problems.

We introduce a continuous version of the relaxed k-means clustering problem.

We introduce a numerical algorithm to solve our novel clustering-based regularization problem, which can be incorporated in many other optimization frameworks.
The introduced clustering-based segmentation model and the corresponding optimization algorithm yield a new way of solving the relaxed k-means clustering problem.

1.2. Notation and Abbreviations

The following notation will be used in this paper:

- A box marks essential results and definitions.
- I : Ω → R^m : feature vector (in the image case: intensity)
- ⟨.,.⟩ : inner product on a Hilbert space
- a := (1/m,...,1/m)^T ∈ R^m (for conversion to grayscale)
- ‖.‖_p : p-norm; ‖.‖ is the 2-norm
- R_i : i-th segmentation region or cluster
- |R_i| : volume of region R_i
- d : distance function
- Ω ⊂ R^n : basic domain
- v : segmentation variable representing the layer structure
- V : convex restriction set for the segmentation variable v
- k : number of segments or clusters

2. Discrete Problems

In this section we discuss discrete clustering algorithms that serve as an inspiration for this work, specifically the well known discrete k-means clustering problem as well as its relaxed form.

2.1. The Discrete k-means Clustering Problem

One of the most popular clustering algorithms is k-means clustering [KMN 02]. For a given set of N datapoints {x_i ∈ R^m | i = 1,...,N}, k-means clustering tries to find k centers c_j (j = 1,...,k), such that the sum over the corresponding distances d(x_i, c_j), where each x_i (i = 1,...,N) is assigned to one specific c_j (j = 1,...,k), is minimal. So k-means clustering is the process of solving the following minimization problem:

argmin_{c_j ∈ R^m, j=1,...,k}  Σ_{i=1}^N  min_{j ∈ {1,...,k}} d(x_i, c_j)    (1)

Note that d can be any sort of distance measure, but for classic k-means clustering d corresponds to the squared Euclidean distance. Due to the min function occurring in (1) the minimization problem is non-convex. Indeed most k-means clustering algorithms suffer from the fact that they do not necessarily converge to a global optimum, which might lead to undesired results (cf. Section 6.1.2).
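For illustration, the objective of Problem (1) and the classic (non-convex) Lloyd iteration for it can be sketched as follows. This is a minimal NumPy sketch, not code from the paper: the function names and the random initialization are our own, and d is taken as the squared Euclidean distance.

```python
import numpy as np

def kmeans_objective(X, C):
    """Objective of Problem (1): each point contributes the squared
    Euclidean distance to its nearest center."""
    d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # (N, k) distances
    return d2.min(axis=1).sum()

def lloyd(X, k, iters=100, seed=0):
    """Classic Lloyd iteration for k-means; due to the non-convexity of (1)
    it may converge to a local optimum depending on the initialization."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers
    for _ in range(iters):
        # assignment step: each point picks its nearest center
        labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2).argmin(axis=1)
        # update step: each center moves to the mean of its points
        C = np.array([X[labels == j].mean(axis=0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return C, labels
```

On well-separated data the iteration recovers the obvious clusters; Section 6.1.2 of the paper illustrates how badly it can fail on real images when the initialization is poor.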
For this and many other reasons, it is of high interest to find convex relaxations of Problem (1), which should yield approximate solutions to the initial problem while maintaining convexity of the objective function.

2.2. Relaxed Discrete k-means Clustering Problem

As it turns out, Problem (1) can be relaxed into a convex problem by introducing an indicator function z(i,j) (i,j = 1,...,N) which is intended to be greater than zero if the points x_i and x_j belong to the same cluster and zero otherwise. Imposing further constraints on the indicator function z(.,.) we obtain the following linear optimization problem:

argmin_z  Σ_{i,j=1}^N d(x_i, x_j) z(i,j)    (2)
s.t.  Σ_{j=1}^N z(i,j) = 1   ∀i ∈ {1,...,N}
      Σ_{i=1}^N z(i,i) = k   (k: number of clusters)
      z(i,j) ≤ z(i,i)   ∀i,j ∈ {1,...,N}
      z(i,j) ∈ [0,1]   ∀i,j ∈ {1,...,N}

For further details see [ABC 14]. The structure of Problem (2) will become more evident in Section 3, when we consider its continuous counterpart and an implicit representation of the optimal solution to this problem.
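Because Problem (2) is a linear program, it can be solved directly for small N. The following sketch is our own illustration, not the solver used in the paper: it assumes squared Euclidean distances and uses SciPy's linprog, with z flattened row-major.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_kmeans(X, k):
    """Solve the relaxed k-means LP (Problem (2)); z[i*N + j] = z(i,j)."""
    N = len(X)
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    c = D.ravel()
    # equalities: sum_j z(i,j) = 1 for each i, and sum_i z(i,i) = k
    A_eq = np.zeros((N + 1, N * N))
    for i in range(N):
        A_eq[i, i * N:(i + 1) * N] = 1.0
        A_eq[N, i * N + i] = 1.0
    b_eq = np.concatenate([np.ones(N), [float(k)]])
    # inequalities: z(i,j) - z(i,i) <= 0 for i != j
    rows = []
    for i in range(N):
        for j in range(N):
            if i != j:
                r = np.zeros(N * N)
                r[i * N + j] = 1.0
                r[i * N + i] = -1.0
                rows.append(r)
    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
    return res.x.reshape(N, N)
```

With N² variables this brute-force formulation only scales to toy instances, which is exactly why [ABC 14] treats solving the relaxation as a non-trivial task and why the paper proposes an alternative route in Section 5.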
3. Towards a Continuous Structure

Instead of clustering discrete points we want to shift our focus onto continuous problems. So far we have only considered discrete versions of clustering problems. Our ultimate goal is to use the relaxed k-means problem in a continuous setting to obtain a novel regularizer that works well in a continuous image segmentation setup. We consider a domain Ω ⊂ R^n, which will later on just be a rectangle representing an image (case n = 2). Nevertheless, all future considerations work for an arbitrary space dimension n. Each point x ∈ Ω is assigned a corresponding feature vector I(x) ∈ R^m. If this function I : Ω → R^m represents a color image, each point x is assigned a color vector, i.e. I : Ω ⊂ R^2 → R^3.

3.1. Continuous Extension of the Relaxed k-means Problem

We extend Problem (2) to the continuous case, using the notation mentioned at the beginning of this section, and obtain

argmin_z  ∫_Ω ∫_Ω d(I(x), I(y)) z(x,y) dx dy    (3)
s.t.  ∫_Ω z(x,y) dy = 1   ∀x ∈ Ω
      ∫_Ω z(x,x) dx = k   (k: number of clusters)
      z(x,y) ≤ z(x,x)   ∀x,y ∈ Ω
      z(x,y) ∈ [0,1]   ∀x,y ∈ Ω

It can be easily verified that the optimal solution ẑ(x,y) to this problem is

ẑ(x,y) = { 1/|R_i|   if x,y ∈ R_i  (i = 1,...,k)
         { 0         otherwise                       (4)

with R_i being the i-th cluster and |R_i| its corresponding n-dimensional volume. In the case of Ω ⊂ R^2 representing an image, |R_i| corresponds to the image area covered by cluster i (i = 1,...,k). Note that representation (4) does not give an explicit solution to the problem because it relies on knowledge about the clusters R_i (i = 1,...,k).

4. From Clustering to Segmentation

Clustering can be considered as some kind of segmentation, where each cluster of datapoints represents one segment. To include the clustering process in a continuous image segmentation framework additional work has to be done. So far we have considered discrete clustering problems in Section 2. As we want to maintain convexity in all our considerations, the relaxed Problem (2) seems to be a good starting point. In Section 3 we extended the relaxed discrete Problem (2) to a continuous domain Ω ⊂ R^n.
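On a discretized domain, the claimed optimum of the continuous relaxed problem — ẑ(x,y) = 1/|R_i| for x, y in the same region and 0 otherwise (Eq. (4)) — can be checked against the constraints numerically. A minimal sketch; the unit grid cells (so |R_i| becomes a pixel count) and the helper names are our assumptions:

```python
import numpy as np

def z_hat(labels):
    """Build ẑ from Eq. (4) on a discretized domain: pixels x, y carrying
    the same label get ẑ(x,y) = 1/|R_i|, where |R_i| is the pixel count of
    the shared region (unit cell area)."""
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    sizes = np.array([(labels == l).sum() for l in labels])  # |R_i| per pixel
    return same / sizes[:, None]

def satisfies_problem3(z, k, tol=1e-9):
    """Check the constraints of Problem (3) in discretized form: rows
    integrate to 1, the diagonal integrates to k, z(x,y) <= z(x,x),
    and z stays inside [0,1]."""
    return (np.allclose(z.sum(axis=1), 1.0, atol=tol)
            and abs(np.trace(z) - k) < tol
            and np.all(z <= np.diag(z)[:, None] + tol)
            and np.all((z >= -tol) & (z <= 1 + tol)))
```

This also makes the implicitness of (4) concrete: building ẑ requires the label map, i.e. the clusters themselves.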
Unfortunately, the representation of clusters via the function z(x,y) in the continuous relaxed k-means Problem (3) is not well suited for a segmentation environment. The reason for this is its implicit representation of individual clusters. So, the final and most elaborate step we need to take is to find a mathematical foundation that is compatible with image segmentation and is based on an explicit representation of the individual regions R_i involved in the segmentation process.

Figure 2: Layer Structure Example: This figure shows an example of 4 layers v_0,...,v_3 ∈ Ṽ. White color corresponds to the value 1, black to the value 0. The red regions are the individual segments R_i (i = 1,...,3) given by the layer structure and are characterized by ṽ_i = v_{i-1} − v_i = 1.

4.1. Layer Structure

To represent multiple regions mathematically we use a concatenation of binary functions, each one splitting the image area into one more segment. This is a convenient way of handling segmentation information and has a variety of benefits. The representation follows the ideas presented in [CPKC11] and [HKB 15], which are based on [PCBC09]. The set of binary variables is defined by:

Ṽ := { v = (v_0,...,v_k) : Ω → {0,1}^{k+1} | 1 ≥ v_1(x) ≥ ... ≥ v_{k-1}(x) ≥ 0, ∀x ∈ Ω }    (5)

with v_0(.) ≡ 1 and v_k(.) ≡ 0. Each region R_i is represented by the difference ṽ_i := v_{i-1} − v_i (i = 1,...,k):

R_i = { x ∈ Ω : ṽ_i(x) := v_{i-1}(x) − v_i(x) = 1 }   (i = 1,...,k)    (6)

Figure 2 shows an example of a specific segmentation into different regions via some variable v ∈ Ṽ. As we are interested in a convex model we have to allow the variables v_i (i = 1,...,k) to take values in the range [0,1] instead of just assuming binary values. This yields the following convex restriction set:

V := { v = (v_0,...,v_k) : Ω → [0,1]^{k+1} | 1 ≥ v_1(x) ≥ ... ≥ v_{k-1}(x) ≥ 0, ∀x ∈ Ω }    (7)
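The binary layer encoding (5)-(6) is easy to exercise on discrete label maps. A minimal sketch; the helper names are ours, and labels are assumed to take values in {1,...,k}:

```python
import numpy as np

def labels_to_layers(labels, k):
    """Layer structure of Section 4.1: set v_i(x) = 1 iff label(x) > i.
    Then v_0 ≡ 1, v_k ≡ 0 and the layers decrease monotonically, so v
    lies in the binary set Ṽ of Eq. (5)."""
    labels = np.asarray(labels)
    return np.stack([(labels > i).astype(float) for i in range(k + 1)])

def layers_to_regions(v):
    """Region indicators ṽ_i = v_{i-1} - v_i of Eq. (6)."""
    return v[:-1] - v[1:]
```

Note how the differences ṽ_i form a partition of unity, which is exactly the identity Σ_i ṽ_i = v_0 − v_k = 1 used in the next section.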
Note that due to the definition of V we have

Σ_{i=1}^k ṽ_i = Σ_{i=1}^k (v_{i-1} − v_i) = v_0 − v_k = 1

4.2. The Novel Clustering-based Regularizer

Combining the layer structure v_i (i = 1,...,k) and the basic ideas behind the relaxed k-means Problem (3), we obtain the following optimization problem:

Clustering-based Segmentation Problem

argmin_{v ∈ V}  ∫_Ω ∫_Ω d(I(x), I(y)) w(x) ⟨ṽ(x), ṽ(y)⟩ dx dy    (8)
s.t.  ∫_Ω w(x) ṽ_i(x) dx = 1   (i = 1,...,k)    (c1)
      ∫_Ω w(x) ⟨a, I(x)⟩ ṽ_i(x) dx ≤ ∫_Ω w(x) ⟨a, I(x)⟩ ṽ_{i+1}(x) dx    (c2)
      w(x) ∈ [0,1]   ∀x ∈ Ω    (c3)

with ⟨ṽ(x), ṽ(y)⟩ = Σ_{i=1}^k ṽ_i(x) ṽ_i(y) and a := (1/m,...,1/m)^T ∈ R^m, such that ⟨a, I(x)⟩ is a "grayscale" value. As mentioned before, d can be any kind of distance measure. For our purpose we only consider functions of the form d(I(x), I(y)) = ψ(‖I(x) − I(y)‖), with ψ being strictly monotonically increasing on R_+ and hence d(.,.) being minimal for I(x) = I(y) (which is a very important property for our model). Reasonable choices for d are the squared Euclidean distance or the negative Parzen density (cf. Section 6).

If we compare this problem with Problem (3), we see that z(x,y) corresponds to w(x)⟨ṽ(x), ṽ(y)⟩ and that the constraints have taken a different form. The constraints (c1) and (c3) are motivated by Theorem 1. The only reason for the existence of (c2) is that the layer structure introduced in Section 4.1 needs a specific order of the ṽ_i (i = 1,...,k) to guarantee a unique optimum v̂ of Problem (8). Without (c2), regions R_i and R_j (i,j ∈ {1,...,k}) could be interchanged without influencing the objective function value and the result would still fulfill all the other constraints. In general, there are many ways to model a clustering-based segmentation problem with similar properties to Problem (8), most of all by changing the definition of z(x,y).
Our particular modeling of z(x,y) has been inspired by the following considerations: At the optimal point of Problem (8) we want to achieve

ṽ_i(x) = { 1   if x ∈ R_i    (i = 1,...,k)
         { 0   otherwise

which leads to

⟨ṽ(x), ṽ(y)⟩ = { 1   if x,y ∈ R_i  (i ∈ {1,...,k})    (9)
               { 0   otherwise

As we do know the general form of the optimal solution to the relaxed k-means problem (cf. (4)), we introduce an additional weighting function w, which is intended to take the value w(x) = 1/|R_i| for x ∈ R_i at the optimal solution ẑ(x,y). In addition to that, our definition has the following important property, which states that Problem (8) is in accordance with the relaxed k-means Problem (3).

Theorem 1. Let z(x,y) := w(x)⟨ṽ(x), ṽ(y)⟩. Then each optimal solution (v̂, ŵ) to Problem (8) fulfills

∫_Ω z(x,x) dx = ∫_Ω w(x) dx = k
z(x,y) ≤ z(x,x)   ∀x,y ∈ Ω
z(x,y) ∈ [0,1]   ∀x,y ∈ Ω

Proof. It can be shown that each optimal solution v̂ will exhibit the property ṽ_i ∈ {0,1} (i = 1,...,k), hence v̂_i ∈ {0,1} (i = 1,...,k). Due to v ∈ V we also have Σ_{i=1}^k ṽ_i = 1. So the first assertion follows from

∫_Ω ẑ(x,x) dx = ∫_Ω w(x) dx = ∫_Ω w(x) Σ_{i=1}^k ṽ_i dx  =(c1)=  k    (10)

The second one is fulfilled if ⟨ṽ(x), ṽ(y)⟩ ≤ ‖ṽ(x)‖². This is in general not the case, but due to v̂_i ∈ {0,1} (i = 1,...,k) it is fulfilled for optimal solutions, which is all we need. The third assertion follows from

z(x,y) = w(x)⟨ṽ(x), ṽ(y)⟩ ≥ 0    (11)
z(x,y) = w(x)⟨ṽ(x), ṽ(y)⟩  ≤(c3)≤  ⟨ṽ(x), ṽ(y)⟩ ≤ ‖ṽ(x)‖ ‖ṽ(y)‖ ≤ ‖ṽ(x)‖_1 ‖ṽ(y)‖_1 = 1

where the fact that 0 ≤ ṽ_i(x) ≤ 1 (i = 1,...,k) due to v ∈ V plays a major role.

Essentially Theorem 1 expresses that each optimal solution to our clustering-based regularization Problem (8) (which of course fulfills constraints (c1), (c2), (c3)) automatically fulfills all the constraints imposed by the continuous relaxed k-means clustering Problem (3). Note that Problem (8) is not convex in the variable v, but it is convex in z. Our algorithm for solving the problem, which is presented in Section 5, is essentially looking for an optimal solution ẑ by solving the problem in v and w.
5. Solving the Problem

To solve Problem (8), representing the model for our novel regularizer, we propose a projected gradient descent method, which alternates between taking a gradient descent step in the primal variable v, projecting the result onto the restriction
set V and adjusting the weighting function w in order to fulfill the constraints.

5.1. Finding an Initial Solution

The first step is to find an initial solution v_0 ∈ V. This can be done in a number of ways, as long as the initial solution lies in the set V and fulfills the ordering constraint (c2). We propose the following simple algorithm (Algorithm 1), which builds its clusters based on the ordering of grayscale values.

Algorithm 1 Computing the initial solution v_0
1: procedure COMPUTEINITIALSOLUTION(image)
2:   choose c_i (i = 1,...,k) with 0 ≤ c_1 ≤ ... ≤ c_k ≤ 1
3:   set region R_i := { x ∈ Ω : |⟨a, I(x)⟩ − c_i| ≤ |⟨a, I(x)⟩ − c_j| ∀j = 1,...,k } (i = 1,...,k)  ▷ in order to fulfill (c2), cf. Section 4.2
4:   set ṽ_i(x) := 1 if x ∈ R_i, 0 otherwise  (i = 1,...,k)
5:   set v_i := 1 − Σ_{j=1}^i ṽ_j  (i = 0,...,k)  ▷ so v ∈ V according to def. (7)
6:   return v = (v_0,...,v_k)
7: end procedure

5.2. Computing the Gradient

Let us first define f(v,w) as the objective function of Problem (8), i.e.

f(v,w) := ∫_Ω ∫_Ω d(I(x), I(y)) w(x) ⟨ṽ(x), ṽ(y)⟩ dx dy    (12)

The gradient vector can be obtained by differentiating with respect to v_i (i = 0,...,k). Due to the structure of the objective function f(v,w) we only need to know the following differential

∂/∂v_i ⟨ṽ(x), ṽ(y)⟩    (13)

which can be obtained by taking the directional derivative in the direction h_i := (0,...,0,h_i,0,...,0)^T (i = 1,...,k).
This directional derivative follows from the fact that only ṽ_i and ṽ_{i+1} depend on v_i, hence

∂/∂v_i ⟨ṽ(x), ṽ(y)⟩ (h_i)
= d/dt [ (v_{i−1}(x) − (v_i + t h_i)(x)) (v_{i−1}(y) − (v_i + t h_i)(y))
       + ((v_i + t h_i)(x) − v_{i+1}(x)) ((v_i + t h_i)(y) − v_{i+1}(y)) ] |_{t=0}
= (ṽ_{i+1}(y) − ṽ_i(y)) h_i(x) + (ṽ_{i+1}(x) − ṽ_i(x)) h_i(y)    (14)

We then obtain the gradient of the objective function f with respect to v via considering

d/dt f(v + t h_i, w) |_{t=0}
=(14)=  ∫_Ω ∫_Ω d(I(x), I(y)) w(x) [ (ṽ_{i+1}(y) − ṽ_i(y)) h_i(x) + (ṽ_{i+1}(x) − ṽ_i(x)) h_i(y) ] dx dy
=  ∫_Ω [ ∫_Ω d(I(x), I(y)) (w(x) + w(y)) (ṽ_{i+1}(y) − ṽ_i(y)) dy ] h_i(x) dx    (15)

where the last equality follows from Fubini's theorem and the symmetry of the metric d(.,.). So overall we have

∂f/∂v_i (v,w)(x) = ∫_Ω d(I(x), I(y)) (w(x) + w(y)) (ṽ_{i+1}(y) − ṽ_i(y)) dy    (16)

with D_v f(v,w)(h) = ⟨D_v f(v,w), h⟩, where ⟨.,.⟩ denotes the scalar product in L²(Ω) and D_v f(v,w) = ( ∂f/∂v_0 (v,w), ..., ∂f/∂v_k (v,w) ). So computing the gradient of the objective function f is done via Algorithm 2.

Algorithm 2 Computing the gradient of the objective function f
1: procedure COMPUTEGRADIENT(v, w)
2:   set ∂f/∂v_0 (v,w) = ∂f/∂v_k (v,w) = 0  ▷ by def. of the set V
3:   for i = 1,...,k−1 do
4:     set ∂f/∂v_i (v,w) according to (16) for all x ∈ Ω
5:   end for
6:   return D_v f(v,w) = ( ∂f/∂v_0 (v,w), ..., ∂f/∂v_k (v,w) )
7: end procedure

5.3. Projection Scheme

One of the major constraints in Problem (8) that might easily be overlooked is that v ∈ V, with the restriction set V being defined in (7). Given a function v, projecting onto V is the same as projecting each vector v(x) (x ∈ Ω) onto the set

V(x) := { y = (1, y_1,...,y_{k−1}, 0) ∈ R^{k+1} | 1 ≥ y_1 ≥ ... ≥ y_{k−1} ≥ 0 }    (17)

Obtaining the projection ỹ = (1, ỹ_1,...,ỹ_{k−1}, 0) of a vector y = (1, y_1,...,y_{k−1}, 0) ∈ R^{k+1} onto V(x) is no trivial task. An efficient algorithm for doing that has been introduced in [CCP12, App. B] and is represented by Algorithm 3.
Algorithm 3 Projection onto the restriction set V
1: procedure PROJECTONTOV(y)
2:   set ỹ_i := y_i and define k−1 clusters Z_i := {i} (i = 1,...,k−1)
3:   identify an index i ∈ {1,...,k−2} with ỹ_i < ỹ_{i+1}; if there is no such index jump to 5
4:   merge the clusters Z_i := Z_i ∪ Z_{i+1}, set ỹ_j := ( Σ_{l ∈ Z_i} y_l ) / |Z_i| for all j ∈ Z_i and go back to 3
5:   set ỹ_i := max(min(ỹ_i, 1), 0)  ▷ clipping to [0,1]
6:   return ỹ
7: end procedure

5.4. Update Scheme for the Weighting Function

The weighting function w has to be chosen such that the constraints of Problem (8) are fulfilled. So the challenge is to find w, which at least approximately satisfies the restrictions (c1), (c2), (c3) of Problem (8) given a vector v = (v_0,...,v_k) ∈ V. As we do know how the optimal solution ẑ(x,y) = ŵ(x)⟨ṽ(x), ṽ(y)⟩ looks, we intend to set w(x) := 1/|R_i| for x ∈ R_i (i = 1,...,k), where R_i is given by the current state of v. This corresponds to setting:

w(x) := Σ_{i=1}^k  ṽ_i(x) / max( ∫_Ω ṽ_i(y) dy, ε )    (18)

for some small value ε > 0, with the max expression only compensating for the case ∫_Ω ṽ_i(y) dy = 0, which corresponds to ṽ_i ≡ 0. This formulation fulfills the constraint (c1) in the case of binary ṽ_i ∈ {0,1} (i = 1,...,k) and approximates the equality in the general case of ṽ_i ∈ [0,1] (i = 1,...,k). Besides approximating (c1) very well, w defined by (18) has the property

∫_Ω w(x) dx = Σ_{i=1}^k  ∫_Ω ṽ_i(x) dx / ∫_Ω ṽ_i(y) dy = k    (19)

which is another indicator of (18) being a reasonable choice for updating w. So we arrive at Algorithm 4.

Algorithm 4 Updating the weighting function w
1: procedure UPDATEWEIGHTS(v)
2:   set w(x) according to (18) for all x ∈ Ω
3:   return w
4: end procedure

Note that another reasonable choice for updating w would be to use a gradient descent scheme for solving a least squares problem enforcing (c1). In practice this turned out to work fine, but has no benefits compared to the simpler approach represented by Algorithm 4.
5.5. Overall Numeric Algorithm

Putting all the previously mentioned steps together we arrive at the final version of our iterative algorithm:

- Taking an initial solution v_0 for the optimization variable v, which corresponds to the layer structure representing the current segmentation. This can be done according to Section 5.1.
- Updating the weighting function w according to Section 5.4.
- Taking the gradient from Section 5.2 to update the optimization variable v.
- Projecting the obtained solution onto the restriction set V according to Section 5.3.

Algorithm 5 Solving the Clustering-based Segmentation Problem
1: v_0 = COMPUTEINITIALSOLUTION(image)  ▷ fulfill (c2)
2: procedure CBSEGMENTATION(image, v_0)  ▷ image: feature vectors I(x), v_0: initial segmentation
3:   while ‖v_{l+1} − v_l‖ > ε do
4:     w = UPDATEWEIGHTS(v_l)  ▷ Section 5.4
5:     v_{l+1} = v_l − τ · COMPUTEGRADIENT(v_l, w)  ▷ gradient descent step, Section 5.2
6:     v_{l+1} = PROJECTONTOV(v_{l+1})  ▷ projection onto the restriction set V, Section 5.3
7:     v_l = v_{l+1}
8:   end while
9:   return v_l
10: end procedure

Note that this algorithm can be combined with any optimization framework that uses some sort of gradient descent method and a layer structure similar to the one presented in Section 4.1.

6. Results

In this section we evaluate our regularizer in various test scenarios. We first show that our optimization approach from Section 5 works as a standalone algorithm, which means that it successfully solves the relaxed k-means Problem (3) and Problem (8). We then show a comparison to standard clustering algorithms and finally show results of our approach incorporated in an elaborate image segmentation framework based on [HKB 15]. We present all segmentation (clustering) results as grayscale images, each gray-level corresponding to one distinct segment. In Section 4 we briefly talked about possible choices for the distance function d. If not stated explicitly otherwise, we choose the negative Parzen density (inspired by mean-shift like clustering as in [FS10]), i.e.
d(I(x), I(y)) := − 1/(2πα²) · exp( −‖I(x) − I(y)‖² / (2α²) )    (20)
where we set α = 1/10. This rather extreme choice yields faster convergence and more plausible results than using the squared Euclidean distance.

6.1. Evaluating the Regularizer

We apply our Algorithm 5 to different input images in order to compute clusters with respect to their RGB color information. All the results shown in Figure 3(c,d) are solutions to our clustering-based segmentation Problem (8). It can be observed that our algorithm yields a convenient cluster structure with results comparable to k-means clustering.

6.1.1. Choosing Different Distance Functions

We compare the results of our algorithm using different distance functions d(.,.): the negative Parzen density (20) and the squared Euclidean distance. Both distance functions fulfill our basic requirement stated in Section 4.2, which is that d(I(x), I(y)) = ψ(‖I(x) − I(y)‖), with some function ψ being strictly monotonically increasing on R_+. The results in Figure 3(c,d) show that the distance function has some influence on the clustering process. While computing about a hundred test cases we discovered that most of the time the final clustering result differs only slightly for different distance functions, but the rate of convergence of our iterative Algorithm 5 might be influenced considerably.

6.1.2. Comparison to k-means Clustering

Our clustering-based segmentation Algorithm 5 solves a convex problem, so we are guaranteed a globally optimal solution. Figure 3(f,g) shows a side by side comparison of our results with results from traditional k-means clustering computed in Mathematica [Res15]. As mentioned previously, k-means clustering algorithms might not yield a globally optimal solution. To investigate this behavior we ran Mathematica's k-means implementation 200 times for each image, each time using a different random initial solution. For each of those images the algorithm found a globally optimal solution in approximately 50% of all test cases.
The rest of the results belonged to undesirable local optima, which deviate strongly from the desired global optima (Figure 3(f)).

6.2. Incorporation into a Segmentation Framework

The intention of this section is to briefly show how our regularizer works in a sophisticated image segmentation environment. We chose the framework from [HKB 15], which is built upon solving the following optimization problem:

argmin_{v ∈ V}  ∫_{Ω\B} ‖∇I(x)‖ dx   (intensity variation on segments)
  + λ_1 ∫_B ⟨∇I(x), ν_B(x)⟩² dH^{n−1}   (directional deviation)
  + λ_2 ∫_B 1/(1 + α‖∇I(x)‖) dH^{n−1}   (weighted boundary length)
  + λ_3 Σ_{i=1}^k ∫_Ω ṽ_i(x) ‖I(x) − Ī_i‖ dx   (image data)    (21)

with B being the boundary of the image segments, ν_B the corresponding inner normal vector, H^{n−1} being the (n−1)-dimensional Hausdorff measure and Ī_i (i = 1,...,k) corresponding to the mean color values on the different image segments R_i. In order to incorporate our regularizer into this problem, we replace the "image data" term with our clustering-based segmentation Problem (8). The corresponding numerical algorithm then has to be adjusted in the gradient descent step according to Algorithm 5, but this is all that needs to be done. Figure 3(e) shows segmentation results using this approach. It can clearly be observed that the additional information of weighted boundary length (of the individual segments) and boundary normal deviation adds considerably to the quality of the segmentation.

Figure 4: Evaluation on BSDS500 Images: The input image from the BSDS500 dataset [AMFM11] (left), the output of the image segmentation framework presented in [HKB 15] with pre-computed k-means clustering (middle) and the output using the same approach with the incorporated regularizer (right). Below the output images we denote the corresponding segmentation covering scores (0.808499, 0.776736, 0.758401, 0.757332).
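Putting Sections 5.1-5.4 together, the alternating scheme of Algorithm 5 — and thereby the gradient step that takes over when the regularizer replaces the image data term — can be sketched on a flattened pixel grid. This is our own discretization, not the paper's implementation: the step size τ and all helper names are assumptions; the weight update follows (18), the gradient step (16), and the per-pixel projection uses the pooling idea of Algorithm 3.

```python
import numpy as np

def update_weights(tilde_v, eps=1e-8):
    """Eq. (18): w(x) = sum_i ṽ_i(x) / max(∫ ṽ_i, ε)."""
    areas = np.maximum(tilde_v.sum(axis=1, keepdims=True), eps)
    return (tilde_v / areas).sum(axis=0)

def pav_decreasing(y):
    """Nearest non-increasing sequence via pooling adjacent violators
    (the averaging step of Algorithm 3)."""
    y = np.asarray(y, dtype=float)
    if y.size == 0:
        return y
    vals, cnts = [], []
    for x in y:
        vals.append(float(x)); cnts.append(1)
        while len(vals) > 1 and vals[-2] < vals[-1]:
            v2, c2 = vals.pop(), cnts.pop()
            v1, c1 = vals.pop(), cnts.pop()
            vals.append((v1 * c1 + v2 * c2) / (c1 + c2))
            cnts.append(c1 + c2)
    return np.concatenate([np.full(c, v) for v, c in zip(vals, cnts)])

def cb_segmentation(D, v0, tau=0.5, tol=1e-6, max_iter=500):
    """Algorithm 5 (sketch): alternate weight update, gradient step and
    projection.  D[x, y] = d(I(x), I(y)) over flattened pixels; v has
    shape (k+1, P) with v[0] ≡ 1 and v[k] ≡ 0."""
    v = v0.copy()
    k = v.shape[0] - 1
    for _ in range(max_iter):
        tilde_v = v[:-1] - v[1:]
        w = update_weights(tilde_v)
        v_new = v.copy()
        for i in range(1, k):                       # inner layers only, Eq. (16)
            diff = tilde_v[i] - tilde_v[i - 1]      # ṽ_{i+1} - ṽ_i
            v_new[i] -= tau * (w * (D @ diff) + D @ (w * diff))
        for p in range(v.shape[1]):                 # project each pixel onto V
            v_new[1:k, p] = np.clip(pav_decreasing(v_new[1:k, p]), 0.0, 1.0)
        if np.abs(v_new - v).max() < tol:
            return v_new
        v = v_new
    return v
```

On a toy four-pixel "image" with two intensity groups this loop converges to the expected two-segment partition within a few iterations; the dense P × P distance matrix is only meant for such toy sizes, which is also why the paper points to GPU parallelization of the gradient step as future work.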
To further investigate the output when using the proposed incorporated regularizer scheme, we apply the algorithm to a few images of the BSDS500 dataset, which are also presented in [HKB 15], and compute their segmentation covering score [AMFM11]. The results can be seen in Figure 4. Compared to the approach of solving Problem (21) with pre-computed k-means clustering (for computing the mean color values Ī_i in the image data term), we notice only slight differences in the final output and segmentation covering scores. Subjectively we find the results computed with the incorporated regularizer slightly more appealing and we notice a faster rate of convergence when using the proposed regularizer.

7. Conclusion

In this paper we have presented a general framework for incorporating k-means like clustering information into an image segmentation process, while maintaining convexity of the problem. We introduced a novel model for a clustering-based regularizer and a corresponding numerical algorithm to solve the optimization problem lying underneath the whole process. This algorithm is flexible enough to work
with any direction-of-descent based segmentation framework. Our results show the applicability of our approach in various scenarios including a sophisticated image segmentation process. In addition to that, we have shown that the proposed method delivers plausible clustering results on its own and might be used as a standalone clustering algorithm, which guarantees convergence towards its global optimum. Future work includes further investigation of the incorporation into an image segmentation framework and a fast implementation of the algorithm for GPU computing (e.g. using CUDA). The presented algorithm is well suited for parallel computing and may be accelerated by exploiting certain symmetry properties in the gradient computation step.

Figure 3: Side by Side Comparison of the Results: From left to right: This figure shows the input image (a), the initial solution based on closest grayscale distance (cf. Section 5.1) (b), the output of our clustering-based segmentation Algorithm 5 using the negative Parzen density (20) (c), our clustering-based segmentation output with the squared Euclidean distance (d), the results using a full-fledged image segmentation framework [HKB 15] (e), local optima of standard k-means clustering (f) and global optima of standard k-means clustering (g). The first picture, created by Farbman et al. [FFLS08], is separated into 4 segments, the second one into 3 segments and the third one into 4 segments.

Acknowledgment

The research leading to these results has received funding from the European Union's Seventh Framework Programme FP7/2007-2013 under grant agreement no. 256941, Reality CG.

References

[ABC 14] AWASTHI P., BANDEIRA A. S., CHARIKAR M., KRISHNASWAMY R., VILLAR S., WARD R.: Relax, no need to round: integrality of clustering formulations, Aug. 2014.
[AMFM11] ARBELAEZ P., MAIRE M., FOWLKES C., MALIK J.: Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 33, 5 (May 2011), 898-916.

[CCP12] CHAMBOLLE A., CREMERS D., POCK T.: A convex approach to minimal partitions. SIAM Journal on Imaging Sciences 5, 4 (2012), 1113-1158.

[CPKC11] CREMERS D., POCK T., KOLEV K., CHAMBOLLE A.: Convex relaxation techniques for segmentation, stereo and multiview reconstruction. In Markov Random Fields for Vision and Image Processing. MIT Press, 2011.

[FFLS08] FARBMAN Z., FATTAL R., LISCHINSKI D., SZELISKI R.: Edge-preserving decompositions for multi-scale tone and detail manipulation. In ACM SIGGRAPH 2008 Papers (New York, NY, USA, 2008), SIGGRAPH '08, ACM, pp. 67:1-67:10.

[FS10] FULKERSON B., SOATTO S.: Really quick shift: Image segmentation on a GPU. In Trends and Topics in Computer Vision - ECCV 2010 Workshops, Heraklion, Crete, Greece, September 10-11, 2010, Revised Selected Papers, Part II (2010), pp. 350-358.

[HKB 15] HELL B., KASSUBECK M., BAUSZAT P., EISEMANN M., MAGNOR M.: An approach toward fast gradient-based image segmentation. Image Processing, IEEE Transactions on 24, 9 (Sept 2015), 2633-2645.

[KMN 02] KANUNGO T., MOUNT D. M., NETANYAHU N. S., PIATKO C. D., SILVERMAN R., WU A. Y.: An efficient k-means clustering algorithm: Analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 24, 7 (July 2002), 881-892.

[LG07] LASHKARI D., GOLLAND P.: Convex clustering with exemplar-based models. In Advances in Neural Information Processing Systems 20, Proceedings of the Twenty-First Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 3-6, 2007 (2007), pp. 825-832.

[PCBC09] POCK T., CHAMBOLLE A., BISCHOF H., CREMERS D.: A convex relaxation approach for computing minimal partitions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2009).

[Res15] RESEARCH W.: Mathematica 10.1, 2015.

[XJ10] XIE J., JIANG S.: A simple and fast algorithm for global k-means clustering.
Education Technology and Computer Science, International Workshop on 2 (2010), 36-40.