Conformal and Low-Rank Sparse Representation for Image Restoration

Jianwei Li, Xiaowu Chen, Dongqing Zou, Bo Gao, Wei Teng
State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing, China
(Corresponding author email: chen@buaa.edu.cn)

Abstract

Obtaining an appropriate dictionary is the key point when sparse representation is applied to computer vision or image processing problems such as image restoration. It is expected that preserving the data structure during sparse coding and dictionary learning can enhance the recovery performance. However, many existing dictionary learning methods handle training samples individually and miss the relationships between samples, which results in dictionaries with redundant atoms but poor representation ability. In this paper, we propose a novel sparse representation approach called conformal and low-rank sparse representation (CLRSR) for image restoration problems. To achieve a more compact and representative dictionary, the conformal property is introduced by preserving the angles of the local geometry formed by neighboring samples in the feature space. Furthermore, imposing a low-rank constraint on the coefficient matrix leads to more faithful subspaces and captures the global structure of the data. We apply our CLRSR model to several image restoration tasks to demonstrate its effectiveness.

1. Introduction

Sparse representation has been proven to be a promising model for image restoration, such as image super-resolution [33] and image denoising [9]. The sparse model is an emerging and powerful method to describe signals based on the sparsity and redundancy of their representations [9, 21]. The model suggests that there exists a dictionary which can reconstruct the signals: each signal can be represented by a sparse linear combination of atoms in the dictionary. In earlier sparsity-based image restoration methods, dictionaries were analytically predefined (e.g., wavelets [22]). Later work [9, 21] showed that a dictionary learned from training samples performs better in image restoration. Therefore, the quality of the learned dictionary becomes the key point for the image restoration problem.

Many approaches have been devoted to learning appropriate dictionaries. Some early dictionary learning methods, such as the K-SVD algorithm [1], focus on the reconstruction power of the dictionary and have been applied to image restoration [9]. But these methods depend on a large training dataset. Besides, the dictionary size is fixed, which may result in dictionary redundancy and make the signal decompositions difficult. Other dictionary learning methods add discrimination constraints to make the dictionary smaller [36, 34]. However, these methods only train samples individually or consider the discrimination between classes, while ignoring the local relationships between samples or the structure of the data manifold, which may result in redundant dictionaries with poor representation ability. All these problems may lead to bad restoration results due to a bad dictionary.

Some sparsity-based image restoration methods imply that preserving the data structure during sparse coding by building relationships between samples can enhance the recovery performance [19]. The relationships between samples can be analyzed from two aspects. From the local perspective, nearby samples have similar features and form a local subspace, reflecting the affinities between each other. From the global perspective, samples with similar features are linearly related, and thus they lie on a low-dimensional latent space.
Therefore, the key problem for dictionary-based image restoration is how to embed these two relationships into dictionary learning and sparse representation. In this paper, we propose a conformal and low-rank sparse representation (CLRSR) approach for image restoration. The work of Wright et al. [31] implies that images (or their features) lying on a high-dimensional manifold exhibit degenerate structure and form their own inner structure. This inner structure is indeed a kind of local relationship. Some researches [16, 10] suggest that the inner structures of data can be better modeled through Conformal Eigenmaps [27], which projects data from a high-dimensional space to a low-dimensional manifold while preserving the angles formed by neighboring samples.

These angle relationships computed by Conformal Eigenmaps model the inner structures of data, which are called the conformal property. By considering the conformal property in dictionary learning, the local geometric relationships among the samples can be preserved. As a result, all the samples can find their corresponding embeddings in the dictionary space with similar local structures, yielding a representative dictionary. Moreover, locally neighboring samples are able to use the same dictionary atoms for their decompositions. Thus the redundancy of the dictionary can be eliminated by deleting the unnecessary atoms, resulting in a compact dictionary.

To involve the global structure of data, we enforce the coefficient matrix to be low-rank. The samples extracted from an image or video are relevant to each other, so these samples lie on low-rank subspaces. With a representative dictionary, samples having similar features will have similar sparse representations, resulting in similar coding coefficients. Therefore, the coefficient matrix A is expected to be low-rank, too. Imposing a low-rank constraint on the coefficient matrix can lead to more faithful subspaces and capture the global structure of the data [17]. Finally, introducing the global structure complements the local structure captured by the conformal property and contributes to dictionary learning.

The contributions of this paper include: 1) a novel conformal and low-rank sparse representation approach is proposed that involves the local and global structure information of the data to obtain a compact and representative dictionary; 2) our approach can be applied to image restoration tasks including image super-resolution and image denoising, and extended to image editing problems such as image/video recoloring. Various experiments and comparisons demonstrate that our approach can learn a better dictionary with competitive performance compared to previous methods.

2. Related Work

Sparse representation and dictionary learning have developed rapidly and received much attention in recent years. Here we briefly review the techniques most related to our work.

Many classic sparse representation methods have been proposed to learn a good dictionary. The K-SVD algorithm [1] is a representative dictionary learning method applied to image processing. It focuses on efficiently learning an over-complete dictionary which can maximize the reconstruction power. In [14], Lee et al. sped up the sparse coding algorithm by solving the optimization function in a new way. Mairal et al. [20] proposed an online dictionary learning method based on stochastic approximations to handle large datasets efficiently. These methods mostly focus on the original sparse coding problem and improve the efficiency of the learning algorithm.

Learning a compact and representative dictionary is very important for sparse representation. Qiu et al. [23] adopted the rule of maximization of mutual information to learn a compact and discriminative dictionary for action attributes. Siyahjani et al. [28] learned a context-aware dictionary to improve the recognition and localization of multiple objects in images. Other methods improve dictionary performance by separating the particularity and the commonality [13] or by adding discrimination constraints [36, 34]. Most of these methods consider the discrimination between classes, while seldom involving relationships between samples or the local information in the data space.

Due to the power of the sparse representation technique and the good effectiveness of the l1-norm minimization algorithm, sparse representation has been applied to many computer vision and image processing tasks such as image restoration [21, 8]. Elad et al. [9] used the K-SVD algorithm to learn a dictionary to reconstruct noisy images.
Mairal et al. [21] extended the sparsity-based restoration model to color images. Yang et al. [33] proposed a sparse representation based image super-resolution method by jointly learning two dictionaries for the low- and high-resolution images. Semi-coupled dictionary learning [30] and double sparsity regularized manifold learning [19] have also been proposed to achieve good performance in image super-resolution. Chen et al. [6] introduced the sparse representation technique to the edit propagation task. Sparse representation and dictionary learning can also be applied to face recognition [32] and classification [35].

3. Conformal and Low-Rank Constraints on Sparse Representation

Sha and Saul [27] proposed a spectral method for nonlinear dimensionality reduction through conformal transformations. Specifically, a d-dimensional embedding is first computed from the m bottom eigenvectors by LLE with m > d; then a conformal mapping is applied to these embeddings by optimizing the degree of neighborhood similarity. The solution of the optimization yields the underlying manifold dimensions. Essentially, the conformal transformation constrains the local angles formed by neighboring samples to remain unchanged during dimensionality reduction. With this constraint, the method can reveal a more faithful subspace that is nonlinearly embedded in the high-dimensional space.

Sparse representation can be regarded as a member of the manifold learning family. In fact, sparse representation builds dictionaries of atoms or subspaces that provide efficient representations of classes of signals [29]. Therefore, the conformal constraint for manifold learning holds for sparse representation too. Moreover, in sparse representation, training data lying on a high-dimensional manifold exhibits degenerate structure and forms its own inner structure [31].

Figure 2. The reconstruction results of the image Barbara. (a) is the source image; (b) and (c) are the reconstruction results without and with the conformal and low-rank constraints, respectively.

Figure 1. An illustration of the principle of conformal and low-rank sparse representation. (a) is the input data set; (b) shows the coefficient distribution in the dictionary space without the conformal and low-rank constraints, and (c) shows the corresponding relationship between dictionary and coefficients, while (e) and (f) are the coefficient distribution and relationship with the conformal and low-rank constraints, respectively. (d) is the input data with the computed conformal structure.

This inner structure can be treated as one of the local geometric structures which can be described by conformal constraints. Thus preserving the conformal constraints in sparse representation brings more local information than traditional sparse coding. For example, as shown in Fig. 1(d, e), the local structure (x_i, x_j, x_k) in the input dataset is preserved for the corresponding coding coefficients (α_i, α_j, α_k).

The low-rank property has achieved excellent results in matrix completion [5, 11] and has been applied to many image processing problems [18, 7]. The basic observation of these works is that similar patches extracted from images lie on a very low-dimensional subspace. The low-rank subspace constraint can capture the underlying structure of the patches. We enforce a low-rank constraint on the coefficient matrix to acquire the global structure of the coding coefficients and enable more robust dictionary learning.

The conformal and low-rank constraints can make the learned dictionary more compact. Let M be the high-dimensional data manifold and N be the space of coding coefficients. The sparse coding algorithm is a mapping process from M to N, as shown in Fig. 1. The samples (x_i, x_j, x_k) in M are mapped to (α_i, α_j, α_k) in N. In the space N, the dictionary atoms d_i can be seen as the (orthogonal) axes, and the sparse coding coefficients α_i are the coordinate values in the dictionary space, as shown in Fig. 1(b, e). If the conformal constraints are added into sparse representation, the coding coefficients preserve the local geometric properties, as shown in Fig. 1(e). On the other hand, from the global perspective, the low-rank constraint on the coefficient matrix yields a more faithful geometric structure in the dictionary space, making the representation sparser. Therefore samples in a local subspace will share some atoms, leading to a compact dictionary, as shown in Fig. 1(f). In contrast, coding coefficients are scattered without such constraints (Fig. 1(b)), resulting in a redundant dictionary (Fig. 1(c)).

The conformal and low-rank constraints can also improve the representation ability of the learned dictionary. The conformal property describes the local geometric relationships of a sample with its K nearest neighbors, and the work [27] shows that conformal constraints contribute to revealing a more faithful subspace. Furthermore, due to the low-rank property of the coefficient matrix, the low-rank constraint helps optimize the subspaces from a global view. Consequently, preserving the conformal and low-rank constraints in sparse representation contributes to building dictionaries which provide more efficient representations. As shown in Fig. 2, (b) and (c) are the reconstruction results without and with the conformal and low-rank constraints, respectively. The dictionary learned from the source image has only 16 atoms. Under the same settings, the dictionary with conformal and low-rank constraints reconstructs the input data (a) better than the one without such constraints, showing a better representation ability. The small size also reveals the compactness of the learned dictionary.
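The low-rank observation above is easy to check numerically. The following sketch (hypothetical data in plain numpy, not the authors' code) stacks noisy copies of one patch into a matrix and inspects its singular values: almost all of the energy concentrates in the first few, which is exactly the degenerate structure that the nuclear-norm constraint on the coefficient matrix is meant to exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 50 "similar patches" = one 8x8 base patch plus small noise,
# flattened into the columns of a 64 x 50 matrix.
base = rng.standard_normal(64)
patches = np.stack([base + 0.05 * rng.standard_normal(64) for _ in range(50)], axis=1)

# The singular values reveal the near low-rank structure of correlated samples.
s = np.linalg.svd(patches, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print("effective rank (99% of the energy):", int(np.searchsorted(energy, 0.99)) + 1)
```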
The conformal constraint is better than other local geometric descriptors such as the classic proximity relations used by LLE [25] and Laplacian eigenmaps [3]. As pointed out by [27], proximity relationships can hardly capture precise geometric properties, so these relationships cannot reflect the underlying subspace. Besides, proximity relationships depend somewhat unpredictably on parameters such as the number of nearest neighbors and on boundary conditions. Therefore, dictionaries learned with proximity-relation constraints are less representative in sparse representation. In contrast, conformal constraints can maximally preserve the local geometric properties. With the low-rank constraint capturing the global structure property, we can achieve more faithful dictionaries. We demonstrate the effectiveness of the conformal and low-rank constraints by comparing with proximity relationships [19] for sparse representation.

4. The CLRSR Approach

Given an input sample set X = [x_1, x_2, ..., x_N], the over-complete dictionary D = [d_1, d_2, ..., d_m] and the sparse decomposition coefficients A = [α_1, α_2, ..., α_N] can be learned by solving the sparse representation problem

$$\min_{D,\alpha_i} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda \|\alpha_i\|_1, \qquad (1)$$

where λ is a scalar constant. However, traditional sparse representation cannot capture the structure of the data X. To capture this structure, we impose a conformal constraint and a low-rank constraint to preserve the local and global structure of the data.

4.1. Conformal Constraint

We involve the local geometric information of the data X by adding a conformal term f(α_i) into (1), leading to

$$\min_{D,\alpha_i} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 + \lambda_2 f(\alpha_i). \qquad (2)$$

Inspired by conformal eigenmaps [27], let g : z_i = g(x_i) be a mapping between two discrete point sets X and Z. Suppose that the point x_i and two of its K nearest neighbors x_j and x_k are sampled from the input data, together with their corresponding mapped points z_i, z_j and z_k in the final embedding space. To achieve a conformal mapping, the triangle formed by the x points should be similar to that formed by the z points. Thus we have

$$\frac{\|x_i - x_j\|^2}{\|z_i - z_j\|^2} = \frac{\|x_i - x_k\|^2}{\|z_i - z_k\|^2} = \frac{\|x_j - x_k\|^2}{\|z_j - z_k\|^2} = s_i, \qquad (3)$$

where s_i represents the scale ratio of these triangles from the original manifold to the embedding. In order to find the maximally angle-preserving embedding, the z points can be solved by minimizing the cost function

$$\sum_i \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|z_j - z_k\|^2 \right)^2, \qquad (4)$$

where N_i contains the neighbors of x_i, including x_i itself. This means that every two points in the same neighborhood set are connected, forming a locally complete graph.

We observe that sparse coding is a mapping process from samples in the feature space to coding coefficients on the dictionary manifold. Specifically, each sample x_i in the dense feature space has its embedding α_i in the sparse dictionary space due to the sparse coding technique. We introduce the conformal property into sparse representation to preserve the local structure information of the input data; consequently, problem (4) turns into our conformal term

$$f(\alpha_i) = \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|\alpha_j - \alpha_k\|^2 \right)^2. \qquad (5)$$

By substituting (5) into (2), we obtain the conformal sparse representation cost function

$$\min_{D,\alpha_i,S} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 + \lambda_2 \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|\alpha_j - \alpha_k\|^2 \right)^2, \qquad (6)$$

where S = [s_1, s_2, ..., s_N] represents all the scales.

4.2. Low-Rank Constraint

The samples extracted from the same scene are similar to each other and lie on a very low-dimensional subspace. The neighboring samples are combined into a data matrix, and this matrix can be approximated by a matrix with a very low rank [18]. This low-rank property is also preserved in the coefficient matrix. Thus we introduce rank(A), the low-rank constraint on the coefficient matrix, to capture the global structure of the data. However, minimizing rank(A) is an NP-hard problem. As a common practice, it can be converted to minimizing ‖A‖_*, where ‖·‖_* denotes the nuclear norm of a matrix (the sum of its singular values) [5]. We add the low-rank constraint to (6) and get the final CLRSR problem

$$\min_{D,A,S} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 + \lambda_2 \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|\alpha_j - \alpha_k\|^2 \right)^2 + \lambda_3 \|A\|_*. \qquad (7)$$

4.3. Optimization

We first convert problem (7) to the following equivalent problem by introducing an auxiliary variable P:

$$\min_{D,A,S,P} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 + \lambda_2 \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|\alpha_j - \alpha_k\|^2 \right)^2 + \lambda_3 \|P\|_*, \quad \text{s.t. } A = P. \qquad (8)$$

This problem can be solved by the augmented Lagrange multiplier (ALM) method [15], which minimizes the following augmented Lagrangian function:

$$\min_{D,A,S,P} \sum_i \|x_i - D\alpha_i\|_2^2 + \lambda_1 \|\alpha_i\|_1 + \lambda_2 \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i \|\alpha_j - \alpha_k\|^2 \right)^2 + \lambda_3 \|P\|_* + \mathrm{tr}\left[Y^\top (A - P)\right] + \frac{\mu}{2}\|A - P\|_F^2, \qquad (9)$$

where Y is the Lagrange multiplier and µ > 0 is a penalty parameter. Observe that there are four variables (D, A, S, P) to be solved in (9).
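To make Eqs. (3)-(6) concrete, the sketch below (a minimal numpy illustration under the notation above, not the authors' implementation) evaluates the per-sample scale s_i by least squares and the conformal penalty f(α_i) for one neighborhood N_i; the columns of X and A hold samples and coding coefficients, and the neighbor indices are assumed to come from a K-nearest-neighbor search.

```python
import numpy as np

def conformal_term(X, A, nbr_idx):
    """Scale s_i (least-squares fit of Eq. (3)) and penalty f(alpha_i) of Eq. (5)
    for one neighborhood. X: d x N samples, A: m x N coefficients,
    nbr_idx: indices of N_i (the sample itself plus its K nearest neighbors)."""
    # Squared pairwise distances over the neighborhood, in feature and coefficient space.
    dx, da = [], []
    for a, j in enumerate(nbr_idx):
        for k in nbr_idx[a + 1:]:
            dx.append(np.sum((X[:, j] - X[:, k]) ** 2))
            da.append(np.sum((A[:, j] - A[:, k]) ** 2))
    dx, da = np.array(dx), np.array(da)
    # Closed-form minimizer of sum_(j,k) (dx - s * da)^2, as in step 4 of Algorithm 1.
    s_i = dx @ da / max(da @ da, 1e-12)
    f_i = np.sum((dx - s_i * da) ** 2)
    return s_i, f_i
```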

We separate the objective function (9) into four subproblems. In each subproblem, only one variable is tackled while the others are fixed. The program runs through these steps iteratively until convergence. The details are outlined in Algorithm 1.

Algorithm 1: Optimization of the CLRSR problem (7).
Input: data X, parameters λ_1, λ_2 and λ_3.
Initialize: D = rand(d, m), A = P = 0, Y = 0, S = 1, µ > 0.
while not converged do
  1. Fix the others and update the sparse coefficients A by
     A = argmin_A Σ_i ( ‖x_i − Dα_i‖_2^2 + λ_1‖α_i‖_1 + λ_2 Σ_{j,k∈N_i} (‖x_j − x_k‖^2 − s_i‖α_j − α_k‖^2)^2 ) + (µ/2)‖A − P + Y/µ‖_F^2.
  2. Fix the others and update P by
     P = argmin_P (λ_3/µ)‖P‖_* + (1/2)‖P − (A + Y/µ)‖_F^2.
  3. Fix the others and update D by
     D = argmin_D Σ_i ‖x_i − Dα_i‖_2^2.
  4. Fix the others and update S one entry at a time through
     s_i = Σ_{j,k∈N_i} ‖x_j − x_k‖^2 ‖α_j − α_k‖^2 / Σ_{j,k∈N_i} ‖α_j − α_k‖^4.
  5. Update the multiplier Y by Y = Y + µ(A − P).
  6. Update µ by µ = min(1.1µ, 10^10).
  7. Check the convergence condition ‖A − P‖_F → 0.
end while
return D.

In the first step, we use the Iterative Projection Method (IPM) [24] to solve the problem. We rewrite it as

$$A = \arg\min_{\alpha} F(\alpha) + 2\tau \|\alpha\|_1, \qquad (10)$$

where

$$F(\alpha) = \sum_i \|x_i - D\alpha_i\|_2^2 + \frac{\mu}{2}\|A - P + Y/\mu\|_F^2 + \lambda_2 \sum_i \sum_{j,k \in N_i} \left( \|x_j - x_k\|^2 - s_i\|\alpha_j - \alpha_k\|^2 \right)^2$$

and τ = λ_1/2. Let Ã = vec(A) denote the vector obtained by concatenating all α_i in A, i.e., Ã = [α_1^T, α_2^T, ..., α_N^T]^T. The memory usage of Ã depends on the dictionary size and the number of training samples; for a dictionary size of 1024 and 100k samples, the memory cost is about 390 MB. Following the IPM described in [24], problem (10) can be solved by Algorithm 2.

Algorithm 2: Sparse coding using IPM.
Require: σ, τ > 0.
Initialize: Ã^(1) = 0, t = 1.
while not converged do
  t = t + 1,
  Ã^(t) = S_{τ/σ}( Ã^(t−1) − ∇F(Ã^(t−1)) / (2σ) ),
  where ∇F(Ã^(t−1)) is the derivative of F(A) with respect to Ã^(t−1), that is
  ∇F(A) = 2( D^T(DA − X) + µ(A − P + Y/µ) ) + 4λ_2 Σ_{j,k∈N_i} s_i (α_k − α_j)( ‖x_j − x_k‖^2 − s_i‖α_j − α_k‖^2 ),
  and S_{τ/σ} is the soft-thresholding operator defined as
  S_{τ/σ}(α) = sgn(α) · max(|α| − τ/σ, 0).
end while
return Ã^(t).

Figure 3. The dictionaries learned from the image Barbara with noise level σ = 20 using K-SVD and our approach. The initial size of the dictionary is 256. Our approach learns a smaller dictionary (155 atoms) by removing the redundant atoms, and has a better representation with less noise.

Step 2 can be solved by the singular value thresholding operator [4]. In step 3, we require each column of the dictionary to be a unit vector, i.e., d_j^T d_j = 1 for all j. It is a quadratic programming problem which can be solved using a one-by-one update algorithm [35]. In this step, if an atom of the dictionary was not used by any sample for decomposition during sparse coding, it is deleted from the dictionary. This happens when there exist similar dictionary atoms which can replace each other, so deleting the similar atoms makes the dictionary more compact and representative.
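Two building blocks of Algorithms 1 and 2 are simple enough to sketch directly: the elementwise soft-thresholding operator S_{τ/σ} used in the IPM update, and the singular value thresholding step [4] that solves the P subproblem in step 2. This is a minimal numpy sketch under the notation above, not the authors' implementation.

```python
import numpy as np

def soft_threshold(A, thr):
    """S_thr(a) = sgn(a) * max(|a| - thr, 0), applied elementwise (Algorithm 2)."""
    return np.sign(A) * np.maximum(np.abs(A) - thr, 0.0)

def singular_value_threshold(M, thr):
    """Solve min_P thr*||P||_* + 0.5*||P - M||_F^2 by shrinking the singular values
    (step 2 of Algorithm 1, with M = A + Y/mu and thr = lambda_3/mu)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - thr, 0.0)) @ Vt
```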

Table 1. PSNR (dB) and SSIM values of the super-resolution results (scaling factor = 2) on the images Yellow Butterfly, Foreman, Hat, Hexagon, House, Leaves, Parrot, Red Butterfly, Starfish and their average, comparing Bicubic [12], SCDL [30], DSRML [19] and our approach.

Figure 4. Image super-resolution results of Red Butterfly (scaling factor: 2). From left to right: low-resolution image, high-resolution ground truth, and the reconstructed results by Bicubic [12], SCDL [30], DSRML [19] and our approach.

4.4. The Learned Dictionary

The incoherence of a dictionary can indicate its representation power. We use the correlation coefficients R between dictionary atoms to measure the incoherence of the dictionary [36]. The correlation coefficient R is defined as

$$R(d_i, d_j) = \frac{\mathrm{cov}(d_i, d_j)}{\sqrt{\mathrm{cov}(d_i, d_i)}\,\sqrt{\mathrm{cov}(d_j, d_j)}}, \qquad (11)$$

where cov(d_i, d_j) represents the covariance of the atoms d_i, d_j in the dictionary. A smaller correlation coefficient R indicates larger incoherence. We compute the largest R over all atom pairs to represent the incoherence of the dictionary, i.e., R_D = max_{i≠j} R(d_i, d_j), d_i, d_j ∈ D. We compare the correlation incoherence of the approaches with and without the conformal and low-rank constraints, learning dictionaries from ten different natural images. The value R_D of the dictionary learned by the proposed approach is smaller than that obtained without the conformal and low-rank constraints, which shows that the proposed approach learns a more representative dictionary.

The proposed approach can remove redundant atoms from the dictionary, so the size of the final dictionary may be smaller than the initial one. Figure 3 shows an example of the dictionaries learned by K-SVD and by the proposed approach. The initial size of the dictionary is 256, while it reduces to 155 after training with the proposed approach. We can see that our approach obtains a more compact dictionary with better representation and less noise.

5. Applications and Comparisons

We apply our conformal and low-rank sparse representation approach to image restoration problems including image super-resolution and image denoising. It is also extended to image editing problems such as video recoloring. It is important to select appropriate parameters for different applications. In the proposed approach, there are four main parameters: K, λ_1, λ_2 and λ_3. Like most methods, we build a validation set for parameter selection. We optimize each parameter while fixing the others, until all parameters achieve their optimal values. The specific values for each application in this paper are given below. In general, the time complexity of our approach is lower than the SCDL [30] method, comparable to the DSRML [19] method, and a little higher than the K-SVD [9] method.

5.1. Image Super-resolution

Image super-resolution aims to reconstruct a high-resolution (HR) image from its corresponding low-resolution (LR) image. We follow the super-resolution framework of Yang et al. [33] in our experiments. The dictionary consists of two coupled dictionaries for the HR and LR images. The two dictionaries are trained together so that the sparse representations of each pair of patches are the same. Our CLRSR approach brings the structure information of the data space into the training, making the coupled dictionaries more compact and representative. As a result, for image super-resolution, the coupled dictionaries can better represent the local and global structure of the HR and LR patches respectively, constructing better HR images.

As in previous super-resolution methods [30, 19], we transform the images to the YCbCr color space and only operate on the luminance component. The patch size is set to 5 × 5. The parameters K, λ_1, λ_2, λ_3 are set to 5, 2, 0.005 and 0.005, respectively. The training images are selected from [33].
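As a schematic of how the coupled dictionaries are used in this framework (the feature extraction and patch aggregation of [33] are omitted, and the dictionaries and the sparse solver are assumed to be given by the trained model), a low-resolution patch is coded over the LR dictionary and its HR counterpart is synthesized from the same coefficients; the OMP sketch in Section 5.2 below is one possible stand-in for the solver.

```python
import numpy as np

def super_resolve_patch(D_l, D_h, y_l, sparse_code):
    """Coupled-dictionary SR for one patch: code the LR patch y_l over the LR
    dictionary D_l, then synthesize the HR patch from the shared coefficients.
    `sparse_code(D, y)` is any sparse coding routine returning a coefficient vector."""
    alpha = sparse_code(D_l, y_l)   # shared sparse representation of the patch pair
    return D_h @ alpha              # HR patch estimate

# Hypothetical shapes: D_l is 25 x m for 5x5 LR patches and D_h is 100 x m
# for the corresponding 10x10 HR patches at scaling factor 2.
```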

Table 2. PSNR (dB) and SSIM values of the super-resolution results (scaling factor = 3) on the images Yellow Butterfly, Foreman, Hat, Hexagon, House, Leaves, Parrot, Red Butterfly, Starfish and their average, comparing Bicubic [12], SCDL [30], DSRML [19] and our approach.

Figure 5. Image super-resolution results of Hexagon (scaling factor: 3). From left to right: low-resolution image, high-resolution ground truth, and the reconstructed results by Bicubic [12], SCDL [30], DSRML [19] and our approach.

We make comparisons with existing image super-resolution approaches, including Bicubic [12], SCDL [30] and DSRML [19]. The test images are from [19] and [30], and some are from the internet. As in most previous methods, we first perform image super-resolution with scaling factor 2. The Peak Signal-to-Noise Ratio (PSNR) and Structural SIMilarity (SSIM) results are listed in Table 1. The values shown in the table are calculated only on the luminance channel. From this table we can see that the proposed approach outperforms the other methods; the PSNR is about 0.34 dB higher than DSRML [19] on average. Fig. 4 shows one example of the super-resolution results of these methods. From the example we can notice that the SCDL method smoothed the edges well but sometimes generated artifacts on some spots, while the DSRML method generated blurred results. In contrast, our approach preserves these local structures well.

We also perform image super-resolution with scaling factor 3. The results are shown in Table 2. Our CLRSR approach outperforms the other methods in terms of PSNR and SSIM values. A texture example in Fig. 5 shows the advantage of our approach in preserving the structure of the image: the SCDL method cannot preserve the structure of the hexagons, and the DSRML method generates small hexagon shadows in it.

Figure 6 illustrates the comparisons with different dictionary sizes. Our approach achieves better results even with a small dictionary size, which means that our learned dictionary has higher compactness.

Figure 6. Comparison of image super-resolution with different dictionary sizes using different methods. The PSNR is the mean value over all the test images used in the paper.

We also validate the contribution of each constraint towards the final results by setting the corresponding λ value to zero, testing with scaling factor 2. With all constraints working, the average PSNR of the test images is the highest. Turning off the conformal constraint decreases the average PSNR, and turning off the low-rank constraint decreases it further, so the low-rank constraint contributes more than the conformal constraint.

5.2. Image Denoising

We also evaluate our approach on image denoising with different noise levels. Many researchers have reported that sparse representations are very effective for image denoising [1]. The dictionary is first learned on the patch set sampled from the noisy image. Then we employ the OMP method [26] to compute the coding coefficients and reconstruct the image. During the dictionary learning process, we set λ_1 = 0.03σ as the sparsity factor for all experiments, where σ is the standard deviation of the Gaussian noise. The other parameters K, λ_2, λ_3 are set to 5, 0.001 and 0.001, respectively. The patch size is set to 8 × 8. The dictionary is initialized to a DCT basis with dimension 256. We make comparisons with the K-SVD denoising method [9] and Bao's proximal method [2] at different noise levels. The test images are selected from these compared methods.
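As a rough illustration of the coding step described above (patch-wise OMP over an overcomplete DCT dictionary), the sketch below implements a tiny greedy OMP in numpy; the dictionary construction and the fixed-sparsity stopping rule are simplified assumptions, not the batch OMP of [26] or the authors' pipeline.

```python
import numpy as np

def overcomplete_dct_dictionary(patch_size=8, n_atoms_1d=16):
    """Separable overcomplete 2-D DCT dictionary (columns = atoms), a simplified
    stand-in for the DCT initialization with 256 atoms mentioned above."""
    k = np.arange(patch_size)[:, None]
    f = np.arange(n_atoms_1d)[None, :]
    D1 = np.cos(np.pi * (k + 0.5) * f / n_atoms_1d)
    D1 /= np.linalg.norm(D1, axis=0, keepdims=True)
    return np.kron(D1, D1)                      # (patch_size^2) x (n_atoms_1d^2)

def omp(D, y, n_nonzero=5):
    """Greedy orthogonal matching pursuit: pick the atom most correlated with the
    residual, re-fit the support by least squares, repeat."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

# Usage: code one (random stand-in) 8x8 patch and reconstruct it.
rng = np.random.default_rng(0)
D = overcomplete_dct_dictionary()
patch = rng.standard_normal(64)
alpha = omp(D, patch, n_nonzero=5)
print("reconstruction error:", np.linalg.norm(patch - D @ alpha))
```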

Table 3. PSNR (dB) values of the denoised results on the images Boat, Fingerprint, Hill, Lena, Man and Peppers at several noise levels σ, comparing K-SVD [9], the proximal method [2] and our approach.

Figure 7. Denoising comparison of the image Man with K-SVD [9] and the proximal method [2] at noise level σ = 25.

Table 3 shows PSNR values of the results, and Figure 7 shows one example of the comparison images. From the table we can see that our approach is better than the others, especially at higher noise levels.

5.3. Video Recoloring

Our approach can also be applied to image editing tasks such as video recoloring. We observe that an editing problem can be regarded as a reconstruction problem: according to some requirements, part of the image content is modified to reconstruct a new image. Here we use the learned dictionary to achieve the editing and reconstruction. Taking image/video recoloring as an example, users can change an object's color by drawing strokes with the desired colors, or by editing the color palettes of the video or image. First, we learn a dictionary in RGB color space from the original image. When a user makes some color edits, the corresponding dictionary atoms are changed and the final recolored image is reconstructed with the new dictionary. Please refer to [6] for more details. In this experiment, the parameters K, λ_1, λ_2, λ_3 are set to 5, 0.05, 0.001 and 0.001, respectively. The dictionary size is initialized to 300. Our recoloring results are compared to Chen's method [6] for visual effect. As shown in Fig. 8, the user would like to change the color of the blue car by drawing green strokes on it. Our approach achieves good reconstruction results, while artifacts appear in the results of Chen et al. [6], where some areas of the car still remain blue. This indicates that our approach obtains a more representative dictionary, which can cover all the input data and produce a better image reconstruction.

Figure 8. Comparison results on video recoloring. From top to bottom: input video, recoloring results from [6], and our approach.

6. Conclusions

In this paper, we proposed a novel conformal and low-rank sparse representation approach for image restoration. By considering the conformal property, the local structure of the data can be preserved; meanwhile, the low-rank constraint on the coefficient matrix reveals more faithful subspaces during sparse representation. With these constraints, our approach obtains a more representative and compact dictionary. It was applied to various applications such as image super-resolution, image denoising and video recoloring, and we evaluated it by comparing to many related methods on various datasets. Our approach does not run very fast due to the additional computation for the conformal relationships, especially for large datasets. However, our program is far from optimized, and we will parallelize it for better efficiency.

Acknowledgement. We thank the reviewers for their valuable feedback. This work is supported in part by grants from NSFC, the 863 Program (2013AA013801), and SRFDP.

References

[1] M. Aharon, M. Elad, and A. Bruckstein. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process., 54(11).
[2] C. Bao, H. Ji, Y. Quan, and Z. Shen. l0 norm based dictionary learning by proximal methods with global convergence. In CVPR.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6).
[4] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4).
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6).
[6] X. Chen, D. Zou, J. Li, X. Cao, Q. Zhao, and H. Zhang. Sparse dictionary learning for edit propagation of high-resolution images. In CVPR.
[7] Y.-L. Chen and C.-T. Hsu. A generalized low-rank appearance model for spatio-temporally correlated rain streaks. In ICCV.
[8] W. Dong, D. Zhang, and G. Shi. Centralized sparse representation for image restoration. In ICCV.
[9] M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process., 15(12).
[10] T. Igarashi, T. Moscovich, and J. F. Hughes. As-rigid-as-possible shape manipulation. ACM Trans. Graph., 24(3).
[11] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. In NIPS.
[12] R. Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech and Signal Processing, 29(6).
[13] S. Kong and D. Wang. A dictionary learning approach for classification: separating the particularity and the commonality. In ECCV.
[14] H. Lee, A. Battle, R. Raina, and A. Y. Ng. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems.
[15] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG.
[16] Y. Lipman, D. Levin, and D. Cohen-Or. Green coordinates. ACM Trans. Graph., 27(3):78:1-78:10.
[17] G. Liu, Z. Lin, and Y. Yu. Robust subspace segmentation by low-rank representation. In ICML.
[18] S. Lu, X. Ren, and F. Liu. Depth enhancement via low-rank matrix completion. In CVPR.
[19] X. Lu, Y. Yuan, and P. Yan. Image super-resolution via double sparsity regularized manifold learning. IEEE TCSVT, 23(12).
[20] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online learning for matrix factorization and sparse coding. The Journal of Machine Learning Research, 11:19-60.
[21] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Trans. Image Process., 17(1):53-69.
[22] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press.
[23] Q. Qiu, Z. Jiang, and R. Chellappa. Sparse dictionary-based representation and recognition of action attributes. In ICCV.
[24] L. Rosasco, A. Verri, M. Santoro, S. Mosci, and S. Villa. Iterative projection methods for structured sparsity regularization. CSAIL Technical Report MIT-CSAIL-TR (CBCL-282).
[25] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science.
[26] R. Rubinstein, M. Zibulevsky, and M. Elad. Efficient implementation of the K-SVD algorithm using batch orthogonal matching pursuit. CS Technion, 40(8):1-15.
[27] F. Sha and L. K. Saul. Analysis and extension of spectral methods for nonlinear dimensionality reduction. In ICML.
[28] F. Siyahjani and G. Doretto. Learning a context aware dictionary for sparse representation. In ACCV.
[29] I. Tosic and P. Frossard. Dictionary learning. IEEE Signal Processing Magazine, 28(2):27-38.
[30] S. Wang, L. Zhang, Y. Liang, and Q. Pan. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In CVPR.
[31] J. Wright, Y. Ma, J. Mairal, G. Sapiro, T. S. Huang, and S. Yan. Sparse representation for computer vision and pattern recognition. Proceedings of the IEEE, 98(6).
[32] J. Wright, A. Y. Yang, A. Ganesh, S. S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell., 31(2).
[33] J. Yang, J. Wright, T. S. Huang, and Y. Ma. Image super-resolution via sparse representation. IEEE Trans. Image Process., 19(11).
[34] M. Yang, L. Zhang, X. Feng, and D. Zhang. Fisher discrimination dictionary learning for sparse representation. In ICCV.
[35] M. Yang, L. Zhang, J. Yang, and D. Zhang. Metaface learning for sparse representation based face recognition. In ICIP.
[36] Q. Zhang and B. Li. Discriminative K-SVD for dictionary learning in face recognition. In CVPR.


A Bilinear Model for Sparse Coding A Blnear Model for Sparse Codng Davd B. Grmes and Rajesh P. N. Rao Department of Computer Scence and Engneerng Unversty of Washngton Seattle, WA 98195-2350, U.S.A. grmes,rao @cs.washngton.edu Abstract

More information

Human Face Recognition Using Generalized. Kernel Fisher Discriminant

Human Face Recognition Using Generalized. Kernel Fisher Discriminant Human Face Recognton Usng Generalzed Kernel Fsher Dscrmnant ng-yu Sun,2 De-Shuang Huang Ln Guo. Insttute of Intellgent Machnes, Chnese Academy of Scences, P.O.ox 30, Hefe, Anhu, Chna. 2. Department of

More information

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval

IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 1. SSDH: Semi-supervised Deep Hashing for Large Scale Image Retrieval IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY SSDH: Sem-supervsed Deep Hashng for Large Scale Image Retreval Jan Zhang, and Yuxn Peng arxv:607.08477v2 [cs.cv] 8 Jun 207 Abstract Hashng

More information

CLASSIFICATION OF ULTRASONIC SIGNALS

CLASSIFICATION OF ULTRASONIC SIGNALS The 8 th Internatonal Conference of the Slovenan Socety for Non-Destructve Testng»Applcaton of Contemporary Non-Destructve Testng n Engneerng«September -3, 5, Portorož, Slovena, pp. 7-33 CLASSIFICATION

More information

Tone-Aware Sparse Representation for Face Recognition

Tone-Aware Sparse Representation for Face Recognition Tone-Aware Sparse Representaton for Face Recognton Lngfeng Wang, Huayu Wu and Chunhong Pan Abstract It s stll a very challengng task to recognze a face n a real world scenaro, snce the face may be corrupted

More information

Learning Semantic Visual Dictionaries: A new Method For Local Feature Encoding

Learning Semantic Visual Dictionaries: A new Method For Local Feature Encoding Learnng Semantc Vsual Dctonares: A new Method For Local Feature Encodng Bng Shua, Zhen Zuo, Gang Wang School of Electrcal and Electronc Engneerng, Nanyang Technologcal Unversty, Sngapore. Emal: {bshua001,

More information

Positive Semi-definite Programming Localization in Wireless Sensor Networks

Positive Semi-definite Programming Localization in Wireless Sensor Networks Postve Sem-defnte Programmng Localzaton n Wreless Sensor etworks Shengdong Xe 1,, Jn Wang, Aqun Hu 1, Yunl Gu, Jang Xu, 1 School of Informaton Scence and Engneerng, Southeast Unversty, 10096, anjng Computer

More information

The Research of Ellipse Parameter Fitting Algorithm of Ultrasonic Imaging Logging in the Casing Hole

The Research of Ellipse Parameter Fitting Algorithm of Ultrasonic Imaging Logging in the Casing Hole Appled Mathematcs, 04, 5, 37-3 Publshed Onlne May 04 n ScRes. http://www.scrp.org/journal/am http://dx.do.org/0.436/am.04.584 The Research of Ellpse Parameter Fttng Algorthm of Ultrasonc Imagng Loggng

More information

An efficient method to build panoramic image mosaics

An efficient method to build panoramic image mosaics An effcent method to buld panoramc mage mosacs Pattern Recognton Letters vol. 4 003 Dae-Hyun Km Yong-In Yoon Jong-Soo Cho School of Electrcal Engneerng and Computer Scence Kyungpook Natonal Unv. Abstract

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

A DCVS Reconstruction Algorithm for Mine Video Monitoring Image Based on Block Classification

A DCVS Reconstruction Algorithm for Mine Video Monitoring Image Based on Block Classification 1 3 4 5 6 7 8 9 10 11 1 13 14 15 16 17 18 19 0 1 Artcle A DCVS Reconstructon Algorthm for Mne Vdeo Montorng Image Based on Block Classfcaton Xaohu Zhao 1,, Xueru Shen 1,, *, Kuan Wang 1, and Wanme L 1,

More information

Competitive Sparse Representation Classification for Face Recognition

Competitive Sparse Representation Classification for Face Recognition Vol. 6, No. 8, 05 Compettve Sparse Representaton Classfcaton for Face Recognton Yng Lu Chongqng Key Laboratory of Computatonal Intellgence Chongqng Unversty of Posts and elecommuncatons Chongqng, Chna

More information

4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 10, OCTOBER 2016

4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 10, OCTOBER 2016 4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 0, OCTOBER 206 Sparse Representaton Wth Spato-Temporal Onlne Dctonary Learnng for Promsng Vdeo Codng Wenru Da, Member, IEEE, Yangme Shen, Xn Tang,

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

CS 534: Computer Vision Model Fitting

CS 534: Computer Vision Model Fitting CS 534: Computer Vson Model Fttng Sprng 004 Ahmed Elgammal Dept of Computer Scence CS 534 Model Fttng - 1 Outlnes Model fttng s mportant Least-squares fttng Maxmum lkelhood estmaton MAP estmaton Robust

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces

Range images. Range image registration. Examples of sampling patterns. Range images and range surfaces Range mages For many structured lght scanners, the range data forms a hghly regular pattern known as a range mage. he samplng pattern s determned by the specfc scanner. Range mage regstraton 1 Examples

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

A Novel Adaptive Descriptor Algorithm for Ternary Pattern Textures

A Novel Adaptive Descriptor Algorithm for Ternary Pattern Textures A Novel Adaptve Descrptor Algorthm for Ternary Pattern Textures Fahuan Hu 1,2, Guopng Lu 1 *, Zengwen Dong 1 1.School of Mechancal & Electrcal Engneerng, Nanchang Unversty, Nanchang, 330031, Chna; 2. School

More information

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data Malaysan Journal of Mathematcal Scences 11(S) Aprl : 35 46 (2017) Specal Issue: The 2nd Internatonal Conference and Workshop on Mathematcal Analyss (ICWOMA 2016) MALAYSIAN JOURNAL OF MATHEMATICAL SCIENCES

More information

Optimal Workload-based Weighted Wavelet Synopses

Optimal Workload-based Weighted Wavelet Synopses Optmal Workload-based Weghted Wavelet Synopses Yoss Matas School of Computer Scence Tel Avv Unversty Tel Avv 69978, Israel matas@tau.ac.l Danel Urel School of Computer Scence Tel Avv Unversty Tel Avv 69978,

More information

General Regression and Representation Model for Face Recognition

General Regression and Representation Model for Face Recognition 013 IEEE Conference on Computer Vson and Pattern Recognton Workshops General Regresson and Representaton Model for Face Recognton Janjun Qan, Jan Yang School of Computer Scence and Engneerng Nanjng Unversty

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

Machine Learning: Algorithms and Applications

Machine Learning: Algorithms and Applications 14/05/1 Machne Learnng: Algorthms and Applcatons Florano Zn Free Unversty of Bozen-Bolzano Faculty of Computer Scence Academc Year 011-01 Lecture 10: 14 May 01 Unsupervsed Learnng cont Sldes courtesy of

More information

Review of approximation techniques

Review of approximation techniques CHAPTER 2 Revew of appromaton technques 2. Introducton Optmzaton problems n engneerng desgn are characterzed by the followng assocated features: the objectve functon and constrants are mplct functons evaluated

More information

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming

Kent State University CS 4/ Design and Analysis of Algorithms. Dept. of Math & Computer Science LECT-16. Dynamic Programming CS 4/560 Desgn and Analyss of Algorthms Kent State Unversty Dept. of Math & Computer Scence LECT-6 Dynamc Programmng 2 Dynamc Programmng Dynamc Programmng, lke the dvde-and-conquer method, solves problems

More information

S INGLE1 image super resolution is the process of reconstructing

S INGLE1 image super resolution is the process of reconstructing Sngle Image Interpolaton va Adaptve Non-Local Sparsty-ased Modelng Yanv Romano, Matan Protter, and Mchael Elad, Fellow, IEEE Abstract Sngle mage nterpolaton s a central and extensvely studed problem n

More information