IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. *, NO. *

Dictionary Pair Learning on Grassmann Manifolds for Image Denoising


Xianhua Zeng, Wei Bian, Wei Liu, Jialie Shen, Dacheng Tao, Fellow, IEEE

This work was partly supported by the National Natural Science Foundation of China (No. , ), the State Key Program of National Natural Science of China (No. U ), the Chongqing Natural Science Foundation (No. cstc2015jcyjA40036) and Australian Research Council Projects (No. FT , DP , and LP ). X. Zeng is with the Chongqing Key Laboratory of Computational Intelligence, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China. E-mail: xianhuazeng@gmail.com. W. Bian and D. Tao are with the Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, NSW 2007, Australia. E-mail: Dacheng.Tao@uts.edu.au. W. Liu is with the Department of Electrical Engineering, Columbia University, New York, NY 10027, USA. J. Shen is with the School of Information Systems, Singapore Management University, Singapore.

Abstract: Image denoising is a fundamental problem in computer vision and image processing that holds considerable practical importance for real-world applications. The traditional patch-based and sparse-coding-driven image denoising methods convert two-dimensional image patches into one-dimensional vectors for further processing. Thus, these methods inevitably break down the inherent two-dimensional geometric structure of natural images. To overcome this limitation of previous image denoising methods, we propose a two-dimensional image denoising model, namely, the Dictionary Pair Learning (DPL) model, and we design a corresponding algorithm called the Dictionary Pair Learning on the Grassmann-manifold (DPLG) algorithm. The DPLG algorithm first learns an initial dictionary pair (i.e., the left and right dictionaries) by employing a subspace partition technique on the Grassmann manifold, wherein the refined dictionary pair is obtained through sub-dictionary pair merging. The DPLG obtains a sparse representation by encoding each image patch only with the selected sub-dictionary pair. The non-zero elements of the sparse representation are further smoothed by the graph Laplacian operator to remove noise. Consequently, the DPLG algorithm not only preserves the inherent two-dimensional geometric structure of natural images but also performs manifold smoothing in the two-dimensional sparse coding space. Experimental evaluations on the benchmark images and the Berkeley segmentation datasets demonstrate that the DPLG algorithm improves the SSIM values, i.e., the perceptual visual quality, of the denoised images. Moreover, the DPLG also produces PSNR values that are competitive with popular image denoising algorithms.

Index Terms: image denoising, dictionary pair, two-dimensional sparse coding, Grassmann manifold, smoothing, graph Laplacian operator.

I. INTRODUCTION

An image is usually corrupted by noise during the processes of being captured, recorded and transmitted. One general assumption is that an observed noisy image x is generated by adding a Gaussian noise corruption to the original clear image y, that is,

$x = y + v,$   (1)

where v is additive white Gaussian noise with a mean of zero and a standard deviation σ. Image denoising plays an important role in the fields of computer vision [1], [2] and image processing [3], [4]. Its goal is to restore the original clear image y from the observed noisy image x, which amounts to finding an inverse transformation from the noisy image to the original clear image.
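To make the noise model in Eq. (1) concrete, the following minimal NumPy sketch (our own illustration; the variable names are not from the paper's code) corrupts a clean grayscale image with additive white Gaussian noise of standard deviation σ:

```python
import numpy as np

def add_gaussian_noise(clean, sigma, seed=0):
    """Simulate Eq. (1): x = y + v with v ~ N(0, sigma^2), element-wise."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=clean.shape)
    return clean + noise

# toy example: a synthetic 64x64 "clean" image with pixel values in [0, 255]
y = np.tile(np.linspace(0, 255, 64), (64, 1))
x = add_gaussian_noise(y, sigma=25.0)
print("empirical noise std:", (x - y).std())   # close to 25
```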
Over the past decades, many denoising methods have been proposed for reconstructing the original image from the observed noisy image by exploiting the inherent spatial correlations [5]-[12]. Image denoising methods are generally divided into three categories: (i) internal denoising methods (e.g., BM3D [10], K-SVD [11], NCSR [12]), which use only the noisy image patches from a single noisy image; (ii) external denoising methods (e.g., SSDA [13], SDAE [14]), which train the mapping from noisy images to clean images using only external clean image patches; and (iii) internal-external denoising methods (e.g., SCLW [15], NSCDL [16]), which jointly use the external statistical information from a clean training image set and the internal statistical information from the observed noisy image. To the best of our knowledge, among these methods, BM3D [10] has been considered the state of the art in image denoising for the past several years. BM3D combines two classical techniques, non-local similarity and domain transformation. However, BM3D is a complex engineering method with many tunable parameters, such as the choices of bases, patch size, transformation thresholds, and similarity measures.

In recent years, machine learning techniques based on domain transformation have gained popularity and success in terms of good denoising performance [11], [12], [14]-[16]. For example, K-SVD [11] is one of the most well-known and effective denoising methods that apply machine learning techniques. This method assumes that a clear image patch can be represented as a sparse linear combination of the atoms of an over-complete dictionary. Hence, the K-SVD method denoises a noisy image by approximating each noisy patch with a sparse linear combination of atoms, which is formulated as minimizing the following objective function:

$\arg\min_{D,\alpha} \{\|D\alpha - X\|^2 + \|\alpha\|_1\},$   (2)

where D is an over-complete dictionary, each column therein corresponding to an atom, and α is the sparse coding coefficient combination of all atoms for reconstructing the clean image patch from the noisy image patch X under the convex sparse prior regularization constraint.
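As a side illustration of the synthesis sparse-coding objective in Eq. (2) (not the K-SVD algorithm itself, whose dictionary update is more involved), the sketch below runs a few generic proximal-gradient (ISTA-style) iterations with a fixed random dictionary; all names and parameter values are our own assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista_step(D, x, alpha, lam, step):
    """One proximal-gradient update for min_alpha ||D alpha - x||^2 + lam * ||alpha||_1."""
    grad = 2.0 * D.T @ (D @ alpha - x)           # gradient of the data-fit term
    return soft_threshold(alpha - step * grad, step * lam)

# toy data: an 8x8 patch flattened to a 64-vector and a random 64x256 dictionary
rng = np.random.default_rng(0)
x = rng.normal(size=64)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)                    # unit-norm atoms
alpha = np.zeros(256)
step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)    # 1 / Lipschitz constant of the gradient
for _ in range(50):
    alpha = ista_step(D, x, alpha, lam=0.5, step=step)
print("non-zeros:", np.count_nonzero(alpha), "residual:", np.linalg.norm(D @ alpha - x))
```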

However, the above dictionary D is not easy to learn, and the corresponding denoising model uses a one-dimensional vector, rather than the original two-dimensional matrix, to represent each image patch. Additionally, on the basis of K-SVD, several effective, adaptive denoising methods, such as [11], [12], [17]-[19], were also proposed in the theme of converting image patches into one-dimensional vectors and clustering noisy image patches into regions with similar geometric structures. Taking the NCSR algorithm [12] as a classical example, it unifies the priors of image local sparsity and non-local similarity via a clustering-based sparse representation. The NCSR algorithm incorporates considerable prior information to improve the denoising performance by introducing the sparse coding noise (i.e., the third regularization term of the following model, which is an extension of the model in Eq. (2)):

$\arg\min_{D,\alpha} \{\|D\alpha - X\|^2 + \lambda\|\alpha\|_1 + \gamma\|\alpha - \beta\|_1\},$   (3)

where β is a good estimation of the sparse codes α, and λ and γ are the balance factors of the two regularization terms (i.e., the convex sparse regularization term and the sparse coding noise term). In the NCSR model, while enforcing the sparsity of the coding coefficients, the sparse codes α are also centralized to attain good estimations β. The dictionary D is acquired by adopting an adaptive sparse domain selection strategy, which executes K-means clustering and then learns a PCA sub-dictionary for each cluster. Nevertheless, this strategy still needs to convert the noisy image patches into one-dimensional vectors, so good estimations β are difficult to obtain.

To summarize, almost all patch-based and sparse-coding-driven image denoising methods convert the raw, two-dimensional matrix representations of image patches into one-dimensional vectors for further processing, and thereby break down the inherent two-dimensional geometric structure of natural images. Moreover, the learned dictionary and sparse coding representations cannot capture the intrinsic position correlations between the pixels within each image patch. On the one hand, to preserve the two-dimensional geometric structure of image patches in the transformation domain, a bilinear transformation is particularly appropriate (for image patches in the matrix representation) for extracting the semantic features of the rows and columns from the image matrices [20]; this is similar to 2DPCA [21] in two directions and can also be viewed as a special case of some existing tensor feature extraction methods such as TDCS [22], STDCS [23] and HOSVD [24]. On the other hand, we assume that image patches sampled from a denoised image lie on an intrinsic smooth manifold. However, the noisy image patches almost never exactly lie on the same manifold due to noise. A related work [26] shows that manifold smoothing is a common trick for effectively removing noise. The weighted neighborhood graph, constructed from image patches, can approximate the intrinsic manifold structure, and the graph Laplacian operator is the generator of the smoothing process on the neighborhood graph [25]. Therefore, the recent promising graph Laplacian operator, used in [26]-[29], [31] for approximating the manifold structure, is leveraged as a generic smooth regularizer while removing the noise of two-dimensional image patches based on the sparse coding model.
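As a rough illustration of the weighted neighborhood graph and graph Laplacian operator mentioned above, the following sketch (our own, with assumed parameter values) builds a k-nearest-neighbor graph over a toy set of 2-D patches and forms the unnormalized Laplacian L = D - W:

```python
import numpy as np

def patch_graph_laplacian(patches, k=6, sigma=10.0):
    """Build a k-NN graph over 2-D patches with Gaussian weights on Frobenius
    distances, then return the weight matrix W and the unnormalized Laplacian L = D - W."""
    n = len(patches)
    # pairwise Frobenius distances between patches
    dist = np.array([[np.linalg.norm(p - q) for q in patches] for p in patches])
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(dist[i])[1:k + 1]          # skip the patch itself
        W[i, nbrs] = np.exp(-(dist[i, nbrs] / (2.0 * sigma)) ** 2)
    W = np.maximum(W, W.T)                           # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W
    return W, L

rng = np.random.default_rng(0)
patches = [rng.normal(size=(8, 8)) for _ in range(20)]
W, L = patch_graph_laplacian(patches)
print(L.shape, "Laplacian row sums ~ 0:", np.allclose(L.sum(axis=1), 0))
```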
With the above considerations, we propose a Dictionary Pair Learning model (DPL model) for image denoising. In the DPL model, the dictionary pair is used to capture the semantic features of two-dimensional image patches, and the graph Laplacian operator guarantees a disciplined smoothing according to the geometric distribution of the image patches in the two-dimensional sparse coding space. However, we face the NP-hardness of directly solving for the dictionary pair and the two-dimensional sparse coding matrices for image denoising. In the NCSR model, the vectorized image patches are clustered into K subsets by K-means, and then one compact PCA sub-dictionary is used for each cluster. So, in our DPL model, two-dimensional image patches can, of course, be clustered into subsets with non-local similarities; the two-dimensional patches in a subset are very similar to each other, and one needs only to extend the PCA sub-dictionary to a 2DPCA sub-dictionary for each cluster. However, the 2D image patches sampled from the noisy image with a multi-resolution, sliding window in our DPL model are of a high quantity and have a non-linear distribution, such that clustering faces a serious computational challenge. Fortunately, the literature [30] proposed a Subspace Indexing Model on the Grassmann Manifold (SIM-GM) that can partition the non-linear space top-to-bottom into local subspaces with a hierarchical tree structure. Mathematically, a Grassmann manifold is the set of all linear subspaces of a fixed dimension [32], [33], and so an extracted PCA subspace in each leaf node of the SIM-GM model corresponds to a point on a Grassmann manifold. To obtain the most effective local space, by introducing Grassmann manifold distances (i.e., the angles between linear subspaces [34]), the SIM-GM is able to automatically manipulate the leaf nodes in the data partition tree and build the most effective local subspace using a bottom-up merging strategy. Thus, by extending this kind of PCA subspace partitioning on a Grassmann manifold to 2DPCA subspace pair partitioning on two Grassmann manifolds, we propose a Dictionary Pair Learning algorithm on Grassmann manifolds (DPLG algorithm for short). Experimental results on benchmark images and the Berkeley segmentation datasets show that the proposed DPLG algorithm is more competitive than state-of-the-art image denoising methods, including both internal and external denoising methods.

The rest of this paper is organized as follows. In Section II, we build a novel dictionary pair learning model for two-dimensional image denoising. Section III first analyzes the learning methods for the dictionary pair and the sparse coding matrices, and then summarizes the dictionary pair learning algorithm on Grassmann manifolds for image denoising. In Section IV, a series of experimental results are shown, and we present the concluding remarks and future work in Section V.

II. DICTIONARY PAIR LEARNING MODEL

According to the above discussion and analysis, to preserve the original two-dimensional geometric structure and to construct a sparse coding model for image denoising, the two-dimensional noisy image patches are encoded by projections on a dictionary pair, which correspond to left-multiplying by one matrix and right-multiplying by another matrix. Then, by exploiting sparse coding and graph Laplacian operator smoothing to remove noise, we design the Dictionary Pair Learning model (DPL model) for image denoising in this section.

A. Dictionary Pair Learning Model for Two-dimensional Sparse Coding

To preserve the two-dimensional geometrical structure with sparse sensing in the transformation domain, we need only to find two linear transformations for simultaneously mapping the columns and rows of image patches under the sparse constraint. Let the image patch set be $\{X_1, X_2, \dots, X_i, \dots, X_n\}$, $X_i \in R^{M \times N}$; our method computes the left and right two-dimensional linear transformations to map the image patches into the two-dimensional sparse matrix space. Thus, the corresponding objective function may be defined as follows:

$\arg\min_{A,B,S_i} \sum_i \bigl\{\|A^T X_i B - S_i\|_F + \lambda \|S_i\|_{F,1}\bigr\},$   (4)

where $A \in R^{M \times M_1}$ and $B \in R^{N \times N_1}$ are respectively called the left coding dictionary and the right coding dictionary, $S = \{S_i\}$, $S_i \in R^{M_1 \times N_1}$, is the set of sparse coefficient matrices, λ is the regularization parameter, $\|\cdot\|_F$ denotes the matrix Frobenius norm, and $\|\cdot\|_{F,1}$ denotes the matrix L1-norm, defined as the sum of the absolute values of all entries. In this paper, the left and right coding dictionaries are combined and called the dictionary pair <A, B>. Once the dictionary pair and the sparse representations are learned, in particular with the left and right dictionaries constrained by block orthogonality, each patch X_i can be reconstructed by multiplying the selected sub-dictionary pair <A_k, B_k> with its sparse representation, that is,

$X_i \approx A_k S_i B_k^T,$   (5)

where the orthogonal sub-dictionaries A_k, B_k are selected to code the image patch X_i, and k is the index of the selected sub-dictionary pair. Note that the selection method for the k-th dictionary pair is described in Section III-B.

B. Graph Laplacian Operator Smoothing

Non-local smoothing and co-sparsity are prevailing techniques for removing noise. Clearly, a natural assumption is that the coding matrices of similar patches should be similar. If similar image patches are encoded only on one sub-dictionary pair of the learned dictionary pair, then, exploiting the graph Laplacian as a smoothing operator, both smoothing and co-sparsity can be simultaneously guaranteed while minimizing a penalty term on the weighted L1-norm divergence between the coding matrix of a given image patch and the coding matrices of its non-local neighborhood patches, as in

$\sum_{i,j} w_{ij} \|S_i - S_j\|_{F,1},$   (6)

where $w_{ij}$ is the similarity between the i-th patch and its j-th neighbor. According to our previous research in manifold learning, the patch similarity metric is chosen to be the generalized Gaussian kernel function of [31]:

$w_{ij} = \begin{cases} \frac{1}{\Gamma}\exp\bigl(-(\|X_i - X_j\|_F / 2\sigma_j)^{\tau}\bigr), & \text{if } X_j \text{ is among the } k \text{ nearest neighbors of } X_i, \\ 0, & \text{otherwise,} \end{cases}$   (7)

where Γ is the normalization factor, $\sigma_j$ is the variance of the neighborhood distribution and τ is the generalized Gaussian exponent. In this paper, the neighborhood similarity is assumed to obey the super-Gaussian distribution

$w_{ij} = \frac{1}{\Gamma}\exp\bigl(-\|X_i - X_j\|_{F,1} / 2\sigma_j\bigr).$   (8)

C. The Final Objective Function

Combining the sparse coding term in Eq. (4) and the smoothing term in Eq. (6), the final objective function of the DPL model is defined as follows:

$\arg\min_{A,B,S_i} \sum_i \Bigl\{\|A^T X_i B - S_i\|_F + \lambda \|S_i\|_{F,1} + \gamma \sum_j w_{ij} \|S_i - S_j\|_{F,1}\Bigr\},$
$\text{s.t. } (1)\ \sum_j w_{ij} = 1; \quad (2)\ A_k^T A_k = I,\ B_k^T B_k = I,\ k = 1, \dots, K,$   (9)

where $\|\cdot\|_{F,1}$ denotes the matrix L1-norm, defined as the sum of the absolute values of all matrix elements, and A and B are constrained to be block orthogonal matrices in the following learning algorithm.

The above Eq. (9) is an accurate description of the Dictionary Pair Learning model (DPL model), and Fig. 1 shows an illustration of the DPL model. In the DPL model, two similar 2-dimensional image patches, X_i and X_j, extracted from the given noisy image are encoded on two dictionaries (i.e., the left dictionary A and the right dictionary B), which, for computational simplicity, respectively consist of the sub-dictionary sets A = {A_1, ..., A_k, ..., A_K} and B = {B_1, ..., B_k, ..., B_K}, as analyzed in Section III-A. The left coding dictionary A is used to extract the features of the column vectors of the image patches, and the right coding dictionary B is used to extract the features of the row vectors of the image patches. For sparse response characteristics, the two learned dictionaries are usually required to be redundant such that they can represent the various local structures of two-dimensional images. Unlike traditional sparse coding, the sparse code of each image patch in our DPL model is a two-dimensional sparse matrix. For sparsely coding each two-dimensional image patch, a simple method is to find the most appropriate sub-dictionary pair from the learned dictionary pair <A, B> to carry out compact coding on it while constraining the coding coefficients on the un-selected sub-dictionary pairs to zero. This method ensures the attainment of a global sparse coding representation. As for the third term in Eq. (9), corresponding to the right of Fig. 1, it is expected to make the two-dimensional sparse representations of non-local similar image patches as close and co-sparse as possible (that is, the constraints of smoothing and non-local co-sparsity). Thus, the two-dimensional sparse coding matrices corresponding to non-local similar image patches are regularized under the manifold smoothing assumption with an L1-norm metric.

Fig. 1. Similar image patches encoded by the dictionary pair <A, B>.
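The following small sketch illustrates the bilinear coding of Section II: a two-dimensional patch is coded as S = A_k^T X B_k and reconstructed as in Eq. (5), and the ||.||_{F,1} norm is evaluated on the code. The orthonormal sub-dictionary pair here is a hypothetical stand-in obtained by QR factorization, not one learned by the TTSP/SM procedure of Section III:

```python
import numpy as np

def l1_matrix_norm(S):
    """The ||.||_{F,1} norm used in the paper: sum of absolute values of all entries."""
    return np.abs(S).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 8))                       # a two-dimensional image patch

# hypothetical orthonormal sub-dictionary pair, e.g. from a QR factorization
A_k, _ = np.linalg.qr(rng.normal(size=(8, 8)))
B_k, _ = np.linalg.qr(rng.normal(size=(8, 8)))

S = A_k.T @ X @ B_k                               # two-dimensional code of the patch
X_hat = A_k @ S @ B_k.T                           # reconstruction as in Eq. (5)

print("reconstruction error:", np.linalg.norm(X - X_hat))   # ~0 for square orthogonal A_k, B_k
print("||S||_{F,1}:", l1_matrix_norm(S))
```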

III. DICTIONARY PAIR LEARNING ALGORITHM ON GRASSMANN MANIFOLDS

In the DPL model (i.e., Eq. (9)), the dictionary pair <A, B> and the sparse coding matrices S_i are all unknown, and solving for them simultaneously is an NP-hard problem. Therefore, our learning strategy is to decompose the problem into three sub-tasks: (1) learning the dictionary pair <A, B> from two-dimensional noisy image patches by eigen-decomposition, as shown in Section III-A; (2) fixing the dictionary pair <A, B>, and then updating the two-dimensional sparse coding matrices with smoothing, as shown in Section III-B; and (3) reconstructing the denoised image, as shown in Section III-C. Thus, the so-called Dictionary Pair Learning algorithm on Grassmann manifolds (DPLG) is analyzed and summarized as follows.

A. Learning the Dictionary Pair

For solving Eq. (9), one important issue centers on how to learn the dictionary pair <A, B> for sparsely and smoothly coding the two-dimensional image patches. Due to the difficulty and instability of learning the dictionary by directly optimizing the sparse coding model, the dictionaries can also be selected directly, as in conventional sparsity-based coding models (i.e., analytically designed dictionaries). Thus, we design a 2DPCA subspace pair partition on two Grassmann manifolds to implement the clustering-based sub-dictionary pair learning. Two sub-dictionaries for each cluster are computed, corresponding to decomposing the covariance matrix and its transposed counterpart of the two-dimensional image patches (i.e., the sub-dictionary pair). All such sub-dictionary pairs construct two large over-complete dictionaries to characterize all possible local structures of a given observed image. It is assumed that the k-th subset is extracted to obtain the k-th sub-dictionary pair <A_k, B_k>, where k = 1, ..., K. Then, in the dictionary pair <A, B> = {<A_k, B_k>}, k = 1, ..., K, the left dictionary A = {A_1, ..., A_k, ..., A_K} is viewed as a point set on one Grassmann manifold, and the right dictionary B = {B_1, ..., B_k, ..., B_K} is viewed as a point set on another Grassmann manifold, because a Grassmann manifold is the set of all linear subspaces of a fixed dimension [32]. In this paper, obtaining the dictionary pair <A, B> includes two basic stages: the initial dictionary pair is obtained by the following Top-bottom Two-dimensional Subspace Partition (TTSP) algorithm; next, the refined dictionary pair is obtained by the Sub-dictionary Merging (SM) algorithm.

1) Obtaining the Initial Dictionary Pair by the TTSP Algorithm: To overcome the difficulty of directly learning an effective dictionary pair <A, B> under the non-linear distribution of all of the two-dimensional image patches, the entire training image patch set is divided into non-overlapping subsets with linear structures suited to a classical linear method such as 2DPCA, and the sub-dictionary pair for each subset is easily learned by the eigen-decompositions of two covariance matrices (see Footnote 1 below). The literature [30] constructed a kind of data partition tree for subspace indexing based on the global PCA, but it is not suitable for our two-dimensional subspace partition for learning the dictionary pair <A, B>. We propose a Top-bottom Two-dimensional Subspace Partition algorithm (TTSP algorithm) for obtaining the initial dictionary pair <A, B>. The TTSP algorithm recursively generates a binary tree, and each leaf node is used for learning a sub-dictionary pair by using an extended 2DPCA technique. The detailed steps of the TTSP algorithm are described in Algorithm 1.

Algorithm 1 (TTSP algorithm) Top-bottom Two-dimensional Subspace Partition
Input: Training image patches, the maximum depth of the binary tree.
Output: The dictionary pair <A, B> and the centers {C_k} of all leaf nodes.
PROCEDURES:
Step 1, The first node is the root node, which includes all image patches.
Step 2, For all image patches in the current leaf node, run the following steps 1)-4):
1) Compute the leading eigenvectors u and v of the two covariance matrices in Footnote 1, respectively.
2) Compute the one-dimensional projection representations of all image patches in this node, that is, $s_i = u^T X_i v$, i = 1, ..., L.
3) Partition the one-dimensional real number set {s_i} into two clusters by K-means.
4) Partition the image patches corresponding to these two clusters into the left child and the right child; simultaneously, the depth of the node is increased by one.
Step 3, IF the depth of the node is larger than the maximum depth, or the number of image patches in this leaf node is smaller than the row number or column number of the image patches, THEN stop the partition; ELSE repeat Step 2 recursively for the left child node and the right child node.
Step 4, Compute the left sub-dictionary and the right sub-dictionary for each leaf node by the following steps 1)-4):
1) Compute the center of the given leaf node k.
2) Compute the two covariance matrices L_cov and R_cov in Footnote 1.
3) Compute the eigenvectors u_1, u_2, ..., u_d and v_1, v_2, ..., v_d corresponding to the d largest eigenvalues, that is, solve the two eigen-equations $L_{cov} u = \lambda u$ and $R_{cov} v = \lambda v$.
4) Form the left sub-dictionary $A_k = [u_1, u_2, \dots, u_d]$ and the right sub-dictionary $B_k = [v_1, v_2, \dots, v_d]$.
Step 5, Collect the sub-dictionaries of the K leaf nodes into the dictionary pair <A, B> (i.e., the left dictionary A = {A_1, ..., A_k, ..., A_K} and the right dictionary B = {B_1, ..., B_k, ..., B_K}).

2) Merging Sub-dictionary Pairs by the SM Algorithm: In the TTSP algorithm, each leaf node corresponds to two subspaces, namely, the left sub-dictionary and the right sub-dictionary, called a sub-dictionary pair. However, as the number of levels in the partition increases, the number of training image patches in each leaf node decreases. Leaf nodes may not be the most effective local spaces for describing the image non-local similarity and local distribution, because each leaf node may contain an insufficient number of samples. One reasonable remedy is to merge the leaf nodes that span almost the same left sub-dictionaries and almost the same right sub-dictionaries. A Grassmann manifold is the set of all linear subspaces of a fixed dimension, and any two points on a Grassmann manifold correspond to two subspaces. Therefore, to merge the very similar leaf nodes, we assume that all left sub-dictionaries from all leaf nodes lie on one Grassmann manifold and that all right sub-dictionaries from all leaf nodes lie on another Grassmann manifold. The angles between linear subspaces are an intuitively reasonable measure for describing the divergence between subspaces on a Grassmann manifold [32].

Footnote 1: The two non-symmetrical covariance matrices [21] of a matrix dataset $\{X_1, X_2, \dots, X_L\}$ are $L_{cov} = \frac{1}{L}\sum_{i=1}^{L}(X_i - C_k)(X_i - C_k)^T$ and $R_{cov} = \frac{1}{L}\sum_{i=1}^{L}(X_i - C_k)^T(X_i - C_k)$, where $C_k = \frac{1}{L}\sum_{i=1}^{L} X_i$.
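A minimal sketch of Step 4 of the TTSP algorithm, assuming one cluster of patches is already given: it forms the two covariance matrices of Footnote 1 and keeps the d leading eigenvectors as the sub-dictionary pair (function and variable names are our own):

```python
import numpy as np

def sub_dictionary_pair(patches, d):
    """Compute <A_k, B_k> for one cluster of 2-D patches via the two covariance
    matrices of Footnote 1 (a sketch of Step 4 of the TTSP algorithm)."""
    C_k = np.mean(patches, axis=0)                          # cluster center
    L_cov = np.mean([(X - C_k) @ (X - C_k).T for X in patches], axis=0)
    R_cov = np.mean([(X - C_k).T @ (X - C_k) for X in patches], axis=0)

    def top_eigvecs(M, d):
        # eigenvectors of a symmetric matrix, sorted by decreasing eigenvalue
        w, V = np.linalg.eigh(M)
        return V[:, np.argsort(w)[::-1][:d]]

    A_k = top_eigvecs(L_cov, d)                             # left sub-dictionary
    B_k = top_eigvecs(R_cov, d)                             # right sub-dictionary
    return A_k, B_k, C_k

rng = np.random.default_rng(0)
cluster = [rng.normal(size=(8, 8)) for _ in range(200)]
A_k, B_k, C_k = sub_dictionary_pair(cluster, d=8)
print(A_k.shape, B_k.shape)                                 # (8, 8) (8, 8)
print("A_k^T A_k = I:", np.allclose(A_k.T @ A_k, np.eye(8)))
```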

Thus, for computational convenience, the similarity metric between two subspaces is typically defined by taking the cosines of the principal angles. Taking the left sub-dictionaries as an example, the cosines of the principal angles are defined as follows.

Definition 1. Let A_1 and A_2 be two m-dimensional subspaces corresponding to two left sub-dictionaries. The cosine of the t-th principal angle between the two subspaces span(A_1) and span(A_2) is defined by

$\cos(\theta_t) = \max_{u_t \in \mathrm{span}(A_1)}\ \max_{v_t \in \mathrm{span}(A_2)} u_t^T v_t, \quad \text{s.t. } u_t^T u_t = v_t^T v_t = 1,\ u_t^T u_r = v_t^T v_r = 0\ (t \ne r),$   (10)

where $0 \le \theta_t \le \pi/2$, $t, r = 1, \dots, m$, and $u_t$ and $v_t$ are basis vectors from the two subspaces, respectively.

In Eq. (10), the first principal angle θ_1 is the smallest angle among those between all pairs of unit basis vectors drawn respectively from the two subspaces. The remaining principal angles are obtained from the other basis vectors of each subspace, as shown in Fig. 2. The smaller the principal angles are, the more similar the two subspaces are (i.e., the closer they are on the Grassmann manifold). In fact, the cosines of all principal angles can be computed by a more numerically stable method, the Singular Value Decomposition (SVD) [34], as described in Theorem 1, for which we provide a simple proof in Appendix A.

Fig. 2. Principal angles θ_1, θ_2, ... between sub-dictionaries (subspaces A_1, A_2, ..., A_K).

Let A_1 and A_2 be two m-dimensional column-orthogonal matrices that respectively consist of orthogonal bases of two left sub-dictionaries. Then the cosines of all principal angles between the two subspaces (i.e., the two sub-dictionaries) are computed by the following SVD equation.

Theorem 1. If A_1 and A_2 are two m-dimensional subspaces, then

$A_1^T A_2 = U \Lambda V^T,$   (11)

where the diagonal matrix $\Lambda = \mathrm{diag}(\cos\theta_1, \dots, \cos\theta_m)$, $U U^T = I_m$ and $V V^T = I_m$.

In the following subspace merging algorithm, the similarity Sim(A_1, A_2) between the two subspaces A_1 and A_2 is defined as the average of all principal angle cosine values:

$\mathrm{Sim}(A_1, A_2) = \frac{1}{m} \sum_{l=1}^{m} \cos\theta_l.$   (12)

Therefore, the larger Sim(A_i, A_j) is, the more similar the two subspaces are (i.e., the closer they are on the Grassmann manifold), and those almost identical subspaces should be merged into a single subspace. The same consideration applies to the right sub-dictionaries B_i, i = 1, ..., K; the similarity metric between the right sub-dictionaries is defined in the same manner. Therefore, simultaneously taking the left sub-dictionaries and the right sub-dictionaries into account, our Sub-dictionary Merging algorithm (SM algorithm) is described in Algorithm 2.
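The subspace similarity of Theorem 1 and Eq. (12) reduces to one SVD, as the sketch below shows; the merging test with δ = 0.99 mirrors the spirit of Algorithm 2 but is only a simplified, assumed check:

```python
import numpy as np

def subspace_similarity(A1, A2):
    """Average cosine of the principal angles between span(A1) and span(A2),
    computed from the singular values of A1^T A2 (Theorem 1 / Eq. (12))."""
    cosines = np.linalg.svd(A1.T @ A2, compute_uv=False)
    return float(np.mean(np.clip(cosines, 0.0, 1.0)))

rng = np.random.default_rng(0)
A1, _ = np.linalg.qr(rng.normal(size=(64, 8)))     # two 8-dimensional subspaces of R^64
A2, _ = np.linalg.qr(rng.normal(size=(64, 8)))

print("Sim(A1, A1) =", subspace_similarity(A1, A1))   # exactly 1.0
print("Sim(A1, A2) =", subspace_similarity(A1, A2))   # < 1.0 for random subspaces

# a merging rule in the spirit of the SM algorithm, with delta = 0.99 as in Algorithm 2
delta = 0.99
print("merge?", subspace_similarity(A1, A2) > delta)
```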

Algorithm 2 (SM algorithm) Sub-dictionary Merging algorithm
Input: Sub-dictionary pairs <A_i, B_i>, i = 1, ..., K1, and the pre-specified constant δ (empirical value 0.99).
Output: The reduced sub-dictionary pairs <A_k, B_k>, k = 1, ..., K, where K <= K1.
PROCEDURES:
Step 1, Find subset_i and subset_j such that Sim(A_i, A_j) > δ and Sim(B_i, B_j) > δ.
Step 2, Delete A_i, A_j and B_i, B_j, and replace them with the newly merged left sub-dictionary and right sub-dictionary computed from the merged image patch set subset_i ∪ subset_j.
Step 3, Go to Step 1 until every Sim(A_i, A_j) < δ or Sim(B_i, B_j) < δ.
Step 4, Update the dictionary pair <A, B> using the reduced sub-dictionary pairs.

B. Updating the Sparse Coding Matrices

Section III-A describes a method to rapidly learn the dictionary pair <A, B>, where A = {A_1, ..., A_k, ..., A_K} and B = {B_1, ..., B_k, ..., B_K}. For sparsely coding each two-dimensional noisy image patch and removing noise, we need only to find the most appropriate sub-dictionary pair <A_k, B_k> from the learned dictionary pair <A, B> to represent the patch, and denoise the image patch by smoothing the sparse representation. For the i-th noisy image patch, we assume that the most appropriate sub-dictionary pair <A_k, B_k> is used to encode it and that the other sub-dictionary pairs are constrained to provide zero coefficient coding. According to the nearest center, the most appropriate sub-dictionary pair for the i-th noisy image patch X_i can be selected by the smallest L1-norm coding, that is,

$k = \arg\min_k \{\|A_k^T (X_i - C_k) B_k\|_{F,1}\}, \quad k = 1, \dots, K,$   (13)

where K is the total number of sub-dictionary pairs, C_k denotes the center of the k-th leaf node, and $\|\cdot\|_{F,1}$ denotes the matrix L1-norm, defined as the sum of the absolute values of all matrix elements. For obtaining sparse representations, we assume that any noisy image patch is encoded by only one sub-dictionary pair and that the coding coefficients on the other sub-dictionary pairs are constrained to zero. Therefore, for any noisy image patch X_i, we can simplify Eq. (9) to obtain the following objective function.

Definition 2. For image patch X_i, let the selected nearest sub-dictionary pair be <A_k, B_k> as in Eq. (13). Then the smoothing sparse coding is computed by the following formula:

$\arg\min_{S_i} \Bigl\{\|A_k^T X_i B_k - S_i\|_F + \gamma \sum_j w_{ij} \|S_i - S_j\|_{F,1}\Bigr\}, \quad \text{s.t. } \sum_j w_{ij} = 1,$   (14)

where S_j is the sparse coding matrix of the j-th nearest image patch on the sub-dictionary pair <A_k, B_k>, w_ij is the non-local neighborhood similarity, and γ is the balance factor.

As for the balance factor γ, when the two terms of Eq. (14) are simultaneously optimized, we reach the following conclusion (the proof is given in Appendix B).

Theorem 2. If X_i is the image patch corrupted by noise N(0, σ), and the non-local similarity obeys the Laplacian distribution with parameter σ_j, then the balance factor $\gamma = \frac{\sigma^2}{2\sigma_j}$.

Clearly, the objective function of S_i in Eq. (14) is convex and can be efficiently solved. The first term minimizes the reconstruction error on the sub-dictionary pair <A_k, B_k>, and the second term ensures the smoothing and co-sparsity in the coefficient matrix space. We initialize the coding matrices S_i and S_j by the projections of the image patch X_i and its neighbors X_j on the selected sub-dictionary pair <A_k, B_k>, that is,

$S_i(t) = A_k^T X_i B_k,$   (15)
$S_j(t) = A_k^T X_j B_k, \quad j = 1, \dots, k1,$   (16)

where image patch X_j is one of the k1-nearest neighbors of image patch X_i. Additionally, for computational convenience, we can reformulate and relax Eq. (14) into the following objective function:

$\arg\min_{S_i} \Bigl\{\|A_k^T X_i B_k - S_i\|_F + \gamma \bigl\|S_i - \sum_j w_{ij} S_j\bigr\|_{F,1}\Bigr\}, \quad \text{s.t. } \sum_j w_{ij} = 1.$   (17)

Following the literature [35], a threshold-shrinkage algorithm is adopted to solve Eq. (17) (i.e., using the gradient descent method and the threshold-shrinkage strategy). Therefore, the sparse coding matrix S_i on the sub-dictionary pair <A_k, B_k> is updated by the following formula:

$S_i(t+1) = f\Bigl(S_i(t) - \sum_j w_{ij} S_j(t),\ \eta\gamma\Bigr) + \sum_j w_{ij} S_j(t), \quad \text{s.t. } \|X_i - A_k S_i B_k^T\|_F^2 < c N \sigma^2,$   (18)

where σ is the noise variance, N is the number of image patch pixels, η is the gradient descent step, c is a scaling factor that is empirically set to 1.15, and f(·,·) is the soft threshold-shrinkage function, that is,

$f(z, \delta) = \begin{cases} 0, & \text{if } |z| < \delta, \\ z - \mathrm{sgn}(z)\delta, & \text{otherwise,} \end{cases}$   (19)

where sgn(z) is the sign function.
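A compact sketch of the shrinkage update in Eqs. (18)-(19), under the assumption that the neighbor codes and normalized weights w_ij have already been computed (the residual constraint of Eq. (18) is omitted here for brevity):

```python
import numpy as np

def shrink(Z, delta):
    """Soft threshold-shrinkage of Eq. (19), applied element-wise."""
    return np.where(np.abs(Z) < delta, 0.0, Z - np.sign(Z) * delta)

def update_code(S_i, neighbor_codes, weights, eta, gamma):
    """One update in the spirit of Eq. (18): shrink the deviation of S_i from the
    weighted average of its neighbors' codes, then add the weighted average back."""
    S_bar = sum(w * S_j for w, S_j in zip(weights, neighbor_codes))  # weights sum to 1
    return shrink(S_i - S_bar, eta * gamma) + S_bar

rng = np.random.default_rng(0)
S_i = rng.normal(size=(8, 8))
neighbors = [S_i + 0.1 * rng.normal(size=(8, 8)) for _ in range(6)]  # similar codes
weights = np.full(6, 1.0 / 6.0)                                      # hypothetical w_ij
S_new = update_code(S_i, neighbors, weights, eta=1.0, gamma=0.05)
print("zeros introduced:", np.count_nonzero(S_new == 0.0))
```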

C. Reconstructing the Denoised Image

As a type of non-local similarity and transformation domain approach, a given noisy image needs to be divided into many overlapping small image patches, and the corresponding denoised image is obtained by combining all of the denoised image patches. Let x denote a noisy image, and let the binary matrix R_i be used for extracting the i-th image patch at position i, that is,

$X_i = R_i x, \quad i = 1, 2, \dots, n,$   (20)

where n denotes the number of possible image patches. If we let S_i be the coding matrix, with smoothing and co-sparsity, obtained by using the sub-dictionary pair <A_k, B_k>, then the denoised image x is reconstructed by

$x = \Bigl(\sum_i R_i^T A_k S_i B_k^T\Bigr) \oslash \Bigl(\sum_i R_i^T R_i \mathbf{1}\Bigr),$   (21)

where ⊘ denotes element-wise division and 1 denotes a matrix of ones. That is, Eq. (21) puts all denoised patches together as the denoised image x (the overlapped pixels between neighboring patches are averaged).

D. Summary of the DPLG Algorithm

1) The Description of the DPLG Algorithm: Summarizing the above analysis, for adaptively learning and denoising from a given noisy image itself, we put forward the Dictionary Pair Learning algorithm on Grassmann-manifold (DPLG). The DPLG algorithm allows the dictionary pair to be updated according to the last denoised result, and thus obtains better representations of the noisy patches. The DPLG algorithm is therefore designed as an iterative image denoising method. Each iteration includes three basic tasks, namely, learning the dictionary pair <A, B> from the noisy image patches sampled from the current noisy image at multiple resolutions, updating the 2D sparse representations for the image patches from the current noisy image, and reconstructing the denoised image, where the current noisy image is a slight translation from the current denoised image toward the original noisy image. Fig. 3 shows the basic working flowchart of the DPLG algorithm, and the detailed procedures of the DPLG algorithm are described in Algorithm 3.

Fig. 3. The working flowchart of the DPLG algorithm: multi-resolution patch extraction from the input, two-dimensional subspace partitioning and merging into sub-dictionary pairs <A_1, B_1>, <A_2, B_2>, ..., <A_k, B_k>, ..., selection of <A_k, B_k> for each noisy image patch and its neighbors, smoothing sparse coding, reconstruction, and a convergence test before output of the denoised image.

2) Time Complexity Analysis: Our DPLG method keeps the original 2-dimensional structure of each image patch unchanged. If the size of the sampled image patches is b × b, and the sub-dictionary pair <A_k, B_k> is computed by using 2DPCA on each image patch subset, then A_k and B_k are two b × b orthogonal matrices. Comparatively, NCSR needs to compute a more complex b² × b² orthogonal matrix as the dictionary by using PCA on the one-dimensional representations of image patches. For example, in the NCSR method, the matrix size is 64 times larger than in our method when b = 8. Therefore, DPLG requires less time to compute the eigenvectors. Moreover, comparing our DPLG method with the NCSR method, the former rapidly divides each leaf node top-to-bottom into a left child and a right child by the first principal component projection on the current sub-dictionary pair (i.e., a two-way partition of one-dimensional real numbers), whereas the latter divides the whole training set (i.e., b²-dimensional vectors) into the specified clusters by applying K-means with a higher time complexity. Compared with the K-SVD method, each atom of its single dictionary D needs to be updated by an SVD decomposition. If the number of dictionary atoms in K-SVD is equal to the total number of sub-dictionary atoms in the DPLG or NCSR, then the computational complexity of K-SVD is the largest. However, the dictionary D of K-SVD in real-world applications is usually empirically set to a smaller over-complete dictionary atom number than in the DPLG and NCSR methods, so that K-SVD has a faster computing speed.

Additionally, in the sparse coding step, the three internal denoising methods DPLG, NCSR and K-SVD have slight differences in time complexity, as shown in Table I. Without loss of generality, letting the number of clusters equal K, the number of image patches equal n, the size of each image patch equal b × b, the number of iterations of K-means clustering equal l, the number of nearest neighbors equal k1, the number of dictionary atoms in K-SVD equal H, and the maximum number of nonzero codes for each image patch in K-SVD equal M, we compare the computational complexity of the dictionary learning step and the sparse coding step in the three iterative dictionary learning methods (internal denoising methods), namely, DPLG, NCSR and K-SVD, as shown in Table I. Because the non-local neighborhood similarity is computed within each cluster in our manifold smoothing strategy, computing the Laplacian similarity only needs linear computational time. Finally, the total time complexity of the DPLG is less than that of the NCSR and K-SVD algorithms with the same size of their dictionaries (that is, when H = Kb).

TABLE I
TIME COMPLEXITY FOR ONE UPDATE OF TWO BASIC STEPS IN THREE DICTIONARY LEARNING ALGORITHMS: DPLG, NCSR AND K-SVD

Algorithm | Dictionary learning step | Sparse coding step
DPLG      | O(Knl) + O(Kb^3)         | O(Knb) + O(n k1)
NCSR      | O(Knlb^2) + O(Kb^6)      | O(Knb^2)
K-SVD     | O(Hn) + O(Hb^6)          | O(HnM)
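To make the reconstruction step of Section III-C concrete, the sketch below extracts overlapping patches (the effect of Eq. (20)) and averages them back into an image (the effect of Eq. (21)); it uses array slicing instead of the explicit binary matrices R_i, which is an implementation choice of ours:

```python
import numpy as np

def aggregate_patches(patches, positions, image_shape, patch_size):
    """Average overlapping denoised patches back into an image (the effect of
    Eq. (21)); positions are the top-left corners used for extraction in Eq. (20)."""
    acc = np.zeros(image_shape)
    cnt = np.zeros(image_shape)
    b = patch_size
    for (r, c), P in zip(positions, patches):
        acc[r:r + b, c:c + b] += P
        cnt[r:r + b, c:c + b] += 1.0
    return acc / np.maximum(cnt, 1.0)      # element-wise division, as in Eq. (21)

# toy example: extract all overlapping 8x8 patches of a 32x32 image and put them back
rng = np.random.default_rng(0)
img = rng.normal(size=(32, 32))
b = 8
positions = [(r, c) for r in range(0, 32 - b + 1) for c in range(0, 32 - b + 1)]
patches = [img[r:r + b, c:c + b].copy() for r, c in positions]       # Eq. (20)
rec = aggregate_patches(patches, positions, img.shape, b)
print("perfect reassembly of untouched patches:", np.allclose(rec, img))
```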

Algorithm 3 (DPLG algorithm) Dictionary Pair Learning on Grassmann-manifold
Input: Noisy image N_Im0 and estimated noise variance σ0.
Output: Denoised image D_Im.
PROCEDURES:
Step 1, Set the initial parameters, including the number of iterations, the patch size, the maximum depth of leaf nodes, and the pre-specified constants μ1 and μ2.
Step 2, Let the current denoised image and noisy image be D_Im = N_Im = N_Im0.
Step 3, Loop the following steps 1) to 9) until the given number of iterations is reached:
1) $N_{Im} = D_{Im} + \mu_1 (N_{Im0} - D_{Im})$, $\sigma = \mu_2 \sqrt{\sigma_0^2 - \frac{1}{N}\sum (N_{Im0} - N_{Im})^2}$.
2) Extract the 2D noisy image patch set {X_i} from the given noisy image N_Im at multiple resolutions.
3) Divide the 2D image patch set {X_i} into K subsets by using Steps 1-3 of the TTSP algorithm.
4) Compute the two-dimensional sub-dictionary pairs <A_k, B_k> and the center C_k for each 2D patch subset by using Step 4 of the TTSP algorithm.
5) Merge the almost identical sub-dictionary pairs using the SM algorithm.
6) Select the corresponding sub-dictionary pair <A_k, B_k> for each noisy image patch X_i from the current noisy image N_Im using $k = \arg\min_k \{\|A_k^T (X_i - C_k) B_k\|_{F,1}\}$.
7) Compute the neighborhood similarity w_ij between the noisy image patches {X_i} using Eq. (8).
8) Compute the smooth and sparse 2D representations for each image patch and its neighbors from the current noisy image by using Eq. (18).
9) Reconstruct the denoised image D_Im by integrating all denoised image patches $Y_i = A_k S_i B_k^T$ using Eq. (21).
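The outer iteration of Algorithm 3 can be summarized by the skeleton below. Only Step 3.1 (blending back toward the noisy input and re-estimating the noise level) is spelled out; the dictionary-pair learning, coding and reconstruction steps are abstracted into a placeholder function, so this is a hedged sketch of the control flow rather than the paper's implementation:

```python
import numpy as np

def dplg_outer_loop(noisy, denoise_once, iterations=18, mu1=0.1, mu2=1.0):
    """Skeleton of Algorithm 3's iteration: blend the last denoised image back
    toward the original noisy image (Step 3.1), re-estimate the residual noise
    level, then run one round of dictionary-pair learning + coding + reconstruction
    (abstracted here as `denoise_once`, which is NOT part of the paper's code)."""
    sigma0 = float(np.std(noisy))             # crude initial noise estimate (assumption)
    d_im = noisy.copy()
    for _ in range(iterations):
        n_im = d_im + mu1 * (noisy - d_im)                            # Step 3.1
        resid = sigma0 ** 2 - np.mean((noisy - n_im) ** 2)
        sigma = mu2 * np.sqrt(max(resid, 1e-12))                      # updated noise level
        d_im = denoise_once(n_im, sigma)                              # Steps 3.2-3.9
    return d_im

# placeholder "denoiser" so the skeleton runs: a light blur via local averaging
def fake_denoiser(img, sigma):
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return out

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))
print(dplg_outer_loop(noisy, fake_denoiser).shape)
```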
IV. EXPERIMENTS

In this section, we verify the image denoising performance of the proposed DPLG method. We test the performance of the DPLG method on benchmark images [38], [39] and on 100 test images from the Berkeley Segmentation Dataset [40]. Moreover, the experimental results of the proposed DPLG method are compared with seven developed state-of-the-art denoising methods, including three internal denoising methods and four denoising methods that use external information from clean natural images.

A. Quantitative Assessment of Denoised Images

An objective image quality metric plays an important role in image denoising applications. Currently, three classical image quality assessment metrics are typically used: the Root Mean Square Error (RMSE), the Peak Signal-to-Noise Ratio (PSNR) and the measure of Structural SIMilarity (SSIM) [36]. The PSNR and RMSE are the simplest and most widely used image quality metrics. Common knowledge holds that the smaller the RMSE is, the better the denoising is; equivalently, the larger the PSNR is, the better the denoising is. Moreover, the RMSE and PSNR have the same assessment ability, although they do not match the perceptual visual quality of denoised images very well. The third quantitative evaluation method, the Structural SIMilarity (SSIM), focuses on a perceptual quality metric, which compares normalized local patterns of pixel intensities. In our experiments, the PSNR and SSIM are used as objective assessments.
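For reference, the RMSE and PSNR used in Section IV-A can be computed as below (a simple sketch with an assumed pixel range of 0-255); SSIM is more involved and is typically taken from an existing implementation such as scikit-image's structural_similarity rather than hand-coded:

```python
import numpy as np

def rmse(clean, denoised):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((clean.astype(float) - denoised.astype(float)) ** 2)))

def psnr(clean, denoised, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for images with the given peak value."""
    return float(20.0 * np.log10(peak / rmse(clean, denoised)))

rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(64, 64))
denoised = clean + rng.normal(0, 5.0, size=(64, 64))
print("RMSE:", rmse(clean, denoised), "PSNR:", psnr(clean, denoised))
# SSIM compares local structure rather than pixel-wise error; in practice one can use
# skimage.metrics.structural_similarity (assumed available) instead of hand-coding it.
```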

B. Experiments on Benchmark Images

To evaluate the performance of the proposed model, we exploit the proposed DPLG algorithm for denoising ten noisy benchmark images [38] and another difficult-to-denoise noisy image (named the ChangE-3 image [39]). Several state-of-the-art denoising methods with default parameters are used for comparison with the proposed DPLG algorithm, including the internal denoising methods BM3D [10], K-SVD [11] and NCSR [12], and the external denoising methods SSDA [13], SDAE [14], SCLW [15], and NSCDL [16]. As for the parameter setting of our DPLG algorithm, the k1-nearest-neighbor parameter, the maximum depth of leaf nodes and the number of iterations of the DPLG are empirically set to 6, 7 and 18, respectively, from a series of tentative tests. Taking the k1-nearest-neighbor parameter as an example, we analyze the performance of our method for different k1-nearest-neighbor parameters, as shown in Fig. 4. Accordingly, when the neighborhood size is not too large (for example, for k1-nearest neighbors in [6, 60]), the performance of our DPLG method does not significantly change. However, the DPLG obtains the largest SSIM value when the k1-nearest-neighbor parameter is set to 6.

Fig. 4. The denoising performance of the DPLG at different k1-nearest neighbors (SSIM and PSNR on the House image, σ = 50).

1) Comparing with Internal Denoising Methods: The 20 different noisy versions of the 11 benchmark images, that is, corresponding to 220 noisy images, are denoised respectively by the previously mentioned four internal denoising methods: DPLG, NCSR, BM3D and K-SVD. The SSIM results of the four tested methods are reported in Table II, and the highest SSIM values are displayed in black bold. The PSNR results are reported in Table III, and the highest PSNR values are displayed in black bold. It is worth noting that our DPLG method preserves the two-dimensional geometrical structure of the image patches and thus can significantly achieve the best visual quality, as shown in columns 5-17 in Table II. From Table III, we can see that when the noise level is not very high (noise variance σ < 30), all four methods achieve very good denoised images. When the noise level is high (noise variance 30 ≤ σ < 80), our DPLG method obtains basically the best denoising performance, corresponding to columns 9-13 in Table III. Moreover, Fig. 5 shows the plots of the average PSNR and SSIM of the 11 images at different noise corruption levels. Regarding the structural similarity (SSIM) assessment of the restored images, our DPLG algorithm obtains the best denoising results for 87 noisy images, the NCSR method is best for 60 noisy images, the BM3D method is best for 71 noisy images, and the K-SVD method is best for 2 noisy images. The experiments show that the proposed DPLG algorithm has the best average performance for restoring the perceptual visual effect, as shown at the bottom of Table II and in Fig. 5(a). Under the PSNR assessment, our DPLG method obtains the best denoising results for 67 noisy images, while the NCSR method is best for 31 noisy images, the BM3D method is best for 105 noisy images, and the K-SVD method is best for 18 noisy images. The DPLG also has competitive performance in reconstructing the pixel intensity, as shown in Table III and Fig. 5(b).

Fig. 5. (a) The average SSIM values and (b) the average PSNR values of the 11 denoised images at different noise variances σ, for DPLG, NCSR, BM3D and K-SVD.

2) Comparing with External Denoising Methods: In this experiment, we compare with several denoising methods that exploit the statistical information of external, noise-free natural images; our DPLG method only exploits the internal statistical information of the tested noisy image itself. The SCLW and NSCDL denoising methods jointly exploit external statistical information from a clean training image set and the internal statistics of the observed noisy image: SCLW learns the dictionary from external and internal examples, and NSCDL learns coupled dictionaries from clean natural images and exploits the non-local similarity of the test noisy images. The SSDA and SDAE adopt the same denoising technique, i.e., learning a denoising mapping using a stacked denoising auto-encoder algorithm with sparse coding characteristics and a deep neural network structure [37]. Their aim is to find the mapping from noisy image patches to noise-free image patches by training on a large-scale external natural image set. Table IV shows the comparison of the DPLG with several internal-external denoising methods and external denoising methods, in terms of their characteristics and the denoising performance on benchmark images. Our experiments show that the joint utilization of external and internal examples generally outperforms either stand-alone approach, but no method is the best for all images. For example, our DPLG obtains the best denoising result on the House benchmark image by using only the smoothing, sparseness and non-local self-similarity of the noisy image. Furthermore, our DPLG still maintains a better performance than the two external denoising methods SSDA and SDAE.

TABLE IV
COMPARISON OF DPLG WITH SEVERAL DENOISING METHODS USING EXTERNAL TRAINING IMAGES
(PSNR at σ = 25 reported on Barbara, Boat, House and their average.)

Methods     | Internal Information | External Information | Combining (In-Ex)
DPLG        | yes                  | no                   | no
SCLW [15]   | yes                  | yes                  | yes
NSCDL [16]  | yes                  | yes                  | yes
SSDA [13]   | no                   | yes                  | no
SDAE [14]   | no                   | yes                  | no

3) Comparing with Iterative Denoising Methods: Our DPLG method is an iterative method that allows the dictionary pair to be updated using the last denoised result, and thereby obtains better 2-dimensional representations of the noisy patches from the noisy image. Fig. 6 and Fig. 7 show the denoising results of two typical noisy images (House and ChangE-3) with strong noise corruption (noise variance σ = 50) after 60 iterations. The experimental results empirically demonstrate the convergence of the DPLG, as shown in Fig. 6: as the number of iterations increases, the denoised results get better. Fig. 6(a)-(b) display the plots of their PSNR values and SSIM values versus iterations, respectively. Comparing with two known iterative methods, K-SVD and NCSR, Fig. 6 shows that our DPLG increases its PSNR and SSIM more rapidly over the iterations and achieves the best denoising performance among the several iterative methods. The DPLG also has competitive performance for reconstructing the smooth, texture and edge regions, as shown in the second row of Fig. 7.

C. Experiments on BSD Test Images

To further demonstrate the performance of the proposed DPLG method, image denoising experiments were also conducted on 100 test images from the public benchmark Berkeley Segmentation Dataset [40]. Aiming at 10 different noisy versions of these images, that is, corresponding to a total of 1000 noisy images, the comparison experiments were completed by respectively running the NCSR, BM3D, K-SVD and our DPLG method.

TABLE II
THE SSIM VALUES OBTAINED BY DENOISING THE 11 IMAGES AT DIFFERENT NOISE VARIANCES
(For each noise level σ, the table lists the SSIM of DPLG, NCSR, BM3D and K-SVD on the Barbara, Boat, Cameraman, Couple, Fingerprint, House, Lena, Man, Monarch_full, Peppers and ChangE-3 images, together with the average over all images; the highest SSIM value in each setting is shown in bold.)

TABLE III
THE PSNR VALUES OBTAINED BY DENOISING THE 11 IMAGES AT DIFFERENT NOISE VARIANCES
(For each noise level σ, the table lists the PSNR of DPLG, NCSR, BM3D and K-SVD on the Barbara, Boat, Cameraman, Couple, Fingerprint, House, Lena, Man, Monarch_full, Peppers and ChangE-3 images, together with the average over all images; the highest PSNR value in each setting is shown in bold.)

Fig. 6. (a) The PSNR values versus iterations by using DPLG, NCSR and K-SVD when σ = 50; (b) the SSIM values versus iterations by using DPLG, NCSR and K-SVD when σ = 50 (House image and ChangE-3 image).

In these experiments, the parameter settings for the DPLG are the same as in the above experiments. Under the different Gaussian noise corruptions, the average PSNR and SSIM values of the 100 noisy images are shown in Fig. 8. Our DPLG method achieves the best overall performance for restoring the perceptual visual effect, as shown in the distribution of the SSIM values of the 100 denoised images by the four methods in Fig. 8(a), and has competitive performance for reconstructing pixel intensity, as shown in the distribution of the PSNR values of the 100 denoised images in Fig. 8(b).

V. CONCLUSION

In this paper, we proposed the DPLG algorithm, a novel two-dimensional image denoising method working on Grassmann manifolds and leading to state-of-the-art performance. The DPLG algorithm has three primary advantages: 1) the adaptive dictionary pairs are rapidly learned via subspace partitioning and sub-dictionary pair merging on the Grassmann manifolds; 2) two-dimensional sparse representations are notably easy to obtain; and 3) a graph Laplacian operator makes the two-dimensional sparse code representations vary smoothly for denoising. Moreover, extensive experimental results on the benchmark images and the Berkeley segmentation datasets demonstrated that our DPLG algorithm obtains better average performance for restoring the perceptual visual effect than the state-of-the-art internal denoising methods. In the future, we will consider several potential problems, such as learning three-dimensional multiple dictionaries for video denoising and exploring the fusion of manifold denoising and multi-dimensional sparse coding techniques.

Fig. 7. The performance on the denoised House image and ChangE-3 image at noise variance σ = 50 by using three iterative algorithms, DPLG, NCSR and K-SVD, each iterated 60 times. Our DPLG method achieves the best denoised results of the several methods (corresponding to the second row; the denoised House image, PSNR: 30.042, SSIM: , and the denoised ChangE-3 image, PSNR: , SSIM: ).

APPENDIX A
THE PROOF OF THEOREM 1 IN SECTION III-A

We give a simple proof of Theorem 1.

Proof. A_1 and A_2 are two D × m column-orthogonal matrices, so $A_1^T A_2$ is an m × m matrix. According to the SVD decomposition of the matrix $A_1^T A_2$, if $\lambda_1, \lambda_2, \dots, \lambda_m$ are its m singular values from the largest to the smallest, and U, V are the two m-dimensional orthogonal matrices corresponding to the singular values $\lambda_1, \lambda_2, \dots, \lambda_m$, then

$U^T A_1^T A_2 V = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_m).$

Since V is an m × m orthogonal matrix, $A_2 V$ is a rotation of the subspace $A_2$ by the rotation matrix V, that is, $\mathrm{span}(A_2) = \mathrm{span}(A_2 V)$; similarly, $\mathrm{span}(A_1) = \mathrm{span}(A_1 U)$. Let $u_k$ and $v_k$ of Eq. (11) respectively be the k-th columns of the matrices $A_1 U$ and $A_2 V$, that is,

$[u_1, u_2, \dots, u_k, \dots, u_m] = A_1 U, \quad [v_1, v_2, \dots, v_k, \dots, v_m] = A_2 V.$

Then

$\mathrm{diag}(\cos\theta_1, \dots, \cos\theta_k, \dots, \cos\theta_m) = [u_1, \dots, u_m]^T [v_1, \dots, v_m] = U^T A_1^T A_2 V = \mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_m).$

Fig. 8. The distribution of (a) the SSIM values and (b) the PSNR values of denoising 100 noisy images corrupted by different noise levels by using NCSR, BM3D, K-SVD and our DPLG algorithm.

APPENDIX B
THE PROOF OF THEOREM 2 IN SECTION III-B

Proof. Consider the first term of Eq. (14). X_i is the image patch corrupted by noise N(0, σ), and S_i is the code of the corresponding clear patch, so

$E(\|A_k^T X_i B_k - S_i\|_F) = E\bigl(\|A_k^T (X_i - A_k S_i B_k^T) B_k\|_F\bigr).$

According to Eq. (18), $A_k S_i B_k^T$ is the reconstruction of the clear image patch, and <A_k, B_k> is an orthogonal dictionary pair, so

$E(\|A_k^T X_i B_k - S_i\|_F) = \sigma^2.$

As for the second term of Eq. (14), according to the assumption that $\|S_i - S_j\|_{F,1}$ obeys the Laplacian distribution in Eq. (6),

$E\Bigl(\gamma \sum_j w_{ij} \|S_i - S_j\|_{F,1}\Bigr) = \gamma E\Bigl(\sum_j w_{ij} E(\|S_i - S_j\|_{F,1})\Bigr) = \gamma E\Bigl(\sum_j w_{ij} \cdot 2\sigma_j\Bigr) = \gamma E(2\sigma_j) = 2\gamma\sigma_j, \quad \Bigl(\sum_j w_{ij} = 1\Bigr).$

For preserving the scaling consistency, the ratio of the two terms should be equal to 1:

$\frac{\sigma^2}{2\gamma\sigma_j} = 1 \;\Rightarrow\; \gamma = \frac{\sigma^2}{2\sigma_j}.$

REFERENCES
[1] P. Chatterjee and P. Milanfar, "Is denoising dead?," IEEE Trans. Image Process., vol. 19, no. 4, Apr.
[2] S. Camille, C. Deledalle and J. Aujol, "Adaptive regularization of the NL-means: application to image and video denoising," IEEE Trans. Image Process., vol. 23, no. 8, Aug.
[3] S. G. Chang, B. Yu and M. Vetterli, "Adaptive wavelet thresholding for image denoising and compression," IEEE Trans. Image Process., vol. 9, no. 9, Sep.
[4] C. H. Xie, J. Y. Chang and W. B. Xu, "Medical image denoising by generalised Gaussian mixture modelling with edge information," IET Image Process., vol. 8, no. 8, Aug.
[5] J.-L. Starck, E. J. Candes and D. L. Donoho, "The curvelet transform for image denoising," IEEE Trans. Image Process., vol. 11, no. 6, Jun.
[6] A. Buades, B. Coll and J. M. Morel, "A review of image denoising algorithms, with a new one," Multiscale Model. Simul., vol. 4, no. 2, Feb.
[7] J. Portilla, V. Strela, M. J. Wainwright and E. P. Simoncelli, "Image denoising using scale mixtures of Gaussians in the wavelet domain," IEEE Trans. Image Process., vol. 12, no. 11, Nov.
[8] A. Buades, B. Coll and J. M. Morel, "A non-local algorithm for image denoising," in Proc. IEEE Comput. Soc. Conf. CVPR, Jun. 2005, vol. 2.
[9] A. Rajwade, A. Rangarajan and A. Banerjee, "Image denoising using the higher order singular value decomposition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 4, Apr.
[10] K. Dabov, A. Foi, V. Katkovnik and K. Egiazarian, "Image denoising by sparse 3-D transform-domain collaborative filtering," IEEE Trans. Image Process., vol. 16, no. 8, Aug.
[11] M. Elad and M. Aharon, "Image denoising via sparse and redundant representations over learned dictionaries," IEEE Trans. Image Process., vol. 15, no. 12, Dec.
[12] W. Dong, L. Zhang, G. Shi and X. Li, "Nonlocally centralized sparse representation for image restoration," IEEE Trans. Image Process., vol. 22, no. 4, Apr.

[13] J. Y. Xie, L. L. Xu, and E. H. Chen, "Image denoising and inpainting with deep neural networks," in Proc. Adv. NIPS, Dec. 2012.
[14] H. M. Li, "Deep learning for image denoising," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 7, no. 3, Mar.
[15] Z. Y. Wang, Y. Z. Yang, J. C. Yang and T. S. Huang, "Designing a composite dictionary adaptively from joint examples," in Proc. IEEE Comput. Soc. Conf. CVPR, Jun. 2015.
[16] L. X. Chen and X. J. Liu, "Nonlocal similarity based coupled dictionary learning for image denoising," Journal of Computational Information Systems, vol. 9, no. 11, Nov.
[17] S. Hawe, M. Kleinsteuber and K. Diepold, "Analysis operator learning and its application to image reconstruction," IEEE Trans. Image Process., vol. 22, no. 6, Jun.
[18] P. Chatterjee and P. Milanfar, "Clustering-based denoising with locally learned dictionaries," IEEE Trans. Image Process., vol. 18, no. 7, Jul.
[19] W. M. Zuo, L. Zhang, C. W. Song and D. Zhang, "Texture enhanced image denoising via gradient histogram preservation," in Proc. IEEE Comput. Soc. Conf. CVPR, Jun. 2013.
[20] J. P. Ye, "Generalized low rank approximations of matrices," Mach. Learn., vol. 61, no. 1-3, Jan.
[21] J. Yang, D. Zhang, A. F. Frangi and J. Y. Yang, "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 1, Jan.
[22] S. J. Wang, J. Yang, M. F. Sun, X. J. Peng, M. M. Sun and C. G. Zhou, "Sparse tensor discriminant color space for face verification," IEEE Trans. Neural Netw. Learn. Syst., vol. 23, no. 6, Jun.
[23] S. J. Wang, J. Yang, N. Zhang and C. G. Zhou, "Tensor discriminant color space for face recognition," IEEE Trans. Image Process., vol. 20, no. 9, Sep.
[24] J. Liang, Y. He, D. Liu and X. Zeng, "Image fusion using higher order singular value decomposition," IEEE Trans. Image Process., vol. 21, no. 5, May.
[25] A. Elmoataz, O. Lezoray and S. Bougleux, "Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing," IEEE Trans. Image Process., vol. 17, no. 7, Jul.
[26] M. Hein and M. Maier, "Manifold denoising," in Proc. Adv. NIPS, Jun. 2006.
[27] M. Zheng, J. J. Bu, C. Chen and C. Wang, "Graph regularized sparse coding for image representation," IEEE Trans. Image Process., vol. 20, no. 5, May.
[28] S. J. Wang, S. C. Yan, J. Yang, C. G. Zhou and X. L. Fu, "A general exponential framework for dimensionality reduction," IEEE Trans. Image Process., vol. 23, no. 2, Feb.
[29] M. Belkin and P. Niyogi, "Laplacian eigenmaps for dimensionality reduction and data representation," Neural Comput., vol. 15, no. 6, Jun.
[30] X. C. Wang, L. Zhu and D. C. Tao, "Subspaces indexing model on Grassmann manifold for image search," IEEE Trans. Image Process., vol. 20, no. 9, Sep.
[31] X. H. Zeng, S. W. Luo, J. Wang and J. L. Zhao, "Geodesic distance-based generalized Gaussian Laplacian eigenmap," Journal of Software (Chinese), vol. 20, no. 4, Apr.
[32] J. Hamm, "Subspace-based learning with Grassmann manifolds," Ph.D. thesis, University of Pennsylvania.
[33] P. Turaga, A. Veeraraghavan, A. Srivastava and R. Chellappa, "Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 11, Nov.
[34] G. H. Golub and C. F. Van Loan, Matrix Computations (3rd ed.), Johns Hopkins University Press, Baltimore, MD, USA.
[35] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, no. 11, Nov.
[36] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, Apr.
[37] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio and P. A. Manzagol, "Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion," J. Mach. Learn. Res., vol. 11, no. 3, Mar.
[38] Benchmark images from http://.../elad/software/
[39] ChangE-3 image from http://english.cntv.cn/.../...shtml
[40] 100 test images from http://.../Projects/CS/vision/grouping/fg/

Xianhua Zeng is currently an associate professor with the Chongqing Key Laboratory of Computational Intelligence, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China. He received his PhD degree in computer software and theory from Beijing Jiaotong University, and he was a Visiting Scholar at the University of Technology, Sydney. His main research interests include image processing, machine learning and data mining.

Wei Bian (M'14) received the BEng degree in electronic engineering and the BSc degree in applied mathematics in 2005, and the MEng degree in electronic engineering in 2007, all from the Harbin Institute of Technology, China, and the PhD degree in computer science in 2012 from the University of Technology, Sydney, Australia. His research interests include pattern recognition and machine learning.

Wei Liu (M'14) received the Ph.D. degree from Columbia University, New York, NY, USA. He was a recipient of the 2013 Jury Award for Best Thesis of Columbia University. He is currently a research staff member of the IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA, with research interests in machine learning, computer vision, pattern recognition, and information retrieval.

Jialie Shen is a faculty member of the School of Information Systems at Singapore Management University (SMU), Singapore. He received his PhD in computer science from the University of New South Wales (UNSW), Australia, in the area of large-scale media retrieval and database access methods. He worked as a faculty member at UNSW, Sydney, and as a researcher in the information retrieval research group at the University of Glasgow for a few years, before moving to SMU, Singapore.

Dacheng Tao (F'15) is Professor of Computer Science with the Centre for Quantum Computation & Intelligent Systems and the Faculty of Engineering and Information Technology at the University of Technology, Sydney. He mainly applies statistics and mathematics to data analytics problems, and his research interests spread across computer vision, data science, image processing, machine learning, and video surveillance. His research results have been expounded in one monograph and 100+ publications at prestigious journals and prominent conferences, such as IEEE T-PAMI, T-NNLS, T-IP, JMLR, IJCV, NIPS, ICML, CVPR, ICCV, ECCV, AISTATS, ICDM, and ACM SIGKDD, with several best paper awards, such as the best theory/algorithm paper runner-up award at IEEE ICDM'07, the best student paper award at IEEE ICDM'13, and the 2014 ICDM 10-Year Highest-Impact Paper Award.
