Image Deblurring and Super-resolution by Adaptive Sparse Domain Selection and Adaptive Regularization

Weisheng Dong (a,b), Lei Zhang (b,1), Member, IEEE, Guangming Shi (a), Senior Member, IEEE, and Xiaolin Wu (c), Senior Member, IEEE

a. Key Laboratory of Intelligent Perception and Image Understanding (Chinese Ministry of Education), School of Electronic Engineering, Xidian University, China
b. Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong
c. Dept. of Electrical and Computer Engineering, McMaster University, Canada

Abstract: As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of l1-norm optimization techniques, and to the fact that natural images are intrinsically sparse in some domain. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that the contents can vary significantly across different images or different patches in a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches, and then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models are learned from the dataset of example image patches. The best fitted AR models to a given patch are adaptively selected to regularize the image local structures. Second, the image non-local self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.

Key Words: Sparse representation, image restoration, deblurring, super-resolution, regularization.

1. Corresponding author: cslzhang@comp.polyu.edu.hk. This work is supported by the Hong Kong RGC General Research Fund (PolyU 5375/09E).

I. Introduction

Image restoration (IR) aims to reconstruct a high-quality image x from its degraded measurement y. IR is a typical ill-posed inverse problem [1], and it can be generally modeled as

    y = DHx + υ,    (1)

where x is the unknown image to be estimated, H and D are degrading operators, and υ is additive noise. When H and D are identities, the IR problem becomes denoising; when D is identity and H is a blurring operator, IR becomes deblurring; when D is identity and H is a set of random projections, IR becomes compressed sensing [2-4]; when D is a down-sampling operator and H is a blurring operator, IR becomes (single image) super-resolution. As a fundamental problem in image processing, IR has been extensively studied in the past three decades [5-20]. In this paper, we focus on deblurring and single image super-resolution.

Due to the ill-posed nature of IR, the solution to Eq. (1) with an l2-norm fidelity constraint, i.e., x̂ = arg min_x ||y − DHx||_2^2, is generally not unique. To find a better solution, prior knowledge of natural images can be used to regularize the IR problem. One of the most commonly used regularization models is the total variation (TV) model [6-7]: x̂ = arg min_x {||y − DHx||_2^2 + λ||∇x||_1}, where ||∇x||_1 is the l1-norm of the first-order derivative of x and λ is a constant. Since the TV model favors piecewise constant image structures, it tends to smooth out the fine details of an image. To better preserve image edges, many algorithms have later been developed to improve the TV models [17-19, 42, 45, 47]. The success of TV regularization validates the importance of good image prior models in solving IR problems.

In wavelet based image denoising [21], researchers have found that the sparsity of wavelet coefficients can serve as a good prior. This reveals the fact that many types of signals, e.g., natural images, can be sparsely represented (or coded) using a dictionary of atoms, such as DCT or wavelet bases. That is, denoting by Φ the dictionary, we have x ≈ Φα and most of the coefficients in α are close to zero. With the sparsity prior, the representation of x over Φ can be estimated from its observation y by solving the following l0-minimization problem: α̂ = arg min_α {||y − DHΦα||_2^2 + λ||α||_0}, where the l0-norm counts the number of nonzero coefficients in vector α. Once α̂ is obtained, x can then be estimated as x̂ = Φα̂.
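To make the degradation model of Eq. (1) concrete, the sketch below simulates y = DHx + υ with a Gaussian blur standing in for H and simple decimation standing in for D; the kernel width, scale factor and noise levels are placeholders that echo, but do not exactly reproduce, the experimental settings described in Section VI.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, blur_sigma=3.0, scale=1, noise_sigma=np.sqrt(2), seed=None):
    """Simulate y = D H x + v from Eq. (1): blur (H), down-sample (D), add noise (v).
    scale=1 gives a deblurring setting; scale>1 gives a super-resolution setting."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(x, sigma=blur_sigma)          # H: blurring operator
    down = blurred[::scale, ::scale]                        # D: down-sampling operator
    return down + rng.normal(0.0, noise_sigma, down.shape)  # v: additive Gaussian noise

# Toy usage on a random "image"; real experiments use natural images.
x = 255.0 * np.random.rand(96, 96)
y_deblur = degrade(x)                                        # blurred + noisy observation
y_sr = degrade(x, blur_sigma=1.6, scale=3, noise_sigma=0.0)  # noiseless LR observation
```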

The l0-minimization is an NP-hard combinatorial search problem, and is usually solved by greedy algorithms [48, 60]. The l1-minimization, as the closest convex function to l0-minimization, is then widely used as an alternative approach to solving the sparse coding problem: α̂ = arg min_α {||y − DHΦα||_2^2 + λ||α||_1} [60]. In addition, recent studies showed that iteratively reweighting the l1-norm sparsity regularization term can lead to better IR results [59].

Sparse representation has been successfully used in various image processing applications [2-4, 13, 21-25, 32]. A critical issue in sparse representation modeling is the determination of the dictionary Φ. Analytically designed dictionaries, such as DCT, wavelet, curvelet and contourlets, share the advantage of fast implementation; however, they lack adaptivity to image local structures. Recently, there has been much effort in learning dictionaries from example image patches [13-15, 26-31, 55], leading to state-of-the-art results in image denoising and reconstruction. Many dictionary learning (DL) methods aim at learning a universal and over-complete dictionary to represent various image structures. However, sparse decomposition over a highly redundant dictionary is potentially unstable and tends to generate visual artifacts [53-54].

In this paper we propose an adaptive sparse domain selection (ASDS) scheme for sparse representation, in which a set of compact sub-dictionaries is learned from high-quality example image patches. The example image patches are clustered into many clusters. Since each cluster consists of many patches with similar patterns, a compact sub-dictionary can be learned for each cluster. In particular, for simplicity we use the principal component analysis (PCA) technique to learn the sub-dictionaries. For an image patch to be coded, the sub-dictionary that is most relevant to the given patch is selected. Since the given patch can be better represented by the adaptively selected sub-dictionary, the whole image can be more accurately reconstructed than by using a universal dictionary, which will be validated by our experiments.

Apart from the sparsity regularization, other regularization terms can also be introduced to further improve the IR performance. In this paper, we propose to use piecewise autoregressive (AR) models, which are pre-learned from the training dataset, to characterize the local image structures. For each given local patch, one or several AR models can be adaptively selected to regularize the solution space. On the other hand, considering the fact that there are often many repetitive image structures in an image, we introduce a non-local (NL) self-similarity constraint as another regularization term, which is very helpful in preserving edge sharpness and suppressing noise.
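Since the sub-dictionaries proposed later are orthonormal PCA bases, the l1-regularized sparse coding of a single patch has a closed-form solution by soft thresholding. The following minimal sketch assumes an orthonormal dictionary; the DCT matrix is only a stand-in for a learned sub-dictionary, and the value of λ is arbitrary.

```python
import numpy as np
from scipy.fft import dct

def soft(v, t):
    """Soft-thresholding operator: sign(v) * max(|v| - t, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_orthonormal(x, Phi, lam):
    """argmin_a ||x - Phi a||_2^2 + lam*||a||_1 for an orthonormal Phi reduces
    to soft thresholding of the analysis coefficients Phi^T x."""
    return soft(Phi.T @ x, lam / 2.0)

n = 49                                       # a vectorized 7x7 patch
Phi = dct(np.eye(n), axis=0, norm='ortho')   # orthonormal DCT basis (stand-in)
x = np.random.randn(n)
alpha = sparse_code_orthonormal(x, Phi, lam=0.5)
x_hat = Phi @ alpha                          # sparse approximation of the patch
```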

After introducing ASDS and adaptive regularizations (AReg) into the sparse representation based IR framework, we present an efficient iterative shrinkage (IS) algorithm to solve the l1-minimization problem. In addition, we adaptively estimate the image local sparsity to adjust the sparsity regularization parameters. Extensive experiments on image deblurring and super-resolution show that the proposed ASDS-AReg approach can effectively reconstruct the image details, outperforming many state-of-the-art IR methods in terms of both PSNR and visual perception.

The rest of the paper is organized as follows. Section II introduces the related works. Section III presents the ASDS-based sparse representation. Section IV describes the AReg modeling. Section V summarizes the proposed algorithm. Section VI presents experimental results and Section VII concludes the paper.

II. Related Works

It has been found that natural images can be generally coded by structural primitives, e.g., edges and line segments [61], and these primitives are qualitatively similar in form to simple cell receptive fields [62]. In [63], Olshausen et al. proposed to represent a natural image using a small number of basis functions chosen out of an over-complete code set. In recent years, such a sparse coding or sparse representation strategy has been widely studied to solve inverse problems, partially due to the progress of l0-norm and l1-norm minimization techniques [60].

Suppose that x ∈ R^n is the target signal to be coded, and Φ = [φ_1, ..., φ_m] ∈ R^{n×m} is a given dictionary of atoms (i.e., the code set). The sparse coding of x over Φ is to find a sparse vector α = [α_1; ...; α_m] (i.e., most of the coefficients in α are close to zero) such that x ≈ Φα [49]. If the sparsity is measured as the l0-norm of α, which counts the non-zero coefficients in α, the sparse coding problem becomes min_α ||x − Φα||_2^2 s.t. ||α||_0 ≤ T, where T is a scalar controlling the sparsity [55]. Alternatively, the sparse vector α can also be found by

    α̂ = arg min_α {||x − Φα||_2^2 + λ||α||_0},    (2)

where λ is a constant. Since the l0-norm is non-convex, it is often replaced by either the standard l1-norm or the weighted l1-norm to make the optimization problem convex [3, 57, 59, 60].

An important issue of sparse representation modeling is the choice of the dictionary Φ. Much effort has been made in learning a redundant dictionary from a set of example image patches [13-15, 26-31, 55].

Given a set of training image patches S = [s_1, ..., s_N] ∈ R^{n×N}, the goal of dictionary learning (DL) is to jointly optimize the dictionary Φ and the representation coefficient matrix Λ = [α_1, ..., α_N] such that s_i ≈ Φα_i and ||α_i||_p ≤ T, where p = 0 or 1. This can be formulated as the following minimization problem:

    (Φ̂, Λ̂) = arg min_{Φ,Λ} ||S − ΦΛ||_F^2   s.t.  ||α_i||_p ≤ T, ∀i,    (3)

where ||·||_F is the Frobenius norm. The above minimization problem is non-convex even when p = 1. To make it tractable, approximation approaches, including MOD [56] and K-SVD [26], have been proposed to alternatively optimize Φ and Λ, leading to many state-of-the-art results in image processing [14-15, 31]. Various extensions and variants of the K-SVD algorithm [27, 29-31] have been proposed to learn a universal and over-complete dictionary. However, the image contents can vary significantly across images. One may argue that a well learned over-complete dictionary Φ can sparsely code all the possible image structures; nonetheless, for each given image patch, such a universal dictionary Φ is neither optimal nor efficient because many atoms in Φ are irrelevant to the given local patch. These irrelevant atoms not only reduce the computational efficiency of sparse coding but also reduce the representation accuracy.

Regularization has long been used in IR to incorporate image prior information. The widely used TV regularizations lack flexibility in characterizing the local image structures and often generate over-smoothed results. As a classic method, autoregressive (AR) modeling has been successfully used in image compression [33] and interpolation [34-35]. Recently the AR model was used for adaptive regularization in compressive image recovery [40]: x̂ = arg min_x Σ_i ||x_i − a^T χ_i||_2^2  s.t.  y = Ax, where χ_i is the vector containing the neighboring pixels of pixel x_i within the support of the AR model, and a is the AR parameter vector. In [40], the AR models are locally computed from an initially recovered image, and they perform much better than the TV regularization in reconstructing the edge structures. However, the AR models estimated from the initially recovered image may not be robust and tend to produce ghost visual artifacts. In this paper, we will propose a learning-based adaptive regularization, where the AR models are learned from high-quality training images, to increase the AR modeling accuracy.

In recent years non-local (NL) methods have led to promising results in various IR tasks, especially in image denoising [36, 15, 39]. The mathematical framework of NL means filtering was well established by Buades et al. [36].
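As an illustration of the dictionary learning objective in Eq. (3), here is a rough MOD-style alternation: OMP for the sparse coding step and a closed-form least-squares dictionary update. This is only a toy sketch (not the K-SVD variants cited above); the atom count, sparsity level T and iteration budget are arbitrary.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def mod_dictionary_learning(S, n_atoms=64, T=4, n_iter=10, seed=0):
    """Alternate between sparse coding (||a_i||_0 <= T, via OMP) and the MOD
    dictionary update Phi = S A^+ for the objective in Eq. (3)."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((S.shape[0], n_atoms))
    Phi /= np.linalg.norm(Phi, axis=0, keepdims=True)       # unit-norm atoms
    for _ in range(n_iter):
        A = orthogonal_mp(Phi, S, n_nonzero_coefs=T)        # coefficient matrix Lambda
        Phi = S @ np.linalg.pinv(A)                         # least-squares update
        Phi /= np.linalg.norm(Phi, axis=0, keepdims=True) + 1e-12
    return Phi

S = np.random.randn(49, 500)          # 500 vectorized 7x7 training patches
Phi = mod_dictionary_learning(S)
```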

The idea of NL methods is very simple: patches that have similar patterns can be spatially far from each other, and thus we can collect them from the whole image. This NL self-similarity prior was later employed in image deblurring [8, 20] and super-resolution [41]. In [15], the NL self-similarity prior was combined with the sparse representation modeling, where the similar image patches are simultaneously coded to improve the robustness of inverse reconstruction. In this work, we will also introduce an NL self-similarity regularization term into our proposed IR framework.

III. Sparse Representation with Adaptive Sparse Domain Selection

In this section we propose an adaptive sparse domain selection (ASDS) scheme, which learns a series of compact sub-dictionaries and adaptively assigns each local patch a sub-dictionary as its sparse domain. With ASDS, a weighted l1-norm sparse representation model will be proposed for IR tasks.

Suppose that {Φ_k}, k = 1, 2, ..., K, is a set of K orthonormal sub-dictionaries. Let x be an image vector, and x_i = R_i x, i = 1, 2, ..., N, be the i-th patch (of size √n × √n) of x, where R_i is a matrix extracting patch x_i from x. For patch x_i, suppose that a sub-dictionary Φ_{k_i} is selected for it. Then x_i can be approximated as x̂_i = Φ_{k_i} α_i via sparse coding. The whole image x can be reconstructed by averaging all the reconstructed patches x̂_i, which can be mathematically written as [22]

    x̂ = (Σ_{i=1}^{N} R_i^T R_i)^{-1} Σ_{i=1}^{N} R_i^T Φ_{k_i} α_i.    (4)

In Eq. (4), the matrix to be inverted is a diagonal matrix, and hence the calculation of Eq. (4) can be done in a pixel-by-pixel manner [22]. Obviously, the image patches can be overlapped to better suppress noise [22, 15] and block artifacts. For the convenience of expression, we define the following operator "∘":

    x̂ = Φ ∘ α ≜ (Σ_{i=1}^{N} R_i^T R_i)^{-1} Σ_{i=1}^{N} R_i^T Φ_{k_i} α_i,    (5)

where Φ is the concatenation of all sub-dictionaries {Φ_k} and α is the concatenation of all α_i.

Let y = DHx + υ be the observed degraded image; our goal is to recover the original image x from y. With ASDS and the definition in Eq. (5), the IR problem can be formulated as follows:

    α̂ = arg min_α {||y − DH Φ ∘ α||_2^2 + λ||α||_1}.    (6)

Clearly, one key procedure in the proposed ASDS scheme is the determination of Φ_{k_i} for each local patch.
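The reconstruction operator of Eqs. (4)-(5) amounts to putting every coded patch back where it came from and dividing by how many patches cover each pixel, since Σ_i R_i^T R_i is diagonal. A minimal sketch (dense loops, no dictionary involved) follows; patch size and stride are placeholders.

```python
import numpy as np

def extract_patches(img, patch=7, step=1):
    """Return all overlapping patches R_i x as columns, plus their positions."""
    H, W = img.shape
    cols, coords = [], []
    for r in range(0, H - patch + 1, step):
        for c in range(0, W - patch + 1, step):
            cols.append(img[r:r + patch, c:c + patch].ravel())
            coords.append((r, c))
    return np.stack(cols, axis=1), coords

def average_patches(patches, coords, shape, patch=7):
    """Eq. (4)/(5): x = (sum_i R_i^T R_i)^{-1} sum_i R_i^T x_i, i.e. accumulate
    every reconstructed patch and divide by the per-pixel overlap count."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for k, (r, c) in enumerate(coords):
        acc[r:r + patch, c:c + patch] += patches[:, k].reshape(patch, patch)
        cnt[r:r + patch, c:c + patch] += 1.0
    return acc / np.maximum(cnt, 1e-12)

img = np.random.rand(64, 64)
P, coords = extract_patches(img)
recon = average_patches(P, coords, img.shape)   # recovers img exactly (no coding applied)
```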

To facilitate sparsity-based IR, we propose to learn the sub-dictionaries {Φ_k} offline, and to select online from {Φ_k} the best fitted sub-dictionary for each patch x_i.

A. Learning the sub-dictionaries

In order to learn a series of sub-dictionaries that code the various local image structures, we need to first construct a dataset of local image patches for training. To this end, we collected a set of high-quality natural images, and cropped from them a rich amount of image patches of size √n × √n. A cropped image patch, denoted by s_i, will be involved in DL if its intensity variance Var(s_i) is greater than a threshold Δ, i.e., Var(s_i) > Δ. This patch selection criterion excludes the smooth patches from training and guarantees that only the meaningful patches with a certain amount of edge structures are involved in DL.

Suppose that M image patches S = [s_1, s_2, ..., s_M] are selected. We aim to learn K compact sub-dictionaries {Φ_k} from S so that for each given local image patch, the most suitable sub-dictionary can be selected. To this end, we cluster the dataset S into K clusters, and learn a sub-dictionary from each of the K clusters. Apparently, the K clusters are expected to represent the K distinctive patterns in S. To generate perceptually meaningful clusters, we perform the clustering in a feature space. Among the hundreds of thousands of patches cropped from the training images, many patches are approximately rotated versions of others. Hence we do not need to explicitly make the training dataset invariant to rotation because it is naturally (nearly) rotation invariant. Considering the fact that the human visual system is sensitive to image edges, which convey most of the semantic information of an image, we use the high-pass filtering output of each patch as the feature for clustering. This allows us to focus on the edges and structures of image patches, and helps to increase the accuracy of clustering. High-pass filtering is often used in low-level statistical learning tasks to enhance the meaningful features [50].

Denote by S_h = [s_1^h, s_2^h, ..., s_M^h] the high-pass filtered dataset of S. We adopt the K-means algorithm to partition S_h into K clusters {C_1, C_2, ..., C_K} and denote by μ_k the centroid of cluster C_k. Once S_h is partitioned, the dataset S can then be clustered into K subsets S_k, k = 1, 2, ..., K, where S_k is a matrix of dimension n × m_k and m_k denotes the number of samples in S_k. Now the remaining problem is how to learn a sub-dictionary Φ_k from the cluster S_k such that all the elements in S_k can be faithfully represented by Φ_k.
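A sketch of this training-set construction: drop low-variance patches, compute a simple high-pass feature for each survivor, and run K-means on the features. The Gaussian-based high-pass filter and scikit-learn's K-means are my own stand-ins; the variance threshold echoes the value reported in Section VI-A, and the cluster count here is a toy value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

def build_clusters(patches, var_thresh=16.0, n_clusters=20, seed=0):
    """Keep patches with Var(s_i) > threshold, cluster their high-pass features
    with K-means, and return the kept patches, labels and centroids mu_k."""
    kept = [p for p in patches if p.var() > var_thresh]
    feats = np.stack([(p - gaussian_filter(p, sigma=1.0)).ravel() for p in kept])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(feats)
    S = np.stack([p.ravel() for p in kept], axis=1)   # dataset S, one patch per column
    return S, km.labels_, km.cluster_centers_

patches = [255.0 * np.random.rand(7, 7) for _ in range(5000)]
S, labels, centroids = build_clusters(patches)
S_k = S[:, labels == 0]                               # sub-dataset of cluster k = 0
```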

Meanwhile, we hope that the representation of S_k over Φ_k is as sparse as possible. The design of Φ_k can be intuitively formulated by the following objective function:

    (Φ̂_k, Λ̂_k) = arg min_{Φ_k, Λ_k} {||S_k − Φ_k Λ_k||_F^2 + λ||Λ_k||_1},    (7)

where Λ_k is the representation coefficient matrix of S_k over Φ_k. Eq. (7) is a joint optimization problem of Φ_k and Λ_k, and it can be solved by alternatively optimizing Φ_k and Λ_k, as in the K-SVD algorithm [26]. However, we do not directly use Eq. (7) to learn the sub-dictionary Φ_k, based on the following considerations. First, the l2-l1 joint minimization in Eq. (7) requires much computational cost. Second, and more importantly, by using the objective function in Eq. (7) we often assume that the dictionary Φ_k is over-complete. Nonetheless, here S_k is a sub-dataset obtained after K-means clustering, which implies that not only is the number of elements in S_k limited, but these elements also tend to have similar patterns. Therefore, it is not necessary to learn an over-complete dictionary Φ_k from S_k. In addition, a compact dictionary will greatly decrease the computational cost of sparse coding a given image patch.

With the above considerations, we propose to learn a compact dictionary while trying to approximate Eq. (7). Principal component analysis (PCA) is a good solution to this end. PCA is a classical signal de-correlation and dimensionality reduction technique that is widely used in pattern recognition and statistical signal processing [37]. In [38-39], PCA has been successfully used in spatially adaptive image denoising by computing the local PCA transform of each image patch. In this paper we apply PCA to each sub-dataset S_k to compute the principal components, from which the dictionary Φ_k is constructed.

Denote by Ω_k the co-variance matrix of dataset S_k. By applying PCA to Ω_k, an orthogonal transformation matrix P_k can be obtained. If we set P_k as the dictionary and let Z_k = P_k^T S_k, we will then have ||S_k − P_k Z_k||_F^2 = ||S_k − P_k P_k^T S_k||_F^2 = 0. In other words, the approximation term in Eq. (7) will be exactly zero, yet the corresponding sparsity regularization term ||Z_k||_1 will have a certain amount because all the representation coefficients in Z_k are preserved. To make a better balance between the l1-norm regularization term and the l2-norm approximation term in Eq. (7), we only extract the first r most important eigenvectors in P_k to form a dictionary Φ_r, i.e., Φ_r = [p_1, p_2, ..., p_r]. Let Λ_r = Φ_r^T S_k. Clearly, since not all the eigenvectors are used to form Φ_r, the reconstruction error ||S_k − Φ_r Λ_r||_F^2 in Eq. (7) will increase with the decrease of r, while the term ||Λ_r||_1 will decrease.
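A sketch of the PCA sub-dictionary construction for one cluster, choosing the number of retained eigenvectors r by directly evaluating the approximation/sparsity trade-off just described (this is Eq. (8) in the next paragraph); the value of λ and the mean-centering step are my own assumptions.

```python
import numpy as np

def pca_subdictionary(S_k, lam=0.1):
    """Eigen-decompose the covariance of cluster S_k (n x m_k) and keep the
    first r principal components, with r minimizing
    ||S_k - Phi_r Lambda_r||_F^2 + lam * ||Lambda_r||_1."""
    Sc = S_k - S_k.mean(axis=1, keepdims=True)            # centering (assumption)
    eigval, P = np.linalg.eigh(Sc @ Sc.T / S_k.shape[1])
    P = P[:, ::-1]                                        # descending eigenvalue order
    best_r, best_cost = 1, np.inf
    for r in range(1, S_k.shape[0] + 1):
        Lam_r = P[:, :r].T @ Sc
        cost = np.sum((Sc - P[:, :r] @ Lam_r) ** 2) + lam * np.abs(Lam_r).sum()
        if cost < best_cost:
            best_cost, best_r = cost, r
    return P[:, :best_r]                                  # compact sub-dictionary Phi_k

S_k = np.random.randn(49, 300)      # one cluster of vectorized 7x7 patches
Phi_k = pca_subdictionary(S_k)
```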

Therefore, the optimal value of r, denoted by r_o, can be determined by

    r_o = arg min_r {||S_k − Φ_r Λ_r||_F^2 + λ||Λ_r||_1}.    (8)

Finally, the sub-dictionary learned from sub-dataset S_k is Φ_k = [p_1, p_2, ..., p_{r_o}]. Applying the above procedure to all the K sub-datasets S_k, we obtain K sub-dictionaries Φ_k, which will be used in the adaptive sparse domain selection of each given image patch. In Fig. 1, we show some example sub-dictionaries learned from a training dataset. The left column shows the centroids of some sub-datasets after K-means clustering, and the right eight columns show the first eight atoms in the sub-dictionaries learned from the corresponding sub-datasets.

Fig. 1. Examples of learned sub-dictionaries. The left column shows the centroids of some sub-datasets after K-means clustering, and the right eight columns show the first eight atoms of the sub-dictionaries learned from the corresponding sub-datasets.

B. Adaptive selection of the sub-dictionary

In the previous subsection, we have learned a dictionary Φ_k for each subset S_k. Meanwhile, we have computed the centroid μ_k of each cluster C_k associated with S_k. Therefore, we have K pairs {Φ_k, μ_k}, with which the ASDS of each given image patch can be accomplished.

In the proposed sparsity-based IR scheme, we adaptively assign a sub-dictionary to each local patch of x, spanning the adaptive sparse domain. Since x is unknown beforehand, we need an initial estimate of it. The initial estimate of x can be obtained by taking the wavelet bases as the dictionary and solving Eq. (6) with the iterated shrinkage algorithm in [10]. Denote by x̂ the estimate of x, and denote by x̂_i a local patch of x̂. Recall that we have the centroid μ_k of each cluster available, and hence we can select the best fitted sub-dictionary for x̂_i by comparing its high-pass filtered version, denoted by x̂_i^h, to the centroids μ_k.

For example, we can select the dictionary for x̂_i based on the minimum distance between x̂_i^h and μ_k, i.e.,

    k_i = arg min_k ||x̂_i^h − μ_k||_2.    (9)

However, directly calculating the distance between x̂_i^h and μ_k may not be robust enough because the initial estimate x̂ can be noisy. Here we propose to determine the sub-dictionary in the subspace of μ_k. Let U = [μ_1, μ_2, ..., μ_K] be the matrix containing all the centroids. By applying SVD to the co-variance matrix of U, we can obtain the PCA transformation matrix of U. Let Φ_c be the projection matrix composed of the first several most significant eigenvectors. We compute the distance between x̂_i^h and μ_k in the subspace spanned by Φ_c:

    k_i = arg min_k ||Φ_c x̂_i^h − Φ_c μ_k||_2.    (10)

Compared with Eq. (9), Eq. (10) increases the robustness of the adaptive dictionary selection. By using Eq. (10), the k_i-th sub-dictionary Φ_{k_i} will be selected and assigned to patch x̂_i. Then we can update the estimate of x by minimizing Eq. (6) and letting x̂ = Φ ∘ α̂. With the updated estimate x̂, the ASDS of x can be consequently updated. Such a process is iterated until the estimate x̂ converges.

C. Adaptively reweighted sparsity regularization

In Eq. (6), the parameter λ is a constant weighting the l1-norm sparsity regularization term ||α||_1. In [59], Candes et al. showed that reweighting the l1-norm sparsity can more closely resemble the l0-norm sparsity than using a constant weight, and consequently improve the reconstruction of sparse signals. In this sub-section, we propose a new method to adaptively estimate the image local sparsity, and then reweight the l1-norm sparsity in the ASDS scheme. The reweighted l1-norm sparsity regularized minimization with ASDS can be formulated as follows:

    α̂ = arg min_α {||y − DH Φ ∘ α||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}|},    (11)

where α_{i,j} is the coefficient associated with the j-th atom of Φ_{k_i}, and λ_{i,j} is the weight assigned to α_{i,j}.
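A sketch of the selection rule of Eq. (10): build the projection matrix Φ_c from the PCA of the centroid set and pick the nearest centroid in that subspace. The number of retained directions and the centering of the centroid matrix are assumptions; all inputs here are synthetic.

```python
import numpy as np

def select_subdictionary(patch_hp, centroids, n_proj=8):
    """Return k_i = argmin_k ||Phi_c x_hp - Phi_c mu_k||_2, where Phi_c holds
    the leading PCA directions of the centroid set (Eq. (10))."""
    U = centroids - centroids.mean(axis=0, keepdims=True)   # K x n, centered (assumption)
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    Phi_c = Vt[:n_proj]                                      # projection matrix
    d = Phi_c @ (centroids - patch_hp).T                     # differences in the subspace
    return int(np.argmin(np.linalg.norm(d, axis=0)))

centroids = np.random.randn(200, 49)     # K centroids of high-pass 7x7 features
patch_hp = np.random.randn(49)           # high-pass filtered patch x_i^h
k_i = select_subdictionary(patch_hp, centroids)
```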

In [59], λ_{i,j} is empirically computed as λ_{i,j} = 1/(|α̂_{i,j}| + ε), where α̂_{i,j} is the estimate of α_{i,j} and ε is a small constant. Here, we propose a more robust method for computing λ_{i,j} by formulating the sparsity estimation as a maximum a posteriori (MAP) estimation problem. Under the Bayesian framework, with the observation y the MAP estimate of α is given by

    α̂ = arg max_α {log P(α|y)} = arg min_α {−log P(y|α) − log P(α)}.    (12)

By assuming that y is contaminated with additive Gaussian white noise of standard deviation σ_n, we have:

    P(y|α) = (1/(√(2π) σ_n)) exp(−(1/(2σ_n^2)) ||y − DH Φ ∘ α||_2^2).    (13)

The prior distribution P(α) is often characterized by an i.i.d. zero-mean Laplacian probability model:

    P(α) = Π_{i=1}^{N} Π_{j=1}^{n} (1/(√2 σ_{i,j})) exp(−√2 |α_{i,j}| / σ_{i,j}),    (14)

where σ_{i,j} is the standard deviation of α_{i,j}. By plugging P(y|α) and P(α) into Eq. (12), we can readily derive the desired weight in Eq. (11) as λ_{i,j} = 2√2 σ_n^2 / σ_{i,j}. For numerical stability, we compute the weights by

    λ_{i,j} = 2√2 σ_n^2 / (σ̂_{i,j} + ε),    (15)

where σ̂_{i,j} is an estimate of σ_{i,j} and ε is a small constant.

Now let us discuss how to estimate σ_{i,j}. Denote by x̂_i the estimate of x_i, and by x̂_i^l, l = 1, 2, ..., L, the non-local patches similar to x̂_i. (The determination of the non-local similar patches to x̂_i will be described in Section IV-C.) The representation coefficients of these similar patches over the selected sub-dictionary Φ_{k_i} are α̂_i^l = Φ_{k_i}^T x̂_i^l. Then we can estimate σ_{i,j} by calculating the standard deviation of each element α̂_{i,j}^l over the α̂_i^l. Compared with the reweighting method in [59], the proposed adaptive reweighting method is more robust because it exploits the image non-local redundancy information. Based on our experimental experience, it leads to about 0.2 dB improvement on average over the reweighting method in [59] for deblurring and super-resolution under the proposed ASDS framework. The detailed algorithm to solve the reweighted l1-norm sparsity regularized minimization in Eq. (11) will be presented in Section V.
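A sketch of the adaptive reweighting: code the L non-locally similar patches with the selected sub-dictionary, take the per-coefficient standard deviation as σ̂_{i,j}, and form the weights of Eq. (15). The 2√2 factor follows the Laplacian-prior derivation above as reconstructed here; all inputs in the usage example are synthetic stand-ins.

```python
import numpy as np

def adaptive_weights(similar_patches, Phi_k, sigma_n, eps=1e-6):
    """Estimate sigma_{i,j} as the std of each coefficient over the similar
    patches, then lambda_{i,j} = 2*sqrt(2)*sigma_n**2 / (sigma_hat_{i,j} + eps)."""
    coeffs = Phi_k.T @ similar_patches          # one column of coefficients per patch
    sigma_hat = coeffs.std(axis=1)
    return 2.0 * np.sqrt(2.0) * sigma_n ** 2 / (sigma_hat + eps)

Phi_k = np.linalg.qr(np.random.randn(49, 49))[0]   # stand-in orthonormal sub-dictionary
similar = np.random.randn(49, 10)                  # L = 10 similar patches as columns
lam = adaptive_weights(similar, Phi_k, sigma_n=np.sqrt(2))
```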

IV. Spatially Adaptive Regularization

In Section III, we proposed to adaptively select a sub-dictionary to code a given image patch. The proposed ASDS-based IR method can be further improved by introducing two types of adaptive regularization (AReg) terms. A local area in a natural image can be viewed as a stationary process, which can be well modeled by autoregressive (AR) models. Here, we propose to learn a set of AR models from the clustered high-quality training image patches, and to adaptively select one AR model to regularize the input image patch. Besides the AR models, which exploit the image local correlation, we propose to use the non-local similarity constraint as a complementary AReg term to the local AR models. Given the fact that there are often many repetitive image structures in natural images, the image non-local redundancies can be very helpful in image enhancement.

A. Training the AR models

Recall that in Section III, we partitioned the whole training dataset into K sub-datasets S_k. For each S_k an AR model can be trained using all the sample patches inside it. Here we let the support of the AR model be a square window, and the AR model aims to predict the central pixel of the window from the neighboring pixels. Considering that determining the best order of the AR model is not trivial, and that a high-order AR model may cause data over-fitting, a 3×3 window (i.e., an AR model of order 8) is used in our experiments. The vector of AR model parameters, denoted by a_k, of the k-th sub-dataset S_k can be easily computed by solving the following least squares problem:

    a_k = arg min_a Σ_{s_i ∈ S_k} (s_i − a^T q_i)^2,    (16)

where s_i is the central pixel of image patch s_i and q_i is the vector consisting of the neighboring pixels of s_i within the support of the AR model. By applying the AR model training process to each sub-dataset, we obtain a set of AR models {a_1, a_2, ..., a_K} that will be used for adaptive regularization.

B. Adaptive selection of the AR model for regularization

The adaptive selection of the AR model for each patch x_i is the same as the selection of the sub-dictionary for x_i described in Section III-B.
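A sketch of the AR training step in Eq. (16) for one sub-dataset, assuming the 3×3 patches are supplied as small arrays; the center pixel is regressed on its eight neighbours by ordinary least squares.

```python
import numpy as np

def train_ar_model(patches3x3):
    """Eq. (16): fit the order-8 AR parameter vector a_k that predicts the
    central pixel of each 3x3 patch from its 8 neighbours."""
    Q = np.stack([np.delete(p.ravel(), 4) for p in patches3x3])  # neighbour vectors q_i
    s = np.array([p[1, 1] for p in patches3x3])                  # central pixels s_i
    a_k, *_ = np.linalg.lstsq(Q, s, rcond=None)
    return a_k

patches = [np.random.rand(3, 3) for _ in range(1000)]   # samples of one cluster S_k
a_k = train_ar_model(patches)                           # 8 AR parameters
```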

13 x ˆ h. Let h th k arg mn ˆ = Φcx Φμ c k, and then the k AR model ak wll be assgned to patch x. Denote by x k the central pxel of patch x, and by χ the vector contanng the neghborng pxels of x wthn patch x. We can expect that the predcton error of x usng ak and χ should be small,.e., T k x a χ should be mnmzed. By ncorporatng ths constrant nto the ASDS based sparse representaton model n Eq. (11), we have a lfted objectve functon as follows: αˆ = arg mn y DHΦ α + + a χ α N n T λ, jα, j γ x k = 1 j= 1 x x, (17) where γ s a constant balancng the contrbuton of the AR regularzaton term. For the convenence of expresson, we wrte the thrd term x x x T k a χ as ( I-Ax ), where I s the dentty matrx and a, f xj s an element of χ, a ak A (, j) =. 0, otherwse Then, Eq. (17) can be rewrtten as αˆ = arg mn y DHΦ α + + ( I A) x α N n λ, jα, j γ = 1 j= 1. (18) C. Adaptve regularzaton by non-local smlarty The AR model based AReg explots the local statstcs n each mage patch. On the other hand, there are often many repettve patterns throughout a natural mage. Such non-local redundancy s very helpful to mprove the qualty of reconstructed mages. As a complementary AReg term to AR models, we further ntroduce a non-local smlarty regularzaton term nto the sparsty-based IR framework. For each local patch x, we search for the smlar patches to t n the whole mage x (n practce, n a large enough area around x ). A patch l x s selected as a smlar patch to x f l e = xˆ l x ˆ t, where t s a preset threshold, and x ˆ and x ˆ l are the current estmates of x and l x, respectvely. Or we can select the patch x ˆ l f t s wthn the frst L (L=10 n our experments) closest patches to x ˆ. Let x be the central pxel of patch x, and l x be the central pxel of patch l x. Then we can use the weghted average of l x,.e., L l l bx l= 1, to predct x, and the weght l b assgned to l l l x s set as b = exp( e / h) / c, where h s a 13

Considering that there is much non-local redundancy in natural images, we expect that the prediction error x_i − Σ_{l=1}^{L} b_i^l x_i^l should be small. Let b_i be the column vector containing all the weights b_i^l and β_i be the column vector containing all the x_i^l. By incorporating the non-local similarity regularization term into the ASDS based sparse representation in Eq. (11), we have:

    α̂ = arg min_α {||y − DH Φ ∘ α||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}| + η Σ_i (x_i − b_i^T β_i)^2},    (19)

where η is a constant balancing the contribution of the non-local regularization. Eq. (19) can be rewritten as

    α̂ = arg min_α {||y − DH Φ ∘ α||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}| + η ||(I − B) Φ ∘ α||_2^2},    (20)

where I is the identity matrix and B(i, l) = b_i^l if x_i^l is an element of β_i, and 0 otherwise.

V. Summary of the Algorithm

By incorporating both the local AR regularization and the non-local similarity regularization into the ASDS based sparse representation in Eq. (11), we have the following ASDS-AReg based sparse representation to solve the IR problem:

    α̂ = arg min_α {||y − DH Φ ∘ α||_2^2 + γ ||(I − A) Φ ∘ α||_2^2 + η ||(I − B) Φ ∘ α||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}|}.    (21)

In Eq. (21), the first l2-norm term is the fidelity term, guaranteeing that the solution x̂ = Φ ∘ α̂ can well fit the observation y after degradation by the operators H and D; the second l2-norm term is the local AR model based adaptive regularization term, requiring that the estimated image be locally stationary; the third l2-norm term is the non-local similarity regularization term, which uses the non-local redundancy to enhance each local patch; and the last weighted l1-norm term is the sparsity penalty term, requiring that the estimated image be sparse in the adaptively selected domain.

Eq. (21) can be rewritten as

    α̂ = arg min_α { || [y; 0; 0] − [DH; √γ(I − A); √η(I − B)] Φ ∘ α ||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}| }.    (22)

By letting

    ỹ = [y; 0; 0],   K = [DH; √γ(I − A); √η(I − B)],    (23)

Eq. (22) can be rewritten as

    α̂ = arg min_α { ||ỹ − K Φ ∘ α||_2^2 + Σ_{i=1}^{N} Σ_{j=1}^{n} λ_{i,j} |α_{i,j}| }.    (24)

This is a reweighted l1-minimization problem, which can be effectively solved by the iterative shrinkage algorithm [10]. We outline the iterative shrinkage algorithm for solving Eq. (24) in Algorithm 1.

Algorithm 1 for solving Eq. (24)
1. Initialization:
   (a) Taking the wavelet domain as the sparse domain, compute an initial estimate of x, denoted by x̂, using the iterated wavelet shrinkage algorithm [10];
   (b) With the initial estimate x̂, select the sub-dictionary Φ_{k_i} and the AR model a_{k_i} using Eq. (10), and calculate the non-local weights b_i for each local patch x̂_i;
   (c) Initialize A and B with the selected AR models and the non-local weights;
   (d) Preset γ, η, P, e and the maximal iteration number, denoted by Max_Iter;
   (e) Set k = 0.
2. Iterate on k until ||x̂^(k) − x̂^(k+1)||_2^2 / N ≤ e or k ≥ Max_Iter is satisfied:
   (a) x̂^(k+1/2) = x̂^(k) + K^T (ỹ − K x̂^(k)) = x̂^(k) + ((DH)^T y − U x̂^(k) − V x̂^(k)), where U = (DH)^T DH and V = γ(I − A)^T(I − A) + η(I − B)^T(I − B);
   (b) Compute α^(k+1/2) = [Φ_{k_1}^T R_1 x̂^(k+1/2), ..., Φ_{k_N}^T R_N x̂^(k+1/2)], where N is the total number of image patches;
   (c) α_{i,j}^(k+1) = soft(α_{i,j}^(k+1/2), τ_{i,j}), where soft(·, τ_{i,j}) is a soft thresholding function with threshold τ_{i,j};
   (d) Compute x̂^(k+1) = Φ ∘ α^(k+1) using Eq. (5), which can be calculated by first reconstructing each image patch as x̂_i^(k+1) = Φ_{k_i} α_i^(k+1) and then averaging all the reconstructed image patches;
   (e) If mod(k, P) = 0, update the adaptive sparse domain of x and the matrices A and B using the improved estimate x̂^(k+1).

In Algorithm 1, e is a pre-specified scalar controlling the convergence of the iterative process, and Max_Iter is the allowed maximum number of iterations. The thresholds τ_{i,j} are locally computed as τ_{i,j} = λ_{i,j}/(2r) [10], where the λ_{i,j} are calculated by Eq. (15) and r is chosen such that r > ||(KΦ)^T KΦ||_2.
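A stripped-down sketch of Algorithm 1 for the deblurring case (D = I): a gradient step on the fidelity term followed by patch-wise soft thresholding in a fixed orthonormal DCT domain. The adaptive sub-dictionaries, the AR and non-local regularizers (the V term) and the adaptive thresholds are all omitted, so this only shows the iterative-shrinkage skeleton; the step size, threshold and patch stride are placeholders.

```python
import numpy as np
from scipy.fft import dct
from scipy.ndimage import gaussian_filter

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_deblur(y, blur_sigma, lam=0.05, r=4.7, n_iter=50, patch=7, stride=3):
    """Iterative shrinkage: x <- x + H^T(y - Hx)/r, then code overlapping
    patches with soft thresholding and average them back (Eq. (5))."""
    H = lambda img: gaussian_filter(img, blur_sigma)   # H ~ H^T (symmetric kernel,
                                                       # boundary handling ignored)
    Phi = dct(np.eye(patch * patch), axis=0, norm='ortho')  # stand-in dictionary
    x = y.copy()
    for _ in range(n_iter):
        x = x + H(y - H(x)) / r                        # step 2(a), without the V term
        acc = np.zeros_like(x); cnt = np.zeros_like(x)
        for i in range(0, x.shape[0] - patch + 1, stride):       # steps 2(b)-(d)
            for j in range(0, x.shape[1] - patch + 1, stride):
                a = soft(Phi.T @ x[i:i + patch, j:j + patch].ravel(), lam / (2 * r))
                acc[i:i + patch, j:j + patch] += (Phi @ a).reshape(patch, patch)
                cnt[i:i + patch, j:j + patch] += 1.0
        x = np.where(cnt > 0, acc / np.maximum(cnt, 1), x)
    return x

x0 = np.random.rand(66, 66)
y = gaussian_filter(x0, 3.0) + (np.sqrt(2) / 255.0) * np.random.randn(66, 66)
x_hat = ist_deblur(y, blur_sigma=3.0)
```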

Since the dictionary Φ_{k_i} varies across the image, the optimal determination of r for each local patch is difficult. Here, we empirically set r = 4.7 for all the patches. P is a preset integer, and we only update the sub-dictionaries Φ_{k_i}, the AR models a_{k_i} and the weights b_i every P iterations to save computational cost. With the updated a_{k_i} and b_i, A and B can be updated, and then the matrix V can be updated.

VI. Experimental Results

A. Training datasets

Although image contents can vary a lot from image to image, it has been found that the micro-structures of images can be represented by a small number of structural primitives (e.g., edges, line segments and other elementary features), and these primitives are qualitatively similar in form to simple cell receptive fields [61-63]. The human visual system employs a sparse coding strategy to represent images, i.e., coding a natural image using a small number of basis functions chosen out of an over-complete code set. Therefore, using the many patches extracted from several training images which are rich in edges and textures, we are able to train dictionaries which can represent natural images well.

To illustrate the robustness of the proposed method to the training dataset, we use two different sets of training images in the experiments, each set having 5 high-quality images as shown in Fig. 2. We can see that these two sets of training images are very different in content. We use Var(s_i) > Δ with Δ = 16 to exclude the smooth image patches, and a total amount of 77,615 patches of size 7×7 are randomly cropped from each set of training images. (Please refer to Section VI-E for the discussion of patch size selection.)

As a clustering-based method, an important issue is the selection of the number of classes. However, the optimal selection of this number is a non-trivial task, which is subject to the bias and variance tradeoff. If the number of classes is too small, the boundaries between classes will be smoothed out and thus the distinctiveness of the learned sub-dictionaries and AR models is decreased. On the other hand, a too large number of classes will make the learned sub-dictionaries and AR models less representative and less reliable. Based on the above considerations and our experimental experience, we propose the following simple method to find a good number of classes: we first partition the training dataset into 200 clusters, and merge those classes that contain very few image patches (i.e., fewer than 300 patches) into their nearest neighboring classes.

More discussions and experiments on the selection of the number of classes will be made in Section VI-E.

Fig. 2. The two sets of high-quality images used for training the sub-dictionaries and AR models. The images in the first row form training dataset 1 and those in the second row form training dataset 2.

B. Experimental settings

In the deblurring experiments, two types of blur kernels, a Gaussian kernel of standard deviation 3 and a 9×9 uniform kernel, were used to simulate blurred images. Additive Gaussian white noise with standard deviations √2 and 2 was then added to the blurred images, respectively. We compare the proposed methods with five recently proposed image deblurring methods: the iterated wavelet shrinkage method [10], the constrained TV deblurring method [42], the spatially weighted TV deblurring method [45], the l0-norm sparsity based deblurring method [46], and the BM3D deblurring method [58]. In the proposed ASDS-AReg Algorithm 1, we empirically set γ and η, and τ_{i,j} = λ_{i,j}/4.7, where λ_{i,j} is adaptively computed by Eq. (15).

In the super-resolution experiments, the degraded LR images were generated by first applying a truncated 7×7 Gaussian kernel of standard deviation 1.6 to the original image and then down-sampling by a factor of 3. We compare the proposed method with four state-of-the-art methods: the iterated wavelet shrinkage method [10], the TV-regularization based method [47], the Softcuts method [43], and the sparse representation based method [25]. Since the method in [25] does not handle the blurring of LR images, for fair comparison we used the iterative back-projection method [16] to deblur the HR images produced by [25]. In the proposed ASDS-AReg based super-resolution, the parameters are set as follows. For the noiseless LR images, we empirically set γ = 0.0894, η = 0.2 and τ_{i,j} = 0.18/σ̂_{i,j}, where σ̂_{i,j} is the estimated standard deviation of α_{i,j}. (We thank the authors of [42-43], [45-46], [58] and [25] for providing their source codes, executable programs, or experimental results.)

For the noisy LR images, we empirically set γ = 0.88, η = 0.5 and τ_{i,j} = λ_{i,j}/16.6. In both the deblurring and super-resolution experiments, 7×7 patches (for the HR image) with 5-pixel-width overlap between adjacent patches were used in the proposed methods. For color images, all the test methods were applied to the luminance component only, because the human visual system is more sensitive to luminance changes, and the bi-cubic interpolator was applied to the chromatic components. Here we only report the PSNR and SSIM [44] results for the luminance component. To examine the proposed approach more comprehensively, we give three results of the proposed method: the results by using only ASDS (denoted by ASDS), by using ASDS plus AR regularization (denoted by ASDS-AR), and by using ASDS with both AR and non-local similarity regularization (denoted by ASDS-AR-NL). A website for this paper has been built, where all the experimental results and the Matlab source code of the proposed algorithm can be downloaded.

C. Experimental results on deblurring

Fig. 3. Comparison of deblurred images (uniform blur kernel, σ_n = √2) on Parrot by the proposed methods. Top row: Original, Degraded, ASDS-TD1 (PSNR=30.71dB, SSIM=0.896), ASDS-TD2 (PSNR=30.90dB, SSIM=0.8941). Bottom row: ASDS-AR-TD1 (PSNR=30.64dB, SSIM=0.890), ASDS-AR-TD2 (PSNR=30.79dB, SSIM=0.8933), ASDS-AR-NL-TD1 (PSNR=30.76dB, SSIM=0.891), ASDS-AR-NL-TD2 (PSNR=30.9dB, SSIM=0.8939).
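The PSNR/SSIM figures quoted in the captions and tables are computed on the luminance channel only, as stated above. A small sketch of that evaluation protocol using scikit-image follows; the data range of 255 for the Y channel is an assumption, and the images in the usage example are synthetic.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def luminance_psnr_ssim(ref_rgb, test_rgb):
    """Compare two RGB images on the Y (luminance) channel of YCbCr."""
    y_ref = rgb2ycbcr(ref_rgb)[..., 0]
    y_test = rgb2ycbcr(test_rgb)[..., 0]
    psnr = peak_signal_noise_ratio(y_ref, y_test, data_range=255.0)
    ssim = structural_similarity(y_ref, y_test, data_range=255.0)
    return psnr, ssim

ref = np.random.rand(64, 64, 3)                                   # stand-in images
test = np.clip(ref + 0.01 * np.random.randn(64, 64, 3), 0.0, 1.0)
psnr, ssim = luminance_psnr_ssim(ref, test)
```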

To verify the effectiveness of ASDS and the adaptive regularizations, and their robustness to the training datasets, we first present the deblurring results on the image Parrot by the proposed methods in Fig. 3. More PSNR and SSIM results can be found in Table 1. From Fig. 3 and Table 1 we can see that the proposed methods generate almost the same deblurring results with TD1 and TD2. We can also see that the ASDS method is effective in deblurring. By combining the adaptive regularization terms, the deblurring results can be further improved by eliminating the ringing artifacts around edges. Due to the page limit, we will only show the results by ASDS-AR-NL-TD2 in the following development.

The deblurring results by the competing methods are then compared in Figs. 4~6. One can see that there are many noise residuals and artifacts around edges in the deblurred images by the iterated wavelet shrinkage method [10]. The TV-based methods in [42] and [45] are effective in suppressing the noise; however, they produce over-smoothed results and eliminate many image details. The l0-norm sparsity based method of [46] is very effective in reconstructing smooth image areas; however, it fails to reconstruct fine image edges. The BM3D method [58] is very competitive in recovering the image structures. However, it tends to generate some ghost artifacts around the edges (e.g., the image Cameraman in Fig. 6). The proposed method leads to the best visual quality. It can not only remove the blurring effects and noise, but also reconstruct more and sharper image edges than the other methods. The excellent edge preservation owes to the adaptive sparse domain selection strategy and the adaptive regularizations.

The PSNR and SSIM results by the different methods are listed in Tables 1~4. For the experiments using the uniform blur kernel, the average PSNR improvements of ASDS-AR-NL-TD2 over the second best method (i.e., BM3D [58]) are 0.50 dB (when σ_n = √2) and 0.4 dB (when σ_n = 2), respectively. For the experiments using the Gaussian blur kernel, the PSNR gaps between all the competing methods become smaller, and the average PSNR improvements of ASDS-AR-NL-TD2 over the BM3D method are 0.15 dB (when σ_n = √2) and 0.18 dB (when σ_n = 2), respectively. We can also see that the proposed ASDS-AR-NL method achieves the highest SSIM index.

Fig. 4. Comparison of the deblurred images on Parrot by different methods (uniform blur kernel and σ_n = √2). Top row: Original, degraded, method [10] (PSNR=27.80dB, SSIM=0.865) and method [42] (PSNR=28.80dB, SSIM=0.8704). Bottom row: method [45] (PSNR=28.96dB, SSIM=0.87), method [46] (PSNR=29.04dB, SSIM=0.884), BM3D [58] (PSNR=30.2dB, SSIM=0.8906), and proposed (PSNR=30.9dB, SSIM=0.8936).

Fig. 5. Comparison of the deblurred images on Barbara by different methods (uniform blur kernel and σ_n = 2). Top row: Original, degraded, method [10] (PSNR=24.86dB, SSIM=0.6963) and method [42] (PSNR=25.1dB, SSIM=0.7031). Bottom row: method [45] (PSNR=25.34dB, SSIM=0.714), method [46] (PSNR=25.37dB, SSIM=0.748), BM3D [58] (PSNR=27.16dB, SSIM=0.7881) and proposed (PSNR=26.96dB, SSIM=0.797).

Fig. 6. Comparison of the deblurred images on Cameraman by different methods (uniform blur kernel and σ_n = 2). Top row: Original, degraded, method [10] (PSNR=24.80dB, SSIM=0.7837) and method [42] (PSNR=26.04dB, SSIM=0.777). Bottom row: method [45] (PSNR=26.53dB, SSIM=0.873), method [46] (PSNR=25.96dB, SSIM=0.8131), BM3D [58] (PSNR=26.53dB, SSIM=0.8136) and proposed (PSNR=27.5dB, SSIM=0.8408).

D. Experimental results on single image super-resolution

Fig. 7. The super-resolution results (scaling factor 3) on image Parrot by the proposed methods. Top row: Original, LR image, ASDS-TD1 (PSNR=29.47dB, SSIM=0.9031) and ASDS-TD2 (PSNR=29.51dB, SSIM=0.9034). Bottom row: ASDS-AR-TD1 (PSNR=29.61dB, SSIM=0.9036), ASDS-AR-TD2 (PSNR=29.63dB, SSIM=0.9038), ASDS-AR-NL-TD1 (PSNR=29.97dB, SSIM=0.9090) and ASDS-AR-NL-TD2 (PSNR=30.00dB, SSIM=0.9093).

Fig. 8. Reconstructed HR images (scaling factor 3) of Girl by different methods. Top row: LR image, method [10] (PSNR=32.93dB, SSIM=0.810) and method [47] (PSNR=31.1dB, SSIM=0.7878). Bottom row: method [43] (PSNR=31.94dB, SSIM=0.7704), method [25] (PSNR=32.51dB, SSIM=0.791) and proposed (PSNR=33.53dB, SSIM=0.84).

Fig. 9. Reconstructed HR images (scaling factor 3) of Parrot by different methods. Top row: LR image, method [10] (PSNR=28.78dB, SSIM=0.8845) and method [47] (PSNR=27.59dB, SSIM=0.8856). Bottom row: method [43] (PSNR=27.71dB, SSIM=0.868), method [25] (PSNR=27.98dB, SSIM=0.8665) and proposed (PSNR=30.00dB, SSIM=0.9093).

Fig. 10. Reconstructed HR images (scaling factor 3) of noisy Girl by different methods. Top row: LR image, method [10] (PSNR=30.37dB, SSIM=0.7044) and method [47] (PSNR=29.77dB, SSIM=0.758). Bottom row: method [43] (PSNR=31.40dB, SSIM=0.7480), method [25] (PSNR=30.70dB, SSIM=0.7088) and proposed (PSNR=31.80dB, SSIM=0.7590).

Fig. 11. Reconstructed HR images (scaling factor 3) of noisy Parrot by different methods. Top row: LR image, method [10] (PSNR=27.01dB, SSIM=0.7901) and method [47] (PSNR=26.77dB, SSIM=0.8084). Bottom row: method [43] (PSNR=27.4dB, SSIM=0.8458), method [25] (PSNR=26.8dB, SSIM=0.7769) and proposed (PSNR=28.7dB, SSIM=0.8668).

In this section we present experimental results on single image super-resolution. Again we first test the robustness of the proposed method to the training dataset. Fig. 7 shows the reconstructed HR Parrot images by the proposed methods. We can see that the proposed method with the two different training datasets produces almost the same HR images. It can also be observed that the ASDS scheme can reconstruct the image well, while there are still some ringing artifacts around the reconstructed edges. Such artifacts can be reduced by coupling ASDS with the AR model based regularization, and the image quality can be further improved by incorporating the non-local similarity regularization.

Next we compare the proposed methods with the state-of-the-art methods in [10, 43, 25, 47]. The visual comparisons are shown in Figs. 8~9. We see that the reconstructed HR images by method [10] have many jaggy and ringing artifacts. The TV-regularization based method [47] is effective in suppressing the ringing artifacts, but it generates piecewise constant block artifacts. The Softcuts method [43] produces very smooth edges and fine structures, making the reconstructed image look unnatural. By sparsely coding the LR image patches with the learned LR dictionary and recovering the HR image patches with the corresponding HR dictionary, the sparsity-based method in [25] is very competitive in terms of visual quality. However, it is difficult to learn a universal LR/HR dictionary pair that can represent various LR/HR structure pairs. It is observed that the reconstructed edges by [25] are relatively smooth and some fine image structures are not recovered. The proposed method generates the best visual quality. The reconstructed edges are much sharper than those of all the other four competing methods, and more fine image structures are recovered.

In practice the LR image is often noise corrupted, which makes super-resolution more challenging. Therefore it is necessary to test the robustness of the super-resolution methods to noise. We added Gaussian white noise (with standard deviation 5) to the LR images, and the reconstructed HR images are shown in Figs. 10~11. We see that the method in [10] is sensitive to noise and there are serious noise-caused artifacts around the edges. The TV-regularization based method [47] also generates many noise-caused artifacts in the neighborhood of edges. The Softcuts method [43] results in over-smoothed HR images. Since the sparse representation based method [25] is followed by a back-projection process to remove the blurring effect, it is sensitive to noise and its performance degrades much in the noisy case. In contrast, the proposed method shows good robustness to noise. Not only is the noise effectively suppressed, but the fine image edges are also well reconstructed. This is mainly because the noise can be more effectively removed and the edges can be better preserved in the adaptive sparse domain.

From Tables 5 and 6, we see that the average PSNR gains of ASDS-AR-NL-TD2 over the second best methods, [10] (for the noiseless case) and [43] (for the noisy case), are 1.13 dB and 0.77 dB, respectively. The average SSIM gains over the methods [10] and [43] are 0.02 and 0.01 for the noiseless and noisy cases, respectively.

E. Experimental results on a 1000-image dataset

Fig. 12. Some example images in the established 1000-image dataset.

To test the robustness of the proposed IR method more comprehensively, we performed extensive deblurring and super-resolution experiments on a large dataset that contains 1000 natural images of various contents. To establish this dataset, we randomly downloaded 822 high-quality natural images from the Flickr website and selected 178 high-quality natural images from the Berkeley Segmentation Database. A sub-image that is rich in edge and texture structures was cropped from each of these 1000 images to test our method. Fig. 12 shows some example images in this dataset.

For image deblurring, we compared the proposed method with the methods in [46] and [58], which perform the 2nd and the 3rd best in our experiments in Section VI-C. The average PSNR and SSIM values of the deblurred images by the test methods are shown in Table 7. To better illustrate the advantages of the proposed method, we also plot the distributions of its PSNR gains over the two competing methods in Fig. 13. From Table 7 and Fig. 13, we can see that the proposed method consistently outperforms the competing methods for the uniform blur kernel, and the average PSNR gain over BM3D [58] is up to 0.85 dB (when σ_n = √2).

Although the performance gaps between the different methods become much smaller for the non-truncated Gaussian blur kernel, it can still be observed that the proposed method mostly outperforms BM3D [58] and [46], and the average PSNR gain over BM3D [58] is up to 0.19 dB (when σ_n = 2).

For image super-resolution, we compared the proposed method with the two methods in [25] and [47]. The average PSNR and SSIM values by the test methods are listed in Table 8, and the distributions of the PSNR gain of our method over [25] and [47] are shown in Fig. 14. From Table 8 and Fig. 14, we can see that the proposed method performs consistently better than the competing methods.

Fig. 13. The PSNR gain distributions of the deblurring experiments. (a) Uniform blur kernel with σ_n = √2; (b) Uniform blur kernel with σ_n = 2; (c) Gaussian blur kernel with σ_n = √2; (d) Gaussian blur kernel with σ_n = 2.

Fig. 14. The PSNR gain distributions of the super-resolution experiments. (a) Noise level σ_n = 0; (b) Noise level σ_n = 5.

Fig. 15. Visual comparison of the deblurred images by the proposed method with different patch sizes. From left to right: patch size of 3×3, patch size of 5×5, and patch size of 7×7.

With this large dataset, we also tested the robustness of the proposed method to the number of classes used in learning the sub-dictionaries and AR models. Specifically, we trained the sub-dictionaries and AR models with different numbers of classes, i.e., 100, 200 and 400, and applied them to the established 1000-image dataset. Table 9 presents the average PSNR and SSIM values of the restored images. We can see that the three different numbers of classes lead to very similar image deblurring and super-resolution performance. This illustrates the robustness of the proposed method to the number of classes.

Another important issue of the proposed method is the size of the image patch. Clearly, the patch size cannot be too big; otherwise, the patches will no longer be micro-structures and hence cannot be represented by a small number of atoms. To evaluate the effect of the patch size on the IR results, we trained the sub-dictionaries and AR models with different patch sizes, i.e., 3×3, 5×5 and 7×7. Then we applied these sub-dictionaries and AR models to the 10 test images and the constructed 1000-image database. The experimental results of deblurring and super-resolution are presented in Tables 10~12, from which we can see that these different patch sizes lead to similar PSNR and SSIM results. However, it can be found that the smaller patch sizes (i.e., 3×3 and 5×5) tend to generate some artifacts in smooth regions, as shown in Fig. 15. Therefore, we adopt 7×7 as the image patch size in our implementation.

F. Discussions on the computational cost

In Algorithm 1, the matrices U and V are sparse matrices, and they can be pre-calculated after the initialization of the AR models and the non-local weights. Hence, Step 2(a) can be executed quickly. For image deblurring, the calculation of Ux̂^(k) can be implemented by FFT, which is faster than direct matrix calculation. Steps 2(b) and 2(d) require 2Nn² multiplications, where n is the number of pixels of each patch and N is the total number of image patches.


Parallelism for Nested Loops with Non-uniform and Flow Dependences

Parallelism for Nested Loops with Non-uniform and Flow Dependences Parallelsm for Nested Loops wth Non-unform and Flow Dependences Sam-Jn Jeong Dept. of Informaton & Communcaton Engneerng, Cheonan Unversty, 5, Anseo-dong, Cheonan, Chungnam, 330-80, Korea. seong@cheonan.ac.kr

More information

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics

NAG Fortran Library Chapter Introduction. G10 Smoothing in Statistics Introducton G10 NAG Fortran Lbrary Chapter Introducton G10 Smoothng n Statstcs Contents 1 Scope of the Chapter... 2 2 Background to the Problems... 2 2.1 Smoothng Methods... 2 2.2 Smoothng Splnes and Regresson

More information

High resolution 3D Tau-p transform by matching pursuit Weiping Cao* and Warren S. Ross, Shearwater GeoServices

High resolution 3D Tau-p transform by matching pursuit Weiping Cao* and Warren S. Ross, Shearwater GeoServices Hgh resoluton 3D Tau-p transform by matchng pursut Wepng Cao* and Warren S. Ross, Shearwater GeoServces Summary The 3D Tau-p transform s of vtal sgnfcance for processng sesmc data acqured wth modern wde

More information

Unsupervised Learning

Unsupervised Learning Pattern Recognton Lecture 8 Outlne Introducton Unsupervsed Learnng Parametrc VS Non-Parametrc Approach Mxture of Denstes Maxmum-Lkelhood Estmates Clusterng Prof. Danel Yeung School of Computer Scence and

More information

Classifying Acoustic Transient Signals Using Artificial Intelligence

Classifying Acoustic Transient Signals Using Artificial Intelligence Classfyng Acoustc Transent Sgnals Usng Artfcal Intellgence Steve Sutton, Unversty of North Carolna At Wlmngton (suttons@charter.net) Greg Huff, Unversty of North Carolna At Wlmngton (jgh7476@uncwl.edu)

More information

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION

MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION MULTISPECTRAL IMAGES CLASSIFICATION BASED ON KLT AND ATR AUTOMATIC TARGET RECOGNITION Paulo Quntlano 1 & Antono Santa-Rosa 1 Federal Polce Department, Brasla, Brazl. E-mals: quntlano.pqs@dpf.gov.br and

More information

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices

An Application of the Dulmage-Mendelsohn Decomposition to Sparse Null Space Bases of Full Row Rank Matrices Internatonal Mathematcal Forum, Vol 7, 2012, no 52, 2549-2554 An Applcaton of the Dulmage-Mendelsohn Decomposton to Sparse Null Space Bases of Full Row Rank Matrces Mostafa Khorramzadeh Department of Mathematcal

More information

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide

Lobachevsky State University of Nizhni Novgorod. Polyhedron. Quick Start Guide Lobachevsky State Unversty of Nzhn Novgorod Polyhedron Quck Start Gude Nzhn Novgorod 2016 Contents Specfcaton of Polyhedron software... 3 Theoretcal background... 4 1. Interface of Polyhedron... 6 1.1.

More information

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance

Tsinghua University at TAC 2009: Summarizing Multi-documents by Information Distance Tsnghua Unversty at TAC 2009: Summarzng Mult-documents by Informaton Dstance Chong Long, Mnle Huang, Xaoyan Zhu State Key Laboratory of Intellgent Technology and Systems, Tsnghua Natonal Laboratory for

More information

LECTURE : MANIFOLD LEARNING

LECTURE : MANIFOLD LEARNING LECTURE : MANIFOLD LEARNING Rta Osadchy Some sldes are due to L.Saul, V. C. Raykar, N. Verma Topcs PCA MDS IsoMap LLE EgenMaps Done! Dmensonalty Reducton Data representaton Inputs are real-valued vectors

More information

Learning a Class-Specific Dictionary for Facial Expression Recognition

Learning a Class-Specific Dictionary for Facial Expression Recognition BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16, No 4 Sofa 016 Prnt ISSN: 1311-970; Onlne ISSN: 1314-4081 DOI: 10.1515/cat-016-0067 Learnng a Class-Specfc Dctonary for

More information

Recognizing Faces. Outline

Recognizing Faces. Outline Recognzng Faces Drk Colbry Outlne Introducton and Motvaton Defnng a feature vector Prncpal Component Analyss Lnear Dscrmnate Analyss !"" #$""% http://www.nfotech.oulu.f/annual/2004 + &'()*) '+)* 2 ! &

More information

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization

Problem Definitions and Evaluation Criteria for Computational Expensive Optimization Problem efntons and Evaluaton Crtera for Computatonal Expensve Optmzaton B. Lu 1, Q. Chen and Q. Zhang 3, J. J. Lang 4, P. N. Suganthan, B. Y. Qu 6 1 epartment of Computng, Glyndwr Unversty, UK Faclty

More information

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016)

Parallel Numerics. 1 Preconditioning & Iterative Solvers (From 2016) Technsche Unverstät München WSe 6/7 Insttut für Informatk Prof. Dr. Thomas Huckle Dpl.-Math. Benjamn Uekermann Parallel Numercs Exercse : Prevous Exam Questons Precondtonng & Iteratve Solvers (From 6)

More information

S1 Note. Basis functions.

S1 Note. Basis functions. S1 Note. Bass functons. Contents Types of bass functons...1 The Fourer bass...2 B-splne bass...3 Power and type I error rates wth dfferent numbers of bass functons...4 Table S1. Smulaton results of type

More information

Hermite Splines in Lie Groups as Products of Geodesics

Hermite Splines in Lie Groups as Products of Geodesics Hermte Splnes n Le Groups as Products of Geodescs Ethan Eade Updated May 28, 2017 1 Introducton 1.1 Goal Ths document defnes a curve n the Le group G parametrzed by tme and by structural parameters n the

More information

Lecture 4: Principal components

Lecture 4: Principal components /3/6 Lecture 4: Prncpal components 3..6 Multvarate lnear regresson MLR s optmal for the estmaton data...but poor for handlng collnear data Covarance matrx s not nvertble (large condton number) Robustness

More information

Texture Enhanced Image Denoising via Gradient Histogram Preservation

Texture Enhanced Image Denoising via Gradient Histogram Preservation 203 IEEE Conference on Computer Vson and Pattern Recognton Texture Enhanced Image Denosng va Gradent Hstogram Preservaton Wangmeng Zuo,2 Le Zhang 2 Chunwe Song Davd Zhang 2 Harbn Insttute of Technology,

More information

A Robust Method for Estimating the Fundamental Matrix

A Robust Method for Estimating the Fundamental Matrix Proc. VIIth Dgtal Image Computng: Technques and Applcatons, Sun C., Talbot H., Ourseln S. and Adraansen T. (Eds.), 0- Dec. 003, Sydney A Robust Method for Estmatng the Fundamental Matrx C.L. Feng and Y.S.

More information

Image Deblurring Using Adaptive Sparse Domain Selection and Adaptive Regularization

Image Deblurring Using Adaptive Sparse Domain Selection and Adaptive Regularization Volume 3, No. 3, May-June 2012 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at www.ijarcs.info ISSN No. 0976-5697 Image Deblurring Using Adaptive Sparse

More information

PCA Based Gait Segmentation

PCA Based Gait Segmentation Honggu L, Cupng Sh & Xngguo L PCA Based Gat Segmentaton PCA Based Gat Segmentaton Honggu L, Cupng Sh, and Xngguo L 2 Electronc Department, Physcs College, Yangzhou Unversty, 225002 Yangzhou, Chna 2 Department

More information

SIGGRAPH Interactive Image Cutout. Interactive Graph Cut. Interactive Graph Cut. Interactive Graph Cut. Hard Constraints. Lazy Snapping.

SIGGRAPH Interactive Image Cutout. Interactive Graph Cut. Interactive Graph Cut. Interactive Graph Cut. Hard Constraints. Lazy Snapping. SIGGRAPH 004 Interactve Image Cutout Lazy Snappng Yn L Jan Sun Ch-Keung Tang Heung-Yeung Shum Mcrosoft Research Asa Hong Kong Unversty Separate an object from ts background Compose the object on another

More information

Image Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline

Image Representation & Visualization Basic Imaging Algorithms Shape Representation and Analysis. outline mage Vsualzaton mage Vsualzaton mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and Analyss outlne mage Representaton & Vsualzaton Basc magng Algorthms Shape Representaton and

More information

SVM-based Learning for Multiple Model Estimation

SVM-based Learning for Multiple Model Estimation SVM-based Learnng for Multple Model Estmaton Vladmr Cherkassky and Yunqan Ma Department of Electrcal and Computer Engneerng Unversty of Mnnesota Mnneapols, MN 55455 {cherkass,myq}@ece.umn.edu Abstract:

More information

A fast algorithm for color image segmentation

A fast algorithm for color image segmentation Unersty of Wollongong Research Onlne Faculty of Informatcs - Papers (Arche) Faculty of Engneerng and Informaton Scences 006 A fast algorthm for color mage segmentaton L. Dong Unersty of Wollongong, lju@uow.edu.au

More information

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data

A Fast Content-Based Multimedia Retrieval Technique Using Compressed Data A Fast Content-Based Multmeda Retreval Technque Usng Compressed Data Borko Furht and Pornvt Saksobhavvat NSF Multmeda Laboratory Florda Atlantc Unversty, Boca Raton, Florda 3343 ABSTRACT In ths paper,

More information

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS

A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Proceedngs of the Wnter Smulaton Conference M E Kuhl, N M Steger, F B Armstrong, and J A Jones, eds A MOVING MESH APPROACH FOR SIMULATION BUDGET ALLOCATION ON CONTINUOUS DOMAINS Mark W Brantley Chun-Hung

More information

Lecture 5: Multilayer Perceptrons

Lecture 5: Multilayer Perceptrons Lecture 5: Multlayer Perceptrons Roger Grosse 1 Introducton So far, we ve only talked about lnear models: lnear regresson and lnear bnary classfers. We noted that there are functons that can t be represented

More information

Wavefront Reconstructor

Wavefront Reconstructor A Dstrbuted Smplex B-Splne Based Wavefront Reconstructor Coen de Vsser and Mchel Verhaegen 14-12-201212 2012 Delft Unversty of Technology Contents Introducton Wavefront reconstructon usng Smplex B-Splnes

More information

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique

The Greedy Method. Outline and Reading. Change Money Problem. Greedy Algorithms. Applications of the Greedy Strategy. The Greedy Method Technique //00 :0 AM Outlne and Readng The Greedy Method The Greedy Method Technque (secton.) Fractonal Knapsack Problem (secton..) Task Schedulng (secton..) Mnmum Spannng Trees (secton.) Change Money Problem Greedy

More information

4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 10, OCTOBER 2016

4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 10, OCTOBER 2016 4580 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 25, NO. 0, OCTOBER 206 Sparse Representaton Wth Spato-Temporal Onlne Dctonary Learnng for Promsng Vdeo Codng Wenru Da, Member, IEEE, Yangme Shen, Xn Tang,

More information

The Codesign Challenge

The Codesign Challenge ECE 4530 Codesgn Challenge Fall 2007 Hardware/Software Codesgn The Codesgn Challenge Objectves In the codesgn challenge, your task s to accelerate a gven software reference mplementaton as fast as possble.

More information

Feature-Area Optimization: A Novel SAR Image Registration Method

Feature-Area Optimization: A Novel SAR Image Registration Method Feature-Area Optmzaton: A Novel SAR Image Regstraton Method Fuqang Lu, Fukun B, Lang Chen, Hao Sh and We Lu Abstract Ths letter proposes a synthetc aperture radar (SAR) mage regstraton method named Feature-Area

More information

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like:

Outline. Self-Organizing Maps (SOM) US Hebbian Learning, Cntd. The learning rule is Hebbian like: Self-Organzng Maps (SOM) Turgay İBRİKÇİ, PhD. Outlne Introducton Structures of SOM SOM Archtecture Neghborhoods SOM Algorthm Examples Summary 1 2 Unsupervsed Hebban Learnng US Hebban Learnng, Cntd 3 A

More information

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS

NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS ARPN Journal of Engneerng and Appled Scences 006-017 Asan Research Publshng Network (ARPN). All rghts reserved. NUMERICAL SOLVING OPTIMAL CONTROL PROBLEMS BY THE METHOD OF VARIATIONS Igor Grgoryev, Svetlana

More information

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines

A Modified Median Filter for the Removal of Impulse Noise Based on the Support Vector Machines A Modfed Medan Flter for the Removal of Impulse Nose Based on the Support Vector Machnes H. GOMEZ-MORENO, S. MALDONADO-BASCON, F. LOPEZ-FERRERAS, M. UTRILLA- MANSO AND P. GIL-JIMENEZ Departamento de Teoría

More information

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15

CS434a/541a: Pattern Recognition Prof. Olga Veksler. Lecture 15 CS434a/541a: Pattern Recognton Prof. Olga Veksler Lecture 15 Today New Topc: Unsupervsed Learnng Supervsed vs. unsupervsed learnng Unsupervsed learnng Net Tme: parametrc unsupervsed learnng Today: nonparametrc

More information

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision

SLAM Summer School 2006 Practical 2: SLAM using Monocular Vision SLAM Summer School 2006 Practcal 2: SLAM usng Monocular Vson Javer Cvera, Unversty of Zaragoza Andrew J. Davson, Imperal College London J.M.M Montel, Unversty of Zaragoza. josemar@unzar.es, jcvera@unzar.es,

More information

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros.

Fitting & Matching. Lecture 4 Prof. Bregler. Slides from: S. Lazebnik, S. Seitz, M. Pollefeys, A. Effros. Fttng & Matchng Lecture 4 Prof. Bregler Sldes from: S. Lazebnk, S. Setz, M. Pollefeys, A. Effros. How do we buld panorama? We need to match (algn) mages Matchng wth Features Detect feature ponts n both

More information

Lecture #15 Lecture Notes

Lecture #15 Lecture Notes Lecture #15 Lecture Notes The ocean water column s very much a 3-D spatal entt and we need to represent that structure n an economcal way to deal wth t n calculatons. We wll dscuss one way to do so, emprcal

More information

Parallel matrix-vector multiplication

Parallel matrix-vector multiplication Appendx A Parallel matrx-vector multplcaton The reduced transton matrx of the three-dmensonal cage model for gel electrophoress, descrbed n secton 3.2, becomes excessvely large for polymer lengths more

More information

Mathematics 256 a course in differential equations for engineering students

Mathematics 256 a course in differential equations for engineering students Mathematcs 56 a course n dfferental equatons for engneerng students Chapter 5. More effcent methods of numercal soluton Euler s method s qute neffcent. Because the error s essentally proportonal to the

More information

Support Vector Machines

Support Vector Machines Support Vector Machnes Decson surface s a hyperplane (lne n 2D) n feature space (smlar to the Perceptron) Arguably, the most mportant recent dscovery n machne learnng In a nutshell: map the data to a predetermned

More information

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach

Skew Angle Estimation and Correction of Hand Written, Textual and Large areas of Non-Textual Document Images: A Novel Approach Angle Estmaton and Correcton of Hand Wrtten, Textual and Large areas of Non-Textual Document Images: A Novel Approach D.R.Ramesh Babu Pyush M Kumat Mahesh D Dhannawat PES Insttute of Technology Research

More information

Classifier Selection Based on Data Complexity Measures *

Classifier Selection Based on Data Complexity Measures * Classfer Selecton Based on Data Complexty Measures * Edth Hernández-Reyes, J.A. Carrasco-Ochoa, and J.Fco. Martínez-Trndad Natonal Insttute for Astrophyscs, Optcs and Electroncs, Lus Enrque Erro No.1 Sta.

More information

Local Quaternary Patterns and Feature Local Quaternary Patterns

Local Quaternary Patterns and Feature Local Quaternary Patterns Local Quaternary Patterns and Feature Local Quaternary Patterns Jayu Gu and Chengjun Lu The Department of Computer Scence, New Jersey Insttute of Technology, Newark, NJ 0102, USA Abstract - Ths paper presents

More information

Kernel Collaborative Representation Classification Based on Adaptive Dictionary Learning

Kernel Collaborative Representation Classification Based on Adaptive Dictionary Learning Internatonal Journal of Intellgent Informaton Systems 2018; 7(2): 15-22 http://www.scencepublshnggroup.com/j/js do: 10.11648/j.js.20180702.11 ISSN: 2328-7675 (Prnt); ISSN: 2328-7683 (Onlne) Kernel Collaboratve

More information

INTER-BLOCK CONSISTENT SOFT DECODING OF JPEG IMAGES WITH SPARSITY AND GRAPH-SIGNAL SMOOTHNESS PRIORS

INTER-BLOCK CONSISTENT SOFT DECODING OF JPEG IMAGES WITH SPARSITY AND GRAPH-SIGNAL SMOOTHNESS PRIORS INTER-BLOCK CONSISTENT SOFT DECODING OF IMAGES WITH SPARSITY AND GRAPH-SIGNAL SMOOTHNESS PRIORS Xanmng Lu 1,2, Gene Cheung 2, Xaoln Wu 3, Debn Zhao 1 1 School of Computer Scence and Technology, Harbn Insttute

More information

High-Boost Mesh Filtering for 3-D Shape Enhancement

High-Boost Mesh Filtering for 3-D Shape Enhancement Hgh-Boost Mesh Flterng for 3-D Shape Enhancement Hrokazu Yagou Λ Alexander Belyaev y Damng We z Λ y z ; ; Shape Modelng Laboratory, Unversty of Azu, Azu-Wakamatsu 965-8580 Japan y Computer Graphcs Group,

More information

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1

Outline. Discriminative classifiers for image recognition. Where in the World? A nearest neighbor recognition example 4/14/2011. CS 376 Lecture 22 1 4/14/011 Outlne Dscrmnatve classfers for mage recognton Wednesday, Aprl 13 Krsten Grauman UT-Austn Last tme: wndow-based generc obect detecton basc ppelne face detecton wth boostng as case study Today:

More information

Analysis of Continuous Beams in General

Analysis of Continuous Beams in General Analyss of Contnuous Beams n General Contnuous beams consdered here are prsmatc, rgdly connected to each beam segment and supported at varous ponts along the beam. onts are selected at ponts of support,

More information

Active Contours/Snakes

Active Contours/Snakes Actve Contours/Snakes Erkut Erdem Acknowledgement: The sldes are adapted from the sldes prepared by K. Grauman of Unversty of Texas at Austn Fttng: Edges vs. boundares Edges useful sgnal to ndcate occludng

More information

Hybrid Non-Blind Color Image Watermarking

Hybrid Non-Blind Color Image Watermarking Hybrd Non-Blnd Color Image Watermarkng Ms C.N.Sujatha 1, Dr. P. Satyanarayana 2 1 Assocate Professor, Dept. of ECE, SNIST, Yamnampet, Ghatkesar Hyderabad-501301, Telangana 2 Professor, Dept. of ECE, AITS,

More information

Enhanced AMBTC for Image Compression using Block Classification and Interpolation

Enhanced AMBTC for Image Compression using Block Classification and Interpolation Internatonal Journal of Computer Applcatons (0975 8887) Volume 5 No.0, August 0 Enhanced AMBTC for Image Compresson usng Block Classfcaton and Interpolaton S. Vmala Dept. of Comp. Scence Mother Teresa

More information

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters

Proper Choice of Data Used for the Estimation of Datum Transformation Parameters Proper Choce of Data Used for the Estmaton of Datum Transformaton Parameters Hakan S. KUTOGLU, Turkey Key words: Coordnate systems; transformaton; estmaton, relablty. SUMMARY Advances n technologes and

More information

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation

Quality Improvement Algorithm for Tetrahedral Mesh Based on Optimal Delaunay Triangulation Intellgent Informaton Management, 013, 5, 191-195 Publshed Onlne November 013 (http://www.scrp.org/journal/m) http://dx.do.org/10.36/m.013.5601 Qualty Improvement Algorthm for Tetrahedral Mesh Based on

More information

An efficient method to build panoramic image mosaics

An efficient method to build panoramic image mosaics An effcent method to buld panoramc mage mosacs Pattern Recognton Letters vol. 4 003 Dae-Hyun Km Yong-In Yoon Jong-Soo Cho School of Electrcal Engneerng and Computer Scence Kyungpook Natonal Unv. Abstract

More information

Robust Dictionary Learning with Capped l 1 -Norm

Robust Dictionary Learning with Capped l 1 -Norm Proceedngs of the Twenty-Fourth Internatonal Jont Conference on Artfcal Intellgence (IJCAI 205) Robust Dctonary Learnng wth Capped l -Norm Wenhao Jang, Fepng Ne, Heng Huang Unversty of Texas at Arlngton

More information

K-means and Hierarchical Clustering

K-means and Hierarchical Clustering Note to other teachers and users of these sldes. Andrew would be delghted f you found ths source materal useful n gvng your own lectures. Feel free to use these sldes verbatm, or to modfy them to ft your

More information

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data

Type-2 Fuzzy Non-uniform Rational B-spline Model with Type-2 Fuzzy Data Malaysan Journal of Mathematcal Scences 11(S) Aprl : 35 46 (2017) Specal Issue: The 2nd Internatonal Conference and Workshop on Mathematcal Analyss (ICWOMA 2016) MALAYSIAN JOURNAL OF MATHEMATICAL SCIENCES

More information

Object-Based Techniques for Image Retrieval

Object-Based Techniques for Image Retrieval 54 Zhang, Gao, & Luo Chapter VII Object-Based Technques for Image Retreval Y. J. Zhang, Tsnghua Unversty, Chna Y. Y. Gao, Tsnghua Unversty, Chna Y. Luo, Tsnghua Unversty, Chna ABSTRACT To overcome the

More information

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION

A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION 1 THE PUBLISHING HOUSE PROCEEDINGS OF THE ROMANIAN ACADEMY, Seres A, OF THE ROMANIAN ACADEMY Volume 4, Number 2/2003, pp.000-000 A PATTERN RECOGNITION APPROACH TO IMAGE SEGMENTATION Tudor BARBU Insttute

More information

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005

Exercises (Part 4) Introduction to R UCLA/CCPR. John Fox, February 2005 Exercses (Part 4) Introducton to R UCLA/CCPR John Fox, February 2005 1. A challengng problem: Iterated weghted least squares (IWLS) s a standard method of fttng generalzed lnear models to data. As descrbed

More information

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010

Simulation: Solving Dynamic Models ABE 5646 Week 11 Chapter 2, Spring 2010 Smulaton: Solvng Dynamc Models ABE 5646 Week Chapter 2, Sprng 200 Week Descrpton Readng Materal Mar 5- Mar 9 Evaluatng [Crop] Models Comparng a model wth data - Graphcal, errors - Measures of agreement

More information

Wishing you all a Total Quality New Year!

Wishing you all a Total Quality New Year! Total Qualty Management and Sx Sgma Post Graduate Program 214-15 Sesson 4 Vnay Kumar Kalakband Assstant Professor Operatons & Systems Area 1 Wshng you all a Total Qualty New Year! Hope you acheve Sx sgma

More information

Hierarchical clustering for gene expression data analysis

Hierarchical clustering for gene expression data analysis Herarchcal clusterng for gene expresson data analyss Gorgo Valentn e-mal: valentn@ds.unm.t Clusterng of Mcroarray Data. Clusterng of gene expresson profles (rows) => dscovery of co-regulated and functonally

More information

Learning Discriminative Data Fitting Functions for Blind Image Deblurring

Learning Discriminative Data Fitting Functions for Blind Image Deblurring Learnng Dscrmnatve Data Fttng Functons for Blnd Image Deblurrng Jnshan Pan 1 Jangxn Dong 2 Yu-Wng Ta 3 Zhxun Su 2 Mng-Hsuan Yang 4 1 Nanng Unversty of Scence and Technology 2 Dalan Unversty of Technology

More information

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements

2x x l. Module 3: Element Properties Lecture 4: Lagrange and Serendipity Elements Module 3: Element Propertes Lecture : Lagrange and Serendpty Elements 5 In last lecture note, the nterpolaton functons are derved on the bass of assumed polynomal from Pascal s trangle for the fled varable.

More information

Robust visual tracking based on Informative random fern

Robust visual tracking based on Informative random fern 5th Internatonal Conference on Computer Scences and Automaton Engneerng (ICCSAE 205) Robust vsual trackng based on Informatve random fern Hao Dong, a, Ru Wang, b School of Instrumentaton Scence and Opto-electroncs

More information

Programming in Fortran 90 : 2017/2018

Programming in Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Programmng n Fortran 90 : 2017/2018 Exercse 1 : Evaluaton of functon dependng on nput Wrte a program who evaluate the functon f (x,y) for any two user specfed values

More information

A Bilinear Model for Sparse Coding

A Bilinear Model for Sparse Coding A Blnear Model for Sparse Codng Davd B. Grmes and Rajesh P. N. Rao Department of Computer Scence and Engneerng Unversty of Washngton Seattle, WA 98195-2350, U.S.A. grmes,rao @cs.washngton.edu Abstract

More information

An Image Compression Algorithm based on Wavelet Transform and LZW

An Image Compression Algorithm based on Wavelet Transform and LZW An Image Compresson Algorthm based on Wavelet Transform and LZW Png Luo a, Janyong Yu b School of Chongqng Unversty of Posts and Telecommuncatons, Chongqng, 400065, Chna Abstract a cylpng@63.com, b y27769864@sna.cn

More information

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices

Steps for Computing the Dissimilarity, Entropy, Herfindahl-Hirschman and. Accessibility (Gravity with Competition) Indices Steps for Computng the Dssmlarty, Entropy, Herfndahl-Hrschman and Accessblty (Gravty wth Competton) Indces I. Dssmlarty Index Measurement: The followng formula can be used to measure the evenness between

More information