1 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY Coherent Multiscale Image Processing Using Dual-Tree Quaternion Wavelets Wai Lam Chan, Student Member, IEEE, Hyeokho Choi, and Richard G. Baraniuk, Fellow, IEEE Abstract The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT s coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy. Index Terms Coherent processing, dual-tree, multiscale disparity estimation, phase, quaternion, wavelets. I. INTRODUCTION THE encoding and estimation of the relative locations of image features play an important role in many image processing applications, ranging from feature detection and target recognition to image compression. In edge detection, for example, the goal is to locate object boundaries in an image. In image denoising or compression, state-of-the-art techniques achieve significant performance improvements by exploiting information on the relative locations of large transform coefficients [1] [4]. An efficient way to compute and represent relative location information in signals is through the phase of the Fourier transform. The Fourier shift theorem provides a simple linear relationship between the signal shift and the Fourier phase. When only a local region of the signal is of interest, the short-time Fourier transform (STFT) provides a local Fourier phase for each windowed portion of the signal. The use of the Fourier phase to decipher the relative locations of image features is well Manuscript received April 4, This work was supported in part by the National Science Foundation under Grant CCF , in part by the Office of Naval Research under Grant N , in part by the AFOSR under Grant FA , in part by the AFRL under Grant FA , and in part by the Texas Instruments Leadership University Program. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. LJubiša Stanković. W. L. Chan and R. G. Baraniuk are with the Department of Electrical and Computer Engineering, Rice University, Houston, TX USA ( wailam@rice.edu; richb@rice.edu). H. Choi, deceased, was with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC USA. Color versions of one or more of the figures in this paper are available online at Digital Object Identifier /TIP Fig. 1. Three real wavelets (from the horizontal, vertical, and diagonal subbands, respectively) from the 2-D DWT basis generated using the length-14 Daubechies filter. established in the image processing and computer vision communities for applications such as stereo matching and image registration [5] [7]. 
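As a concrete illustration of the Fourier shift relationship invoked above (a minimal numerical sketch in Python/NumPy, not part of the original paper; the test signal, the shift value, and all variable names are chosen here purely for illustration), the phase difference between the transforms of a signal and its shifted copy directly yields the shift:

import numpy as np

# Illustrative 1-D signal and a known circular shift (values chosen arbitrarily).
N = 256
n = np.arange(N)
x = np.exp(-0.5 * ((n - 100) / 6.0) ** 2)      # a smooth bump
shift = 7
y = np.roll(x, shift)                          # shifted copy

# Fourier shift theorem: Y(k) = X(k) exp(-2*pi*j*k*shift/N), so the phase
# difference at frequency k is -2*pi*k*shift/N (modulo 2*pi).
X, Y = np.fft.fft(x), np.fft.fft(y)
k = 1                                          # a low frequency whose phase does not wrap
phase_diff = np.angle(Y[k] * np.conj(X[k]))
estimated_shift = -phase_diff * N / (2 * np.pi * k)
print(f"true shift = {shift}, estimated shift = {estimated_shift:.3f}")

The same reasoning, applied locally per windowed region and per scale, is what the QWT phases developed in this paper make available in 2-D.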
Indeed, the classic experiment of Lim and Oppenheim [8] demonstrated that for natural images the Fourier phase contains a wealth of information beyond the magnitude. By the Fourier shift theorem, estimating location information using local phase provides more robust estimates with sub-pixel accuracy and requires less computational effort compared to purely amplitude-based approaches. For signals containing isolated singularities, such as piecewise smooth functions, the discrete wavelet transform (DWT) has proven to be more efficient than the STFT. The locality and zooming properties of the wavelet basis functions lead to a sparse representation of such signals that compacts the signal energy into a small number of coefficients. Wavelet coefficient sparsity is the key enabler of algorithms such as wavelet-based denoising by shrinkage [9]. Many natural images consist of smooth or textured regions separated by edges and are well-suited to wavelet analysis and representation. Other advantages of wavelet analysis include its multiscale structure, invertibility, and linear complexity filter-bank implementation. 2-D DWT basis functions are easily formed as the tensor products of 1-D DWT basis functions along the vertical and horizontal directions; see Fig. 1. The conventional, real-valued DWT, however, suffers from two drawbacks. The first drawback is shift variance: a small shift of the signal causes significant fluctuations in wavelet coefficient energy, making it difficult to extract or model signal information from the coefficient values. The second drawback is the lack of a notion of phase to encode signal location information as in the Fourier case. Complex wavelet transforms (CWTs) provide an avenue to remedy these two drawbacks of the DWT. It is interesting to note that the earliest modern wavelets, those of Grossmann and Morlet [10], were in fact complex, and Grossman continually emphasized the power of the CWT phase for signal analysis and representation. Subsequent researchers have developed orthogonal or biorthogonal CWTs; see, for example, [11] [16]. A productive line of research has developed over the past decade on the dual-tree CWT, which in 1-D combines two orthogonal or biorthogonal wavelet bases using complex algebra /$ IEEE

2 1070 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 Fig. 2. Six complex wavelets from the 2-D dual-tree CWT frame generated from orthogonal near-symmetric filters [23] in the first stage and Q-filters [24] in subsequent stages. (a) Real parts, with approximate even symmetry; (b) imaginary parts, with approximate odd symmetry. into a single system, with one basis corresponding to the real part of the complex wavelet and the other to the imaginary part [17]. Ideally, the real and imaginary wavelets are a Hilbert transform pair (90 out of phase) and form an analytic wavelet supported on only the positive frequencies in the Fourier domain, just like the cosine and sine components of a complex sinusoid. The 1-D dual-tree CWT is a slightly (2 ) redundant tight frame, and the magnitudes of its coefficients are nearly shift invariant [17]. There also exists an approximately linear relationship between the dual-tree CWT phase and the locations of 1-D signal singularities [18] as in the Fourier shift theorem. The 2-D dual-tree CWT for images is based on the theory of the 2-D Hilbert transform (HT) and 2-D analytic signal as first suggested by Hahn [19]. In particular, a 2-D dual-tree complex wavelet is formed using the 1-D HT of the usual 2-D real DWT wavelets in the horizontal and/or vertical directions. The result is a 4 redundant tight frame with six directional subbands oriented at multiples of 15 ; see Fig. 2 [17], [20]. The 2-D CWT is near shift-invariant, and its magnitude-phase representation has a complex phase component that encodes shifts of local 1-D structures in images such as edges and ridges [21]. As a result, the 2-D dual-tree CWT has proved useful for a variety of tasks in image processing [3], [4], [21], [22]. Each 2-D dual-tree CWT basis coefficient has a single phase angle, which encodes the 1-D shift of image features perpendicular to its orientation. This may be sufficient for analyzing local 1-D structures such as edges. However, when the feature under analysis is intrinsically 2-D [25] for example, an image T-junction [26] then its relative location is defined in both the horizontal and vertical directions. This causes ambiguity in the CWT phase shift, whereby we cannot resolve the image shifts in both the horizontal and vertical directions from the change of only one CWT coefficient phase. To overcome this ambiguity, we must conduct a joint analysis with two CWT phases from differently oriented subbands, which can complicate image analysis and modeling considerably. In this paper, we explore an alternative theory for the 2-D HT and analytic signal due to Bülow [25], [27] and show that it leads to an alternative to the 2-D dual-tree CWT. In Bülow s HT, the 2-D analytic signal is defined by limiting the 2-D Fourier spectrum to a single quadrant. Applying this theory within the dual tree framework, we develop and study a new dual-tree quaternion wavelet transform (QWT), where each quaternion wavelet consists of a real part (a usual real DWT wavelet) and three imaginary parts that are organized according to quaternion algebra; see Fig. 3. Our QWT, first proposed in [28] and [29], is a 4 redundant tight frame with three subbands (horizontal, vertical, and diagonal). It is also near shift-invariant. Fig. 3. Three quaternion wavelets from the 2-D dual-tree QWT frame. Each quaternion wavelet comprises four components that are 90 phase shifts of each other in the vertical, horizontal, and both directions. 
(a) Horizontal subband, from left to right: (x) (y) (a usual, real DWT tensor wavelet), (x) (y), (x) (y), (x) (y), j (x; y)j. (b) Vertical subband, from left to right: (x) (y) (a usual, real DWT tensor wavelet), (x) (y), (x) (y), (x) (y), j (x; y)j. (c) Diagonal subband, from left to right: (x) (y) (a usual, real DWT tensor wavelet), (x) (y), (x) (y), (x) (y), j (x; y)j. The image on the far right is the quaternion wavelet magnitude for each subband, a nonoscillating function. The same dual-tree wavelet filters are used as the 2-D dual-tree CWT in Fig. 2. The QWT inherits a quaternion magnitude-phase representation from the quaternion Fourier transform (QFT). The first two QWT phases encode the shifts of image features in the absolute horizontal/vertical coordinate system, while the third phase encodes edge orientation mixtures and texture information. One major focus of this paper is to demonstrate coherent, multiscale processing using the QWT, or in other words, the use of its magnitude and phase for multiscale image analysis. To illustrate the power of coherent processing, we consider two image processing applications. In the first application, we develop a new magnitude-and-phase-based algorithm for edge orientation and offset estimation in local image blocks. Our algorithm is entirely based on the QWT shift theorem and the interpretation of the QWT as a local QFT analysis. In the second application, we design a new multiscale image disparity estimation algorithm. The QWT provides a natural multiscale framework for measuring and adjusting local disparities and performing phase unwrapping from coarse to fine scales with linear computational efficiency. The convenient QWT encoding of location information in the absolute horizontal/vertical coordinate system facilitates averaging across subband estimates for more robust performance. Our algorithm offers sub-pixel estimation accuracy and runs faster than existing disparity estimation algorithms like block matching and phase correlation [30]. When many sharp edges and features are present and the underlying image disparity field is smooth, our method also exhibits superior performance over these existing techniques. Previous work in quaternions and the theory of the 2-D HT and analytic signal for image processing includes Bülow s extension of the Fourier transform and complex Gabor filters to the quaternion Fourier transform (QFT) [25]. Our QWT can be interpreted as a local QFT and, thus, inherits many of its interesting and useful theoretical properties such as the quaternion phase representation, symmetry properties, and shift theorem. In addition, the dual-tree QWT sports a linear-time

3 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1071 and invertible computational algorithm. A different extension of the QFT yields the quaternion wavelet pyramid introduced by Bayro Corrochano [31]; however, the use of Gabor filters limits its performance and renders it noninvertible. There are also interesting connections between the dual-tree QWT and the (nonredundant) quaternion wavelet representations of Ates and Orchard [32] and Hua and Orchard [33]. Finally, we note that there exists a third alternative 2-D HT called the Riesz transform and its associated analytic signal called the monogenic signal [34]. The monogenic signal, generated by spherical quadrature filters, has a vector-valued phase that encodes both the orientations of intrinsically 1-D (edge-like) image features and their shift normal to the edge orientation. Felsberg s extension of the monogenic signal can analyze intrinsically 2-D signals with two phase angles but requires complicated processing such as local orientation estimation and steering of basis filters [35]. This paper is organized as follows. We start by briefly reviewing the DWT and dual-tree CWT in Section II. Section III develops the dual-tree QWT, and Section IV discusses some of its important properties, in particular its phase response to singularities. We develop and demonstrate the QWT-based edge geometry and disparity estimation algorithms in Section V. Section VI concludes the paper with a discussion of the QWT s potential for future applications. The Appendix contains detailed derivations and proofs of some of the QWT properties and theorems from Sections IV and V. II. REAL AND COMPLEX WAVELET TRANSFORMS This section overviews the real DWT and the dual-tree CWT. We also develop a new formulation for the 2-D dual-tree CWT using the theory of 2-D HTs. A. Real DWT The real DWT represents a 1-D real-valued signal in terms of shifted versions of a scaling function and shifted and scaled versions of a wavelet function [36]. The functions and,, form an orthonormal basis, and we can represent any as where and are the scaling and wavelet coefficients, respectively. The parameter sets the coarsest scale space that is spanned by. Behind each wavelet transform is a filterbank based on lowpass and highpass filters. The standard real 2-D DWT is obtained using tensor products of 1-D DWTs over the horizontal and vertical dimensions. The result is the scaling function and three subband wavelets,, and that are oriented in the horizontal, vertical, and diagonal directions, respectively [36] (see Fig. 1). The real wavelet transform suffers from shift variance; that is, a small shift in the signal can greatly perturb the magnitude (1) Fig. 4. One-dimensional dual-tree CWT is implemented using a pair of filter banks operating on the same data simultaneously. Outputs of the filter banks are the dual-tree scaling coefficients, c and c, and the wavelet coefficients, d and d, at scale ` and shift p. The CWT coefficients are then obtained as d + jd. of wavelet coefficients around singularities. It also lacks a notion of phase to encode signal location information and suffers from aliasing [37]. These issues complicate modeling and information extraction in the wavelet domain. B. Dual-Tree CWT The 1-D dual-tree CWT expands a real-valued signal in terms of two sets of wavelet and scaling functions obtained from two independent filterbanks [17], as shown in Fig. 4. 
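To make the structure of Fig. 4 concrete, the following Python/NumPy sketch (ours, for illustration only) mimics the two parallel filter banks and the combination of their wavelet outputs into complex coefficients. The filters shown are placeholders (Haar in both trees), so the result is not an approximately analytic wavelet; an actual dual-tree CWT uses filter pairs designed so that the two wavelets form an approximate Hilbert-transform pair (e.g., the near-symmetric and Q-shift filters of [23], [24]) together with an appropriate relative delay between the trees.

import numpy as np

def analysis_stage(x, lo, hi):
    # One level of a critically sampled filter bank: filter, then downsample by 2.
    c = np.convolve(x, lo, mode="same")[::2]   # scaling (lowpass) coefficients
    d = np.convolve(x, hi, mode="same")[::2]   # wavelet (highpass) coefficients
    return c, d

def dual_tree_cwt_1d(x, filt_h, filt_g, levels=3):
    # Structural sketch of Fig. 4: two real DWTs run in parallel on the same data;
    # their wavelet coefficients are combined as d_h + j*d_g at each scale.
    coeffs = []
    ch, cg = x.astype(float), x.astype(float)
    for _ in range(levels):
        ch, dh = analysis_stage(ch, *filt_h)
        cg, dg = analysis_stage(cg, *filt_g)
        coeffs.append(dh + 1j * dg)            # complex wavelet coefficients
    return coeffs, (ch, cg)

# Placeholder filters (Haar in both trees) -- NOT a Hilbert pair; they only
# illustrate the data flow of the dual-tree structure.
haar = (np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2))
coeffs, scaling = dual_tree_cwt_1d(np.random.randn(256), haar, haar, levels=3)
print([c.shape for c in coeffs])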
We will use the notation φ_b and ψ_b to denote the scaling and wavelet functions and c_b and d_b to denote their corresponding coefficients, where b ∈ {h, g} specifies a particular set of wavelet filters. The wavelet functions ψ_h and ψ_g from the two trees play the role of the real and imaginary parts of a complex analytic wavelet. The imaginary wavelet ψ_g is the 1-D HT of the real wavelet ψ_h. The combined system is a 2× redundant tight frame that, by virtue of the fact that the magnitude of the complex wavelet ψ_h(x) + jψ_g(x) is nonoscillating, is near shift-invariant. 1 It is useful to recall that the Fourier transform Ψ_g(ω) of the imaginary wavelet equals -jΨ_h(ω) when ω > 0 and jΨ_h(ω) when ω < 0. Thus, the Fourier transform of the complex wavelet function ψ_h(x) + jψ_g(x) has no energy (or little in practice) in the negative frequency region, 2 making it an analytic signal [17]. C. Hilbert Transforms and 2-D CWT Extending the 1-D CWT to 2-D requires an extension of the HT and analytic signal. There exist not one but several different definitions of the 2-D analytic signal that each zero out a different portion of the 2-D frequency plane [27]. We will consider two definitions. The first, proposed by Hahn in [19], employs complex algebra and zeros out frequencies on all but a single quadrant (ω_1 ≥ 0, ω_2 ≥ 0, for example, where (ω_1, ω_2) indexes the 2-D frequency plane). In this formulation, the complete 2-D analytic signal consists of two parts: one having spectrum on the upper right quadrant (ω_1 ≥ 0, ω_2 ≥ 0) and the other on the upper left quadrant (ω_1 ≤ 0, ω_2 ≥ 0) [27]. 1 A finitely supported function can never be exactly analytic [37]. In practice, we can only design finite-length complex wavelets that are approximately analytic, and, thus, the CWT is only approximately shift-invariant [17], [20]. 2 Note that the Fourier transform of the complex scaling function, Φ_h(ω) + jΦ_g(ω), is only approximately analytic in practice, and so its support will leak into the negative frequency region.

4 1072 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 Definition 1 [19]: Let be a real-valued, 2-D function. The complete 2-D complex analytic signal is defined in the space domain,, as the pair of complex signals where (2) (3) The function is the total HT; the functions and are the partial HTs; and are impulse sheets along the axis and axis, respectively; and denotes 2-D convolution. The 2-D complex analytic signal in (2) (3) is the notion behind the 2-D dual-tree CWT [17], [20]. Each 2-D CWT basis function is a 2-D complex analytic signal consisting of a standard DWT tensor wavelet plus three additional real wavelets obtained by 1-D HTs along either or both coordinates. For example, starting from real DWT s diagonal-subband tensor product wavelet from above, we obtain from (4) (6) its partial and total HTs From Definition 1, we then obtain the two complex wavelets having orientations, 45 and, respectively. Similar expressions can be obtained for the other two subbands and ) based on and. Each 2-D CWT coefficient has only a single phase angle, which encodes the 1-D shift of image features perpendicular to its subband direction. Fig. 5(a) illustrates this phase-shift property. This encoding may be sufficient for local 1-D structures such as edges, since we can define edge shifts uniquely by a single value, say, in the direction perpendicular to the edge. However, even in this case, the analysis is not so straightforward when the edge does not align with the six orientations of the CWT subbands. Moreover, shifts of intrinsically 2-D (nonedge) image features such as in Fig. 5(a) require two values in the and directions, respectively. This creates ambiguity in the CWT phase shift. We can resolve this ambiguity by using the coefficients from two CWT subbands, but this complicates the use of the CWT for image analysis, modeling, and other image processing applications. In contrast, Fig. 5(b) illustrates (4) (5) (6) (7) (8) Fig. 5. (a) CWT coefficient s single phase angle responds linearly to image shift r in a direction orthogonal to the wavelet s orientation. (b) Two of the QWT coefficient s three phase angles respond linearly to image shifts (r ;r ) in an absolute horizontal/vertical coordinate system. a more convenient encoding of image shifts in absolute -coordinates (with two phase angles) using the quaternion phases of our new QWT, to which we now turn our attention. III. QUATERNION WAVELET TRANSFORM (QWT) A. Quaternion Hilbert Transform There are several alternatives to the 2-D analytic signal of Definition 1; we focus here on one due to Bülow [27]. It combines the partial and total HTs from (4) (6) to form an analytic signal comprising a real part and three imaginary components that are manipulated using quaternion algebra [25]. The set of quaternions has multiplication rules and, as well as component-wise addition and multiplication by real numbers [38]. Additional multiplication rules include:,, and. Note that quaternionic multiplication is not commutative. The conjugate of a quaternion is defined by while the magnitude is defined as. An alternative representation for a quaternion is through its magnitude and three phase angles: [25], where are the quaternion phase angles, computed using the following formulae (for normalized, i.e., ) and, in the regular case (i.e., when ) (9) (10) (11) In the singular case, i.e., when,, and are not uniquely defined. Only the sum (if ) or the difference (if )of and is unique [25]. 
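The multiplication rules just listed are easy to verify numerically. The following minimal quaternion class (an illustrative sketch of ours, not part of the paper; class and method names are arbitrary) implements the Hamilton product together with the conjugate and magnitude defined above, and makes the noncommutativity explicit:

import numpy as np

class Quaternion:
    # Minimal quaternion q = a + b*i + c*j + d*k with the rules
    # i^2 = j^2 = k^2 = -1, ij = k = -ji, jk = i = -kj, ki = j = -ik.
    def __init__(self, a, b, c, d):
        self.a, self.b, self.c, self.d = float(a), float(b), float(c), float(d)

    def __mul__(self, o):
        a1, b1, c1, d1 = self.a, self.b, self.c, self.d
        a2, b2, c2, d2 = o.a, o.b, o.c, o.d
        return Quaternion(a1*a2 - b1*b2 - c1*c2 - d1*d2,
                          a1*b2 + b1*a2 + c1*d2 - d1*c2,
                          a1*c2 - b1*d2 + c1*a2 + d1*b2,
                          a1*d2 + b1*c2 - c1*b2 + d1*a2)

    def conj(self):
        # Conjugate: negate the three imaginary components.
        return Quaternion(self.a, -self.b, -self.c, -self.d)

    def __abs__(self):
        # Magnitude: square root of the sum of squares of all four components.
        return float(np.sqrt(self.a**2 + self.b**2 + self.c**2 + self.d**2))

    def __repr__(self):
        return f"{self.a:+.3g} {self.b:+.3g}i {self.c:+.3g}j {self.d:+.3g}k"

i, j, k = Quaternion(0, 1, 0, 0), Quaternion(0, 0, 1, 0), Quaternion(0, 0, 0, 1)
print(i * j)   # +0 +0i +0j +1k   (ij = k)
print(j * i)   # +0 +0i +0j -1k   (ji = -k: multiplication is not commutative)
print(i * i)   # -1               (i^2 = -1)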
If the angles (φ, θ, ψ) calculated from (9)-(11) satisfy e^{iφ} e^{kψ} e^{jθ} = -q, subtract π from φ if φ ≥ 0; add π to φ if φ < 0. As a result,

5 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1073 each quaternion phase angle is uniquely defined within the range. The operation of conjugation in the usual set of complex numbers,, where and, is a so-called algebra involution that fulfills the two following properties for any : and. In, there are three nontrivial algebra involutions (12) (13) (14) Using these involutions we can extend the definition of Hermitian symmetry. A function is called quaternionic Hermitian if, for each and (15) Bülow introduces an alternative definition of 2-D analytic signal based on the quaternion Fourier transform (QFT) [25]. The QFT of a 2-D signal is given by (16) where denotes the QFT operator, indexes the QFT domain, and the quaternion exponential (17) is the QFT basis function. The real part of (17) is, while the other three quaternionic components are its partial and total HTs as defined in (4) (6). Note that the QFT of a real-valued signal is quaternionic Hermitian, and each QFT basis function satisfies the definition of a 2-D quaternion analytic signal. Definition 2 [27]: Let be a real-valued 2-D signal. The 2-D quaternion analytic signal is defined as (18) where the functions,, and are defined as in (4) (6). B. QWT Construction Our new 2-D dual-tree QWT rests on the quaternion definition of 2-D analytic signal. By organizing the four quadrature components of a 2-D wavelet (the real wavelet and its 2-D HTs) as a quaternion, we obtain a 2-D analytic wavelet and its associated quaternion wavelet transform (QWT). For example, for the diagonal subband, with, we obtain the quaternion wavelet (19) Fig. 6. Quaternion Fourier domain relationships among the four quadrature components of a quaternion wavelet (x; y) in the diagonal subband. The QFT spectra of the real wavelet (x) (y) in the first to fourth quadrants are denoted by (F ;F ;F ;F ), respectively. The partial and total Hilbert transform operations are equivalent to multiplying the quadrants of F f (x) (y)g in (a) by 6j, or6j, or both. (a) F f (x) (y)g, (b) F f (x) (y)g, (c) F f (x) (y)g, (d) F f (x) (y)g. To compute the QWT coefficients, we can use a separable 2-D implementation [20] of the dual-tree filter bank in Fig. 4. During each stage of filtering, we independently apply the two sets of and wavelet filters to each dimension ( and ) of a 2-D image; for instance, applying the set of filters to both dimensions yields the scaling coefficients and the diagonal, vertical, and horizontal wavelet coefficients,,, and, respectively. Therefore, the resulting 2-D dual-tree implementation comprises four independent filter banks [corresponding to all possible combinations of wavelet filters applied to each dimension (,,, and )] operating on the same 2-D image. We combine the wavelet coefficients of the same subband from the output of each filter bank using quaternion algebra to obtain the QWT coefficients; for example, for the diagonal subband:. Fig. 3(c) illustrates the four components of a quaternion wavelet and its quaternion magnitude for the diagonal subband. The partial and total HT components resemble but are phase-shifted by 90 in the horizontal, vertical, and both directions. The magnitude of each quaternion wavelet (square root of the sum-of-squares of all four components) is a smooth bell-shaped function. We can also interpret the four components of in the QFT domain as multiplying the quadrants of the QFT of by or, or both, as shown in Fig. 6. 
Note that the order of multiplication is important because quaternion multiplication is noncommutative. This quaternion wavelet has support in only a single quadrant of the QFT domain (see Appendix A). The construction and properties are similar for the other two subband quaternion wavelets [see the horizontal and vertical subband wavelets in Fig. 3(a) and (b), respectively].
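As a concrete summary of Sections III-A and III-B, the sketch below (Python/NumPy, ours) assembles one quaternion coefficient from the four separable filter-bank outputs and extracts its magnitude and three phase angles. It is illustrative only: the function names are arbitrary, the assignment of the four real coefficients to the quaternion units is an assumed convention rather than the exact combination used in the paper, and the arcsine/arctangent expressions follow one common convention for the factorization q = |q| e^{iφ} e^{kψ} e^{jθ} (Bülow's ordering); they may differ in sign or ordering from the precise forms of (9)-(11).

import numpy as np

def qmult(p, q):
    # Hamilton product of quaternions stored as tuples (a, b, c, d) = a + b*i + c*j + d*k.
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qwt_coefficient(d_hh, d_gh, d_hg, d_gg):
    # One QWT coefficient assembled from the four separable dual-tree outputs
    # (subscripts name the filter set applied along x and y).  The pairing of the
    # partial/total Hilbert-transform components with i, j, k is assumed here.
    return (d_hh, d_gh, d_hg, d_gg)

def quaternion_phases(q):
    # Magnitude and phases (phi, theta, psi) of q under the convention
    # q = |q| e^{i phi} e^{k psi} e^{j theta}, regular (non-singular) case.
    a, b, c, d = q
    mag = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a / mag, b / mag, c / mag, d / mag
    psi = 0.5 * np.arcsin(np.clip(2.0 * (a*d - b*c), -1.0, 1.0))
    phi = 0.5 * np.arctan2(2.0 * (a*b + c*d), a*a + c*c - b*b - d*d)
    theta = 0.5 * np.arctan2(2.0 * (a*c + b*d), a*a + b*b - c*c - d*d)
    # Resolve the ambiguity in phi: if the angles reconstruct -q instead of q,
    # shift phi by pi (the correction rule quoted after (9)-(11)).
    rec = qmult(qmult((np.cos(phi), np.sin(phi), 0.0, 0.0),
                      (np.cos(psi), 0.0, 0.0, np.sin(psi))),
                (np.cos(theta), 0.0, np.sin(theta), 0.0))
    if np.dot(rec, (a, b, c, d)) < 0:
        phi = phi - np.pi if phi >= 0 else phi + np.pi
    return mag, phi, theta, psi

# Round-trip check with illustrative angles (arbitrary values).
phi0, theta0, psi0 = 0.8, -0.4, 0.2
q = qmult(qmult((np.cos(phi0), np.sin(phi0), 0.0, 0.0),
                (np.cos(psi0), 0.0, 0.0, np.sin(psi0))),
          (np.cos(theta0), 0.0, np.sin(theta0), 0.0))
print(quaternion_phases(qwt_coefficient(*q)))   # magnitude 1, angles near (0.8, -0.4, 0.2)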

6 1074 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 In summary, in contrast with the six complex pairs of CWT wavelets (12 functions in total), the QWT sports three quaternion sets of four QWT wavelets (12 functions in total). Finally, note that the quaternion wavelet transform is approximately a windowed quaternion Fourier transform (QFT) [25]. In contrast to the QFT in (16), the basis functions for the QWT are scaled and shifted versions of the quaternion wavelets,, and. IV. QWT PROPERTIES Since the dual-tree QWT is based on combining 1-D CWT functions, it preserves many of the attractive properties of the CWT. Furthermore, the quaternion organization and manipulation provide new features not present in either the 2-D DWT or CWT. In this section, we discuss some of the key properties of the QWT with special emphasis on its phase. A. Tight Frame The QWT comprises four orthonormal basis sets and, thus, forms a 4 redundant tight frame. The components of the QWT wavelets at each scale can be organized in matrix form as (20) page. The columns of contain the complex wavelets oriented at,, and, respectively. We obtain the CWT wavelets by multiplying the matrix in (20) by the unitary matrix (22) Since, the CWT also satisfies the tight-frame property with the same 4 redundancy factor. As we will see in the Section IV-D, both the CWT phase and the QWT phases encode 2-D image feature shifts; however, there exists no straightforward relationship between the phase angles of the QWT and CWT coefficients. C. QWT and QFT To make concrete the interpretation of the QWT as a local QFT, we derive a QFT Plancharel theorem and an inner product formula in the QFT domain. QFT Plancharel Theorem. Let be a real-valued 2-D signal, let be a quaternion-valued 2-D signal, and let and be their respective QFTs. Then the inner product of and in the space domain equals the following inner product in the QFT domain: The frame contains shifted and scaled versions of the functions in plus the scaling functions. Each column of the matrix contains the four components of the quaternion wavelet corresponding to a subband of the QWT. For example, the first column contains the quaternion wavelet components in Fig. 3(c), that is, the tensor product wavelet and its 2-D partial and total HTs in (19). Each row of contains the wavelet functions necessary to form one orthonormal basis set. Since has four rows, the total system is a 4 redundant tight frame. An important consequence is that the QWT is stably invertible. The wavelet coefficients corresponding to the projections onto the functions in can be computed using a 2-D dual-tree filter bank with linear computational complexity. B. Relationship to the 2-D CWT A unitary transformation links the QWT frame elements and coefficients and the 2-D CWT frame elements and coefficients. The components of the CWT wavelets at each scale can be organized in matrix form, as shown in (21) at the bottom of the (23) where and are the algebra involutions defined in (12) (14). The functions and are, respectively, the even and odd components of with respect to the spatial coordinate,asdefined in the QFT convolution theorem [25] (24) (25) We call the right side of (23) the QFT inner product between and. The proof appears in Appendix B. The QFT Plancharel Theorem enables us to interpret the QWT as a local or windowed QFT. Let be a quaternion wavelet at a particular scale and subband and be a real image under analysis. 
Their QFT inner product in (23) gives the corresponding QWT coefficient. Since quaternion wavelets have a single-quadrant QFT spectrum as shown in Appendix A,

7 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1075 the integration limit of the QFT inner product reduces to in the QWT case. D. QWT Phase Properties Recall from Section III-A that each QWT coefficient can be expressed in terms of its magnitude and phase as. We seek a shift theorem for the QWT phase that is analogous to that for the CWT. Since the QWT performs a local QFT analysis, the shift theorem for the QFT [25] holds approximately for the QWT. When we shift an image from to, the QFT phase undergoes the following transformation: (26) where denotes the shift in the horizontal/vertical spatial coordinate system. To transfer the shift theorem from the QFT to the QWT, we exploit the fact that the QWT is approximately a windowed QFT. That is, each quaternion wavelet is approximately a windowed quaternion exponential from (17), and each QWT coefficient is the inner product [as in (23)] between this windowed quaternion exponential and the image. The scale of analysis controls the center frequency of the windowed exponential in the QFT plane. The magnitude and phase of the resulting coefficient are determined by two factors: the spectral content of the image and the center frequency of the wavelet. These two factors determine the frequency parameters we should use in the shift theorem for the QFT (26). We term the effective center frequency for the corresponding wavelet coefficient. Fortunately, for images having a smooth spectrum over the support of the quaternion wavelet in the QFT domain,. Note that should always lie in the first quadrant of the QFT domain (that is, ). Thus, to apply the shift theorem in QWT image analysis, we first estimate the effective center frequency for each QWT coefficient (or assume that ). Then, using (26), we can estimate the shift of one image relative to a second image from the phase change. Conversely, we can estimate the phase shift once we know the image shift. Finally, a quick word on the quirky third QWT phase angle. We can interpret as the relative amplitude of image energy along two orthogonal directions as in [25], which is useful for analyzing texture information. For example, Fig. 7 depicts the function for the diagonal subband as we adjust of the wavelet coefficient. We see a gradual change in appearance from an oriented function to texture-like and back. This property could prove useful for the analysis of images with rich textures [25]. As we describe below in Section V-A, the third phase also relates to the orientation of a single edge. V. QWT APPLICATIONS In this section, we demonstrate the utility of the QWT with two applications in geometrical edge structure and image disparity estimation. Fig. 7. Effect of varying on the structure of the corresponding weighted quaternion wavelet for the diagonal subband (from left to right): = 0(=4), 0(=8), 0, =8,(=4). The corresponding weighted wavelet changes from textured ( =0)to oriented ( = 6(=4)). A. Edge Geometry Estimation Edges are the fundamental building blocks of many real-world images. Roughly speaking, a typical natural image can be considered as a piecewise smooth signal in 2-D that contains blocks of smooth regions separated by edge discontinuities due to object occlusions. Here we use the QWT magnitude and phase to extend the multiscale edge geometry analysis of [18] and [21]. 1) Theory: Consider an image containing a dyadic image block that features a single step edge, as parameterized in Fig. 8(a). 
Note that for an edge oriented at angle, any shift in the directions satisfying the constraint (27) is identical to a shift from the center of the block by in the direction perpendicular to. Our goal in this section is to analyze the phase of the quaternion wavelet coefficient (e.g., for the vertical subband) corresponding to the quaternion wavelet whose support aligns with the dyadic image block containing the edge (the other subbands behave similarly). We will show that and provide an accurate means with which to estimate the edge offset and edge orientation, respectively. First, we establish a linear relationship between and the edge offset using the shift theorem in (26). Recall from Section IV-D that the effective center frequency depends on both the image QFT spectral content and the center frequency of the quaternion wavelet, and it always lies in the first quadrant. Since the spectral energy of the edge QFT,, concentrates along two impulse ridges through the origin having orientations and in the QFT domain [see Fig. 8(b) and Appendix C], we can write, where is a positive constant that depends on, the subband, and the scale of analysis. When the edge passing through the image block center displaces perpendicularly by, the changes in phase angles satisfy and. Plugging and using (27), we obtain the concise formula (28) where we choose when, and when. We have verified this relationship via experimental analysis of straight edges in detail in an earlier paper [28]. Based on the interpretation of the QWT as a local QFT, we use the inner product formula in (23) to analyze the behavior of

8 1076 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 Fig. 8. (a) Parameterization of a single edge in a dyadic image block (a wedgelet [39]). (b) QFT spectrum of the edge; shaded squares represent the quaternion wavelets in the horizonal, vertical, and diagonal subbands. The energy of the edge is concentrated along the two dark lines crossing at the origin and is captured by the vertical subband with effective center frequency at quaternion frequency (u; v). The region bounded by the dashed line demonstrates the spectral support of the QWT basis leaking into the neighboring quadrant. (a) Single-edge model (wedgelet); (b) edge QFT spectrum. for the same edge block. The QWT coefficient can be computed from the QFT inner product between and the QFT of the edge image (similarly for the other subbands). Our analysis in Appendix D states that if the quaternion wavelet is perfectly analytic, then regardless of, when and when. Note that this corresponds to the singular case in the quaternion phase calculation in Section III-A. However, practical quaternion wavelets will not be perfectly analytic, and so their QFT support will leak into other quadrants of the QFT domain as in Fig. 8(b). This necessitates the more in-depth analysis of Appendix E, which shows that in this case (29) where is a measure of the ratio of local signal energy in the positive quadrant to the energy in the leakage quadrant. For the vertical subband as shown in Fig. 8(b), when the edge orientation changes from 0 to 45, this ratio changes from 1 to 0 and, thus, changes from 0 to. We model this behavior of in the horizontal and vertical subbands to design an edge orientation estimation in Section V-A-II. Since the diagonal subband wavelet has QFT support distant from the leakage quadrants, the QWT subband coefficients are almost unaffected by leakage (i.e., ). Their corresponding approximately equal and do not vary with. 2) Practice: Based on the above analysis, we propose a hybrid algorithm to estimate the edge geometry based on the QWT phase and the magnitude ratios between the three subbands. We generate a set of wedgelets with known and [see Fig. 8(a)] [39] for analysis and testing. Our algorithm is reminiscent of the edge estimation scheme in [21]. To estimate the edge orientation, we use both the magnitude ratios among the three subbands and of the subband with the largest magnitude. The subband with the largest magnitude gives the approximate orientation of the edge ( for diagonal, for horizontal, and for vertical); the sign of tells whether the direction is positive or negative. We experimentally analyze the QWT magnitude ratios and of the set of generated wedgelets corresponding to changing edge orientations by multiples of 5. Using standard curve-fitting techniques, we develop a simple relationship between these parameters and for our orientation estimation scheme. The resulting orientation estimation algorithm achieves a maximum error of only in practice for ideal edges. To estimate the offset of the edge, we apply the relationship between and in (28). We use from either the horizontal or vertical subband (whichever has a larger magnitude). Depending on, we compute the sum (or difference) of the change in phase angles for the edge under analysis, using as the reference edge. Upon analysis of the simulated wedgelets with known, we estimate, to be used in (28). 
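For reference, the wedgelet test blocks mentioned above can be generated directly. The following sketch (ours) fixes one plausible parameterization of Fig. 8(a), with orientation alpha in radians and a signed offset r perpendicular to the edge, normalized to the block side length; the exact sign and origin conventions of the figure are not recoverable here and are assumptions, as are the block sizes and parameter values in the example.

import numpy as np

def wedgelet(size, alpha, r, low=0.0, high=1.0):
    # Dyadic block containing a single step edge (a wedgelet, cf. Fig. 8(a) and [39]).
    # alpha: edge orientation (radians); r: signed perpendicular offset of the edge
    # from the block center, normalized to the block side length.
    n = np.arange(size)
    # Pixel-center coordinates normalized to [-0.5, 0.5), origin at the block center.
    x, y = np.meshgrid((n + 0.5) / size - 0.5, (n + 0.5) / size - 0.5)
    # Signed perpendicular distance from the line through the center at angle alpha.
    dist = -np.sin(alpha) * x + np.cos(alpha) * y
    return np.where(dist >= r, high, low)

# A small test set of known (alpha, r) pairs, e.g., orientations every 5 degrees.
blocks = [wedgelet(32, np.deg2rad(a), r)
          for a in range(0, 90, 5) for r in (-0.2, 0.0, 0.2)]
print(len(blocks), blocks[0].shape)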
Our final edge offset estimation algorithm achieves a maximum error of approximately relative to the normalized unit edge length of the dyadic block (that is, sub-pixel accuracy). More details of the experimental analysis for the wedgelet model can be found in [28]. According to (28), within a -range of, the range of is limited to an interval of length, which ensures that the edge stays within the image block under analysis. Therefore, in our offset estimation, we need only consider one -range of and do not need to perform any phase-unwrapping. Finally, we estimate the polarity of the edge. By first obtaining an offset estimate for each polarity of the edge with orientation estimated above, namely and, we use the inner product between the image block and two wedgelets with the estimated edge parameters and to determine the correct polarity. Although our calculation of estimation accuracy is based on the wedgelet model, our algorithm also works well for real-world images such as the popular cameraman image in Fig. 9. Our results demonstrate the close relationship between edge geometry and QWT magnitude and phases, in particular, the encoding of edge location in the QWT phases and the encoding of edge orientation in the QWT magnitude and the third phase. B. Image Disparity Estimation In this section, as another example of QWT-based data processing, we present an algorithm to estimate the local disparities between the target image and the reference image. Disparity estimation is the process of determining the local translations needed to align different regions in two images, that is, the amount of 2-D translation required to move a local region of a target image centered at pixel to align with the region in a reference image centered at the same location. This problem figures prominently in a range of image processing and computer vision tasks, such as video processing to estimate motion between successive frames, timelapse seismic imaging to study changes in a reservoir over time, medical imaging to monitor a patient s body, super-resolution, etc. Recall that the QWT phase property states that a shift in an image changes the QWT phase from to. Thus, for each QWT coefficient, if we

9 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1077 the reference and target images, we can express the possible image shifts of each dyadic block as (30) Fig. 9. Local edge geometry estimation using the QWT. (a) Several edgy regions from the cameraman image; (b) (e) edge estimates from the corresponding QWT coefficients. The upper row shows the original image region, the lower row shows a wedgelet [see Fig. 8(a)] having the edge parameter estimates (; r) (no attempt is made to capture the texture within the block). know, the effective center frequency of the local image region analyzed by the corresponding QWT basis functions, then we can estimate the local image shifts from the phase differences. However, the center frequency is image dependent, in general. To be able to estimate image shifts from QWT phase differences, we first need to estimate for each QWT coefficient. For this estimate, we can again use the QWT phase properties. If we know the image shifts and measure the phase difference, then we can compute. By manually translating the reference image by known small amounts both horizontally and vertically, we obtain two images and. After computing the QWTs of and,we can use the phase differences between the QWT coefficients to obtain estimates for the effective spectral center for each dyadic block across all scales as and. The range of QWT phase angles limits our estimates to and for horizontal and vertical shifts, respectively, where is the length of one side of the dyadic block corresponding to each coefficient. Once we know the center frequency for each QWT coefficient, we can estimate the local image shifts by measuring the difference between the QWT phase corresponding to the same local blocks in image and. A key challenge in phase-based disparity estimation is resolving the phase wrap-around effect due to the limited range of phase angles. Due to phase wrapping, each observed phase difference can be mapped to more than one disparity estimate. Specifically, for QWT phase differences between where and. Depending on, is chosen such that it equals 0 when is even and equals 1 when is odd. The special wrap-around effect in (30) is due to the limited range in and (to and, respectively). In our multiscale disparity estimation technique, we use coarse scale shift estimates to help unwrap the phase in the finer scales. If we assume that the true image shift is small compared to the size of dyadic squares at the coarsest scale, then we can set in (30) at this scale (no phase wrap-around) and obtain correct estimates for and. Effectively, this assumption of no phase wrap-around at the coarsest scale limits the maximum image shift that we can estimate correctly. Once we have shift estimates at scale, for each block at scale, we estimate the shifts as follows. 1) Interpolate the estimates from the previous scale(s) to obtain predicted estimates. 2) Substitute into (30) and determine the such that is closest to. 3) Remove any unreliable. 4) Repeat Steps 1) 3) for the finer scales Step 1) uses either nearest-neighbor interpolation (which gives higher estimation accuracy) or bilinear interpolation (which results in a smoother disparity field for better visual quality). We choose the latter for our simulations in this paper. In Step 3), we use a similar reliability measure as in the confidence mask [25] to threshold unreliable phase and offset estimates. We also threshold based on the magnitude of the QWT coefficients. 
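The selection rule in Step 2) can be written down compactly. The sketch below (ours) treats one coordinate at a time; the same logic applies separately to the horizontal and vertical phase differences. The sign convention tying phase to shift, the candidate range, the function names, and the numbers in the example are assumptions for illustration, and the exact parity-dependent form of (30) is not reproduced.

import numpy as np

def candidate_shifts(wrapped_phase, freq, n_range=range(-2, 3)):
    # All 1-D shifts consistent with a phase observed only modulo 2*pi.
    # wrapped_phase: phase encoding the shift (radians, in (-pi, pi]);
    # freq: effective center frequency (radians per pixel).  This is the generic
    # wrap-around ambiguity behind (30).
    return np.array([(wrapped_phase + 2.0 * np.pi * n) / freq for n in n_range])

def unwrap_with_prediction(wrapped_phase, freq, predicted_shift):
    # Step 2) of the coarse-to-fine procedure: among the candidates, keep the
    # shift closest to the prediction interpolated from coarser scales.
    cands = candidate_shifts(wrapped_phase, freq)
    return cands[np.argmin(np.abs(cands - predicted_shift))]

# Illustrative numbers (assumed): at a fine scale the center frequency is
# 0.9 rad/pixel and the true shift is 8.5 pixels, so the phase wraps; a coarse
# scale prediction of about 8 pixels resolves the ambiguity.
freq, true_shift = 0.9, 8.5
wrapped = np.angle(np.exp(1j * freq * true_shift))
print(unwrap_with_prediction(wrapped, freq, predicted_shift=8.0))   # ~8.5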
We iterate the above process until a fine enough scale (e.g., ), since estimates typically become unreliable at this scale and below. The QWT coefficients for the small dyadic blocks have small magnitudes, and so their phase angles are very sensitive to noise. We can improve upon the basic iterative algorithm by fusing estimates across subbands and scales. First, with proper interpolation, we can average over estimates from all scales containing the same image block. Second, we can average estimates from the three QWT subbands for the same block to yield more accurate estimates, but we need to discard some unreliable subband estimates (for example, horizontal disparity in the horizontal subband and in the vertical subband). We incorporate these subband/scale averaging steps into each iteration of Steps 1) 4). 3 Fig. 10 illustrates the result of our QWT phase-based disparity estimation scheme for two frames from the rotating Rubik s cube video sequence [25]. This is an interesting sequence, because a rotation cannot be captured by a single global translation but can be closely approximated by local translations. The arrows indicate both the directions and magnitudes of the local shifts, with the magnitudes stretched for better visibility. We can clearly see the rotation of the Rubik s cube on the circular stage, with larger translations closer to the 3 Matlab code available at

10 1078 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 Fig. 10. Multiscale QWT phase-based disparity estimation results. (a), (b) Reference and target images from the Rubik s cube image sequence [25]. (c) Disparity estimates between two images in the sequence, shown as arrows overlaid on top of the reference image (zoomed in for better visualization, arrow lengths proportional to magnitude of disparity). viewer (bottom of the image) and smaller translations further from the viewer (top of the image). In our experiments, we obtained the most robust estimations by averaging over both scales and subbands. Fig. 11 demonstrates the multiscale nature of our disparity estimation approach on an image sequence of a living heart [40]. The presence of sharp edges plus the smooth cardiac motion in these images is well-matched by our wavelet-based analysis. Our algorithm averages over several scales to obtain motion fields at various levels of detail. Fig. 11(c) and (e) clearly show the coarse-scale contraction and relaxation motions of the heart, while Fig. 11(d) and (f) displays more detailed movements of the heart muscles, arteries, and blood flow (in particular, see the arrows toward left region of the heart near the artery). In addition to visualizing the changes from one image to another, we can use our algorithm as the feature matching step of an image registration algorithm, using the disparity information to build a warping function to align the images. One important note is that our QWT-based method is region-based in that it does not require any detection of feature points or other image landmarks. Traditional region-based feature matching methods, which estimate the spatial correlation (or correspondence) between two images, include block-matching and Fourier methods. For comparison, we compare our approach to the exhaustive search (ES) block matching technique, which is very computationally demanding but offers the best performance among all general block-matching techniques. We also compare to a Fourier sub-pixel motion estimation method Fig. 11. Multiscale QWT phase-based disparity estimation results for the heart image sequence. (a), (b) Reference and target heart images from two frames during heart contraction (systole). Estimation results show coarse-scale and fine-scale detailed motion of the heart and blood flow during the (c) (d) contraction and (e) (f) relaxation (systole and diastole) phases of the cardiac cycle, illustrating the multiscale nature of our algorithm. (a) Reference image (heart) during contraction; (b) target image (heart) during contraction; (c) coarse-scale estimates (during contraction); (d) fine-scale estimates (during contraction); (e) coarse-scale estimates (during relaxation); (f) fine-scale estimates (during relaxation). known as gradient correlation (GC), which has been shown to have better PSNR performance than other recent Fourier methods [30]. As a performance measure, we use the peak signal-to-noise ratio (PSNR) between the motion compensated image and the target image, which is given by (31) where is the number of image pixels. The motion compensated image is obtained by shifting each image block in the reference image according to the estimated motion vectors. Fig. 12 compares the results for three image sequences: the Rubik and Taxi sequences commonly used in the optical flow literature, and the Heart sequence from

11 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1079 Fig. 12. Comparison of multiscale QWT phase-based disparity estimation with two motion estimation algorithms, gradient correlation (GC) [30] and exhaustive search (ES). The performance measure is PSNR (in decibels) between the motion-compensated image and the target image of three test image sequences ( Rubik, Heart, and Taxi ). (a) Frame-by-frame PSNR performance comparison in the Rubik sequence. (b) Table of average PSNR performance (over all frames) for each test sequence. The multiscale QWT phase-based method demonstrates the best performance among the three test algorithms for the Rubik sequence and shows comparable performance to the other algorithms for the Heart and Taxi sequences. Last row of table shows the computational complexity of each algorithm. Fig. 11. Fig. 12(a) demonstrates the superior performance of our QWT phase-based algorithm over the other algorithms for the Rubik sequence, which has piecewise-smooth image frames and a smooth underlying disparity flow. While its PSNR performance is relatively far from ES for the Heart sequence, we note that the QWT phase-based approach provides a motion field that is more useful for patient monitoring and diagnosis. For the Taxi sequence, which contain discontinuities in their underlying flows, the QWT phase-based algorithm sports comparable performance [see the table in Fig. 12(b)]. Since the multiscale averaging step in our algorithm tends to smooth out the estimated flow, it should not be expected to perform as well for discontinuous motions fields of rigid objects moving past each other. Additional advantages of our QWT-based algorithm include its speed (linear computational complexity) and sub-pixel estimation accuracy. For an -pixel image, our algorithm is more efficient than the FFT-based GC and significantly faster than ES, which can take up to computations with the search parameter on the order of. General block-matching techniques such as ES can only decipher disparities in an integer number of pixels. On the other hand, our QWT-based algorithm can achieve sub-pixel estimation and demonstrates greater accuracy for the Rubik sequence than existing phase-based sub-pixel estimation methods such as GC. Besides gradient correlation, there exist other phase-based algorithms for disparity estimation and image registration [25], [31], [41] [44]. These approaches use phase as a feature map, where the phase function maps 2-D, -coordinates to phase angles. They assume the phase function to stay constant upon a shift from the reference image to the target image; that is, where is the phase function for the reference image and for the target image. Then, the disparity estimation problem is simplified to calculating the optical flow for these phase functions [41], [45]. In contrast, our algorithm is entirely based on the multiscale dual-tree QWT and its shift theorem. Our approach is similar to Bayro-Corrochano s QWT disparity estimation algorithm in [31] in its use of quaternion phase angles. However, the latter approach requires the design of a special filter to compute the phase derivative function in advance, while our approach need only estimate the local frequencies. Our implementation also uses a dual-tree filterbank, as compared to the quaternion wavelet pyramid of Gabor filters in [31]. 
Provided a continuous underlying disparity flow, our algorithm yields a denser and more accurate disparity map, even for smooth regions within an image. Incorporating an optimization procedure such as in [44] or a statistical model into our current algorithm can further improve estimation accuracy, particularly for blocks with phase singularity, but requires extra computation time. Kingsbury et al. have developed a multiscale displacement estimation algorithm based on the 2-D CWT [41], [42]. Their approach combines information from all six CWT subbands in an optimization framework based on the optical flow assumptions. In addition to disparity estimation, they simultaneously register the target image to the reference image. Both their CWT method and our QWT method are multiscale and wavelet-based and, thus, in general, best for smooth underlying disparity flows. However, our QWT algorithm is much simpler and easier to use, because it does not involve the tuning of several parameters for the iterative optimization procedures as in the CWT algorithm. While our method estimates local disparities without warping the image, we can apply any standard warping procedure to easily register the two images from the estimated disparities. Thanks to the ability of the QWT to encode location information in absolute horizontal/vertical coordinates, we can easily combine the QWT subband estimates to yield more accurate flow estimation results. Combining subband location information in the 2-D CWT is more complicated, since each subband encodes the disparities using complex phase angles in a reference frame rotated from other subbands. VI. CONCLUSION We have introduced a new 2-D multiscale wavelet representation, the dual-tree QWT, that is particularly efficient for coherent processing of relative location information in images. This tight-frame image expansion generalizes complex wavelets to higher dimensions and inspires new processing and analysis methods for wavelet phase. Our development of the dual-tree QWT is based on an alternative definition of the 2-D HT and 2-D analytic signal and on quaternion algebra. The resulting quaternion wavelets have three phase angles; two of them encode phase shifts in an absolute horizontal/vertical coordinate system while the third

12 1080 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 7, JULY 2008 encodes textural information. The QWT s approximate shift theorem enables efficient and easy-to-use analysis of the phase behavior around edge regions. We have developed a novel multiscale phase-based disparity estimation scheme. Through efficient combination of disparity estimates across scale and wavelet subbands, our algorithm clearly demonstrates the advantages of coherent processing in this new QWT domain. Inherited from its complex counterpart, the QWT also features near shift-invariance and linear computational complexity through its dual-tree implementation. Beyond 2-D, the generalization of the Hilbert transform to -D signals using hypercomplex numbers can be used to develop higher dimensional wavelet transforms suitable for signals containing low-dimensional manifold structures [46]. The QWT developed here could play an interesting role in the analysis of ( )-D manifold singularities in -D space. This efficient hypercomplex wavelet representation could bring us new ways to solve high-dimensional signal compression and processing problems. quadrature components of in Fig. 6, has support only on a single quadrant. The same also holds for the quaternion wavelets in other subbands and scales. B) Proof of QFT Plancharel Theorem: Starting from the space domain inner product in (23), we have (35), shown at the bottom of the page. One can also uses the QFT convolution theorem (Theorem 2.6) in [25] for this proof. The integrand on the right hand side of (23) is the QFT of the function where denotes 2-D convolution. C) QFT of a Step Edge: First, we express the step edge in Fig. 8(a) as a 2-D separable function (a constant function along the -direction multiplied by a 1-D step function along the -direction). The QFT of such a function is. Then, applying the QFT affine theorem ([25, Theorem 2.12]) with the transformation matrix involving rotation and using the QFT shift theorem with offset satisfying (27) yield the QFT of the step edge APPENDIX A) Single Quadrant QFT of a QWT Basis Function: For a real-valued signal with QFT, we can derive the following relationships from (16): (32) (33) (34) We begin by taking the QFT of (19). Equations (32) (34) apply to the second to fourth components on the right hand side of (19) respectively because are real-valued signals. Given the QFT relationship of the four (36) D) QWT Phase Angles for a Step Edge: This calculation combines the results from (23) and (36). Let be the integrand of the inner product formula in (23) in the QFT domain. The integration limit only involves because of the single-quadrant support of the QWT basis. Consider the special case when the edge signal has zero offset. When, its QFT component in is where is the component involving the -ridge in (36). Let the QFT of a QWT basis,,be. Substituting and into (23), the QWT coefficient can be expressed (35)

13 CHAN et al.: COHERENT MULTISCALE IMAGE PROCESSING USING DUAL-TREE QUATERNION WAVELETS 1081 as In spite of leakage, the relationship between QWT phase angles and edge offset still holds as in the case without leakage, i.e., varies linearly with edge offset. Since is not necessarily for the QFT of the edge signal, i.e., the singular case no longer holds, there exists unique expressions for both and. However, our derivations show that the difference of and is the same as in the ideal case (without leakage) as in (39). (37) where and are the integrals involving,, etc. After normalizing (37) by its magnitude, we compute the third phase angle as (38) When, in, which gives. Moreover, this special quaternion in (37) with is in the singular case; as described in [25], its other phase angles are nonunique but the difference is unique with the following expression: (39) whose value largely depends on the QFT spectrum of the basis and on the edge orientation and offset. E) Theoretical Analysis of Leakage Effect: Consider the inner product of an edge signal and a QWT basis in both the main quadrant and the leakage quadrant when. 4 We can express this inner product in in a similar fashion as in (37) and yield (40) Again, and are integrals involving,, etc. Combining (37) and (40) gives the QWT coefficient, i.e., the inner product between the nonideal basis and the edge signal whose magnitude is angle can be expressed as (41). Its third phase (42) 4 The leakage quadrant can be either S or S depending on the spectral support of the basis element w(x). ACKNOWLEDGMENT While this paper was in final preparation, H. Choi passed away. We will forever remember his broad vision, his keen insights, and our lively discussions. His legacy will live on through his many contributions to the signal processing community. We thank I. Selesnick for many discussions and his Matlab dual-tree CWT code, N. Kingsbury for discussions on CWT-based image registration, T. Gautama et al. for providing us with their image sequences, J. Barron et al. for making their image sequences publicly available, and V. Argyriou et al. who generously shared their code for gradient correlation motion estimation for video sequences. We would also like to the reviewers for their insightful comments and for providing a more concise derivation for the single-quadrant QFT of a QWT basis in Appendix A and a more general Plancharel theorem in Appendix B. REFERENCES [1] J. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Process., vol. 41, no. 4, pp , Apr [2] M. Crouse, R. Nowak, and R. Baraniuk, Wavelet-based statistical signal processing using hidden markov models, IEEE Trans. Signal Process., vol. 46, no. 4, pp , Apr [3] H. Choi, J. Romberg, R. Baraniuk, and N. Kingsbury, Hidden markov tree modeling of complex wavelet transforms, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Jun. 2000, pp [4] L. Sendur and I. W. Selesnick, Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency, IEEE Trans. Signal Process., vol. 50, no. 11, pp , Nov [5] J. Weng, Image matching using the windowed Fourier phase, Int. J. Comput. Vis., vol. 11, no. 3, pp , [6] B. Zitova and J. Flusser, Image registration methods: A survey, Image Vis. Comput., vol. 21, pp , [7] D. J. Fleet and A. D. Jepson, Computation of component image velocity from local phase information, Int. J. Comput. Vis., vol. 5, no. 1, pp , [8] A. Oppenheim and J. Lim, The importance of phase in signals, Proc. 
ACKNOWLEDGMENT

While this paper was in final preparation, H. Choi passed away. We will forever remember his broad vision, his keen insights, and our lively discussions. His legacy will live on through his many contributions to the signal processing community.

We thank I. Selesnick for many discussions and his Matlab dual-tree CWT code, N. Kingsbury for discussions on CWT-based image registration, T. Gautama et al. for providing us with their image sequences, J. Barron et al. for making their image sequences publicly available, and V. Argyriou et al., who generously shared their code for gradient correlation motion estimation for video sequences. We would also like to thank the reviewers for their insightful comments and for providing a more concise derivation for the single-quadrant QFT of a QWT basis in Appendix A and a more general Plancherel theorem in Appendix B.

REFERENCES
[1] J. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Process., vol. 41, no. 4, pp , Apr
[2] M. Crouse, R. Nowak, and R. Baraniuk, Wavelet-based statistical signal processing using hidden Markov models, IEEE Trans. Signal Process., vol. 46, no. 4, pp , Apr
[3] H. Choi, J. Romberg, R. Baraniuk, and N. Kingsbury, Hidden Markov tree modeling of complex wavelet transforms, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Jun. 2000, pp
[4] L. Sendur and I. W. Selesnick, Bivariate shrinkage functions for wavelet-based denoising exploiting interscale dependency, IEEE Trans. Signal Process., vol. 50, no. 11, pp , Nov
[5] J. Weng, Image matching using the windowed Fourier phase, Int. J. Comput. Vis., vol. 11, no. 3, pp ,
[6] B. Zitova and J. Flusser, Image registration methods: A survey, Image Vis. Comput., vol. 21, pp ,
[7] D. J. Fleet and A. D. Jepson, Computation of component image velocity from local phase information, Int. J. Comput. Vis., vol. 5, no. 1, pp ,
[8] A. Oppenheim and J. Lim, The importance of phase in signals, Proc. IEEE, vol. 69, no. 5, pp , May
[9] D. L. Donoho, De-noising by soft-thresholding, IEEE Trans. Inf. Theory, vol. 41, no. 3, pp , Mar
[10] A. Grossmann, R. Kronland-Martinet, and J. Morlet, Reading and understanding continuous wavelet transforms, in Wavelets: Time-Frequency Methods and Phase Space. Berlin, Germany: Springer-Verlag, 1989, pp
[11] J. M. Lina and M. Mayrand, Complex Daubechies wavelets, J. Appl. Harmon. Anal., vol. 2, pp ,
[12] B. Belzer, J.-M. Lina, and J. Villasenor, Complex, linear-phase filters for efficient image coding, IEEE Trans. Signal Process., vol. 43, no. 10, pp , Oct
[13] H. F. Ates and M. T. Orchard, A nonlinear image representation in wavelet domain using complex signals with single quadrant spectrum, presented at the Asilomar Conf. Signals, Systems, and Computers,
[14] R. v. Spaendonck, T. Blu, R. Baraniuk, and M. Vetterli, Orthogonal Hilbert transform filter banks and wavelets, in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Apr. 6-10, 2003, vol. 6, pp
[15] M. Wakin, M. Orchard, R. G. Baraniuk, and V. Chandrasekaran, Phase and magnitude perceptual sensitivities in nonredundant complex wavelet representations, presented at the Asilomar Conf. Signals, Systems, and Computers,
[16] F. Fernandes, M. Wakin, and R. G. Baraniuk, Non-redundant, linear-phase, semi-orthogonal, directional complex wavelets, presented at the IEEE Int. Conf. Acoust., Speech, Signal Processing, Montreal, QC, Canada, May
[17] N. G. Kingsbury, Complex wavelets for shift invariant analysis and filtering of signals, J. Appl. Harmon. Anal., vol. 10, pp , May
[18] J. Romberg, H. Choi, and R. Baraniuk, Multiscale edge grammars for complex wavelet transforms, in Proc. Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001, pp
[19] S. L. Hahn, Multidimensional complex signals with single-orthant spectra, Proc. IEEE, vol. 80, no. 8, pp , Aug
[20] I. W. Selesnick and K. Y. Li, Video denoising using 2-D and 3-D dual-tree complex wavelet transforms, in Proc. SPIE Wavelets X, San Diego, CA, Aug. 4-8, 2003, vol. 76.
[21] J. Romberg, M. Wakin, H. Choi, and R. Baraniuk, A geometric hidden Markov tree wavelet model, in Proc. SPIE Wavelets X, San Diego, CA, Aug.
[22] J. Romberg, H. Choi, R. Baraniuk, and N. Kingsbury, A hidden Markov tree model for the complex wavelet transform, Tech. Rep., Dept. Elect. Comput. Eng., Rice Univ., Houston, TX,
[23] A. F. Abdelnour and I. W. Selesnick, Design of 2-band orthogonal near-symmetric CQF, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, May 2001, pp
[24] N. G. Kingsbury, A dual-tree complex wavelet transform with improved orthogonality and symmetry properties, in Proc. IEEE Int. Conf. Image Processing, Vancouver, BC, Canada, Sep. 2000, vol. 2, pp
[25] T. Bülow, Hypercomplex spectral signal representations for the processing and analysis of images, Ph.D. dissertation, Christian Albrechts Univ., Kiel, Germany,
[26] P. Perona, Steerable-scalable kernels for edge detection and junction analysis, in Proc. Eur. Conf. Computer Vision, 1992, pp
[27] T. Bülow and G. Sommer, A novel approach to the 2-D analytic signal, presented at the CAIP, Ljubljana, Slovenia,
[28] W. L. Chan, H. Choi, and R. G. Baraniuk, Quaternion wavelets for image analysis and processing, in Proc. IEEE Int. Conf. Image Processing, Singapore, Oct. 2004, vol. 5, pp
[29] W. L. Chan, H. Choi, and R. G. Baraniuk, Coherent image processing using quaternion wavelets, presented at the SPIE Wavelets XI, 2005, vol.
[30] V. Argyriou and T. Vlachos, Using gradient correlation for sub-pixel motion estimation of video sequences, in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, May 2004, pp
[31] E. Bayro-Corrochano, The theory and use of the quaternion wavelet transform, J. Math. Imag. Vis., vol. 24, no. 1, pp ,
[32] H. Ates, Modeling location information for wavelet image coding, Ph.D. dissertation, Dept. Elect. Eng., Princeton Univ., Princeton, NJ,
[33] G. Hua, Noncoherent image denoising, M.S. thesis, Dept. Elect. Comput. Eng., Rice Univ., Houston, TX,
[34] M. Felsberg and G. Sommer, The monogenic signal, IEEE Trans. Signal Process., vol. 49, no. 12, pp , Dec
[35] M. Felsberg, Low-level image processing with the structure multivector, Ph.D. dissertation, Christian Albrechts Univ., Kiel, Germany,
[36] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall,
[37] I. Selesnick, R. Baraniuk, and N. Kingsbury, The dual-tree complex wavelet transform, IEEE Signal Process. Mag., vol. 22, no. 6, pp , Nov
[38] I. L. Kantor and A. S. Solodovnikov, Hypercomplex Numbers. New York: Springer-Verlag,
[39] M. Wakin, J. Romberg, H. Choi, and R. Baraniuk, Rate-distortion optimized image compression using wedgelets, presented at the IEEE Int. Conf. Image Processing, Sep.
[40] [Online]. Available:
[41] M. Hemmendroff, M. Anderson, T. Kronander, and H. Knutsson, Phase-based multidimensional volume registration, IEEE Trans. Med. Imag., vol. 21, no. 12, pp , Dec
[42] N. Kingsbury, Dual Tree Complex Wavelets (HASSIP Workshop), 2004 [Online]. Available:
[43] M. Felsberg, Optical flow estimation from monogenic phase, presented at the 1st Int. Workshop Complex Motion, Günzburg, Germany, Oct. 2004, vol. LNCS
[44] J. Zhou, Y. Xu, and X. K. Yang, Quaternion wavelet phase based stereo matching for uncalibrated images, Pattern Recognit. Lett., vol. 28, pp , Mar
[45] B. K. Horn and B. G. Schunck, Determining optical flow, Artif. Intell., vol. 17, pp ,
[46] W. L. Chan, H. Choi, and R. Baraniuk, Directional hypercomplex wavelets for multidimensional signal analysis and processing, in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, May 2004, pp

Wai Lam Chan (S'03) received the B.S. degree in electrical and computer engineering and computer science from Duke University, Durham, NC, in 2002 (summa cum laude), and the M.S. degree in electrical and computer engineering from Rice University, Houston, TX. He is currently a graduate student in the Department of Electrical and Computer Engineering, Rice University. He held a visiting research position in seismic imaging at ConocoPhillips. His research interests include THz imaging, compressed sensing, and hypercomplex wavelets with applications to image processing. Mr. Chan received the Presidential Fellowship and the Texas Instruments Fellowship from Rice University and is a member of Tau Beta Pi and Eta Kappa Nu.

Hyeokho Choi received the B.S. degree in control and instrumentation engineering from Seoul National University, Seoul, Korea, in 1991, and the M.S. and Ph.D. degrees in electrical engineering from the University of Illinois at Urbana-Champaign, Urbana, in 1993 and 1998, respectively. In 1998, he joined Rice University, Houston, TX, as a Research Associate. From 2000 to 2005, he was a Research Professor at Rice University. In Fall 2005, he joined North Carolina State University, Raleigh, as an Assistant Professor in the Electrical and Computer Engineering Department. His research interests were in the areas of signal and image processing and computer vision. Dr. Choi received the Presidential Honor at Seoul National University at graduation in 1991 and the Texas Instruments Postdoctoral Fellowship. He was an Associate Editor for the IEEE TRANSACTIONS ON IMAGE PROCESSING.

Richard G. Baraniuk (S'85-M'93-SM'98-F'01) received the B.Sc. degree from the University of Manitoba, Manitoba, Canada, in 1987, the M.Sc. degree from the University of Wisconsin, Madison, in 1988, and the Ph.D. degree from the University of Illinois at Urbana-Champaign, Urbana, in 1992, all in electrical engineering. After spending 1992 and 1993 with the Ecole Normale Supérieure, Lyon, France, he joined Rice University, Houston, TX, where he is currently the Victor E. Cameron Professor of Electrical and Computer Engineering and Founder of the Connexions Project.
He spent sabbaticals at the Ecole Nationale Supérieure de Télécommunications, Paris, France, in 2001, and at the Ecole Polytechnique Fédérale de Lausanne, Switzerland. His research interests in signal and image processing include wavelets and multiscale analysis, statistical modeling, and sensor networks. Dr. Baraniuk received a NATO postdoctoral fellowship from NSERC in 1992, the National Young Investigator award from the National Science Foundation in 1994, a Young Investigator Award from the Office of Naval Research in 1995, the Rosenbaum Fellowship from the Isaac Newton Institute of Cambridge University in 1998, the C. Holmes MacDonald National Outstanding Teaching Award from Eta Kappa Nu in 1999, the Charles Duncan Junior Faculty Achievement Award from Rice University in 2000, the ECE Young Alumni Achievement Award from the University of Illinois in 2000, and the George R. Brown Award for Superior Teaching at Rice University in 2001. He was elected a Plus Member of AAA in 1986.
