1860 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 14, NO. 11, NOVEMBER 2005

Super-Resolution Reconstruction of Hyperspectral Images

Toygar Akgun, Student Member, IEEE, Yucel Altunbasak, Senior Member, IEEE, and Russell M. Mersereau, Fellow, IEEE

Abstract—Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agricultural, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate-band super resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions.

Index Terms—Hyperspectral, image reconstruction, information fusion, resolution enhancement, spectral, super resolution.

I. INTRODUCTION

WITH the developments in space technology during the late 1950s that made artificial
satellites possible came the possibility of making visual observations of Earth from space to obtain useful information. This is now a mature field with a significant impact in civilian and military applications, including civil engineering, mining and petroleum exploration, military information gathering, etc. One of the most expensive parameters in a space imaging system is the spatial resolution. Unfortunately, it is also one of the hardest to improve. There are many factors (imperfect imaging optics, atmospheric scattering, secondary illumination effects, and sensor noise, to name a few) that degrade the acquired image quality and limit the performance of algorithms that use these images as input. In many situations, modifying the imaging optics or the sensor array is not an available option, thus highlighting a clear need for post-processing. Since the spatial resolution is a key parameter in many applications related to space imagery, it is obvious that any improvement here is important. To improve the spatial resolution of hyperspectral images, we can make use of super-resolution techniques together with the information at different wavelengths of the sensed illuminance that is available with hyperspectral sensors.

A comprehensive background on super resolution can be found in [1]–[8]. In their early work on the subject, Tsai and Huang [1] disregarded the blur in the imaging process and carried out a frequency-domain analysis of the super-resolution problem. They showed that any effective super-resolution method requires frequency aliasing to be present in the low-resolution (low-res) observation (source) images. A Bayesian framework has been used to formulate MAP (maximum a posteriori) estimators for the high-resolution (hi-res) target image. A key advantage of using the Bayesian framework is the ease of incorporating regularization constraints (like smoothness) into the super-resolution process [4]. In [9], Schultz and Stevenson, and in [3], Stevenson and Schmitz, described a MAP estimator with a Huber-MRF (Markov random field) prior model to preserve discontinuities and solve the blurring problem introduced by imposing smoothness. In the projections onto convex sets (POCS) based super-resolution methods [2], [5], [6], an initial estimate of the hi-res target image is updated iteratively based on the error measured between the observed and synthetic low-res images, the latter obtained by simulating the imaging process with the current estimate as the input.

Examples of somewhat related ideas can be found in the hyperspectral imaging field. In [10] and [11], Zhukov et al. proposed methods for multiresolution image fusion in the context of the hyperspectral unmixing problem. Winter [12] presented an alternative technique to combine a hi-res panchromatic image with a lower resolution hyperspectral image to obtain a product that has the spectral properties of the hyperspectral image at a higher spatial resolution. Zomet and Peleg [13] used the correlations between the three (RGB) color channels to increase the resolution of a single color image. Finally, Gotoh and Okutomi [14] proposed a super-resolution method aimed at images obtained by a single CCD with a color filter array. Their method is based on a generalized formulation of super resolution which performs both resolution enhancement and demosaicking simultaneously and is capable of producing a hi-res color image directly from color mosaic images obtained by a single CCD with a color filter array.

Manuscript received September 23, 2003; revised August 25, 2004. This work was supported in part by the Office of Naval Research (ONR) under Award N and by the National Science Foundation under Award CCR. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Nasser Kehtarnavaz. The authors are with the Center for Signal and Image Processing, Georgia Institute of Technology, Atlanta, GA USA (e-mail: takgun@ece.gatech.edu; yucel@ece.gatech.edu; rmm@ece.gatech.edu). Digital Object Identifier /TIP

In this paper, we propose a novel hyperspectral image acquisition model that enables us to represent hyperspectral observations from different wavelengths as weighted linear combinations of a small number of aliased and blurred basis image planes whose pixel values correspond to the principal component coefficients. We proceed by formulating the reconstruction process as the inverse problem of finding a hi-res target hyperspectral image that agrees best with the observations under the proposed model. Then, a set-theoretic method is used to solve the inverse problem. Finally, we present results obtained from experiments carried out on two data sets, namely a 31-band hyperspectral image of a natural scene captured under a controlled-illumination laboratory environment and a 224-band airborne visible/infrared imaging spectrometer (AVIRIS) image.

II. HYPERSPECTRAL IMAGE ACQUISITION MODEL

This section provides some background information about the imaging process being modeled. First, the hyperspectral image acquisition process will be briefly discussed from a physical point of view, together with those atmospheric, environmental, and device-dependent effects that influence the process. We will then describe our hyperspectral image acquisition model, which interprets source images (also referred to as observations) as aliased and optically blurred linear combinations of the target image's¹ basis image planes. As mentioned in Section I, the pixel values of these basis image planes correspond to the principal component magnitudes. Then, a mathematical formulation of the proposed model will be provided. In the next section, we will address the inverse problem and present a back-projection-based iterative solution method. Possible simplifications for a single observation and for multiple observations with translational motion will be studied, and a useful interpretation of the overall imaging process will be presented.

A.
Hyperspectral Imaging Background

To understand a hyperspectral image requires some background information. This section introduces this background without getting into all of the fine details of the physical phenomena that lie behind it. For our purposes, this summary is satisfactory, but the interested reader can refer to [15] and [16] for a more complete treatment of hyperspectral images and their applications.

¹ Please note that we use the term target image to denote the hi-res image cube which we are trying to reconstruct. In a similar fashion, the term source image denotes the low-res observation which is available to us.

Any physical object in a scene reflects, absorbs, and emits electromagnetic radiation. The object's molecular composition and shape affect the way this interaction occurs. Using this phenomenon to gather information about an object or scene without coming into physical contact with it is called electrooptical remote sensing. If the electromagnetic radiation arriving at the sensor array is measured at a sufficiently high number of wavelengths for every pixel, the resulting spectrum can be used to extract information that cannot be extracted from images captured by conventional devices that do not provide much information about the spectral dimension. Topics involved with the measurement, analysis, and interpretation of such spectra are treated in the field of spectroscopy. Another related field, imaging spectroscopy, combines spectroscopy with methods to acquire spectral information. Hyperspectral sensors are a class of imaging spectroscopy sensors for which the sensed waveband is divided into hundreds of contiguous narrow frequency bands. As the name suggests, hyperspectral sensors differ from their predecessors, the multispectral sensors, in that the number of bands that are separately imaged is much higher (for example, the AVIRIS, from NASA/JPL, has 224 bands). Hyperspectral images are the name given to the multichannel images captured by hyperspectral
sensor arrays. They are the data type most often obtained for space imagery applications such as mineral and oil exploration, civil engineering, military applications (mine detection, information gathering), etc. For a given ground pixel, whose dimensions can be in the range of tens of centimeters to tens of meters depending on the spatial resolution and altitude of the imaging device, the radiance observed at any particular wavelength is determined, to first order, by the reflectance of the matter and the solar illumination at that wavelength. However, there are many important secondary effects that limit the measurement, including scattering and absorption of the reflected radiance by the atmosphere, spatial and spectral aberrations in the sensors, imperfect optics in the imaging device, secondary illumination from adjacent objects, finite sensor dimensions, and the viewing angle of the sensor array. Characterizing these effects, with the ultimate goal of developing compensation techniques to limit their undesired influence on the image data, is a challenging problem and an active research area.

There are many different models that describe hyperspectral images. Statistical models [17]–[19] typically use some kind of Markov random field (for example, Gauss-Markov random fields) and are capable of capturing the spatially and spectrally correlated nature of hyperspectral data. Deterministic models, on the other hand, are computationally more attractive and can easily be structured under the guidance of a physical model of the imaging process [20], [21]. Deterministic models can be further divided into two subgroups, namely linear deterministic models and nonlinear deterministic models [22]. Finally, there are approaches that combine statistical and deterministic models in an effort to construct models that have the advantages of both [23].

An integral part of our approach is a model of the hyperspectral image acquisition process. We require our model to be
complex enough to capture the main characteristics of the imaging process, while keeping it as simple as possible so that its computational complexity stays within practical limits. Although the proposed model makes no specific assumptions about the imaging device used and incorporates most of the effects that influence the spatial and spectral resolution of the observed scene, it excludes the mainly physical effects related to the sensor characteristics and to secondary illumination sources. In this paper, we assume that sensor calibration and atmospheric compensation have already been applied, and we focus on the image processing aspects of the acquisition process.
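Before formalizing the model, the forward acquisition chain it describes (spectral mixing of a few basis planes, a shared optical/sensor blur, sampling on a low-res grid, and additive noise) can be sketched in code. This is an illustrative simulation under our own naming and assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def simulate_band(basis_planes, weights, psf, factor, noise_sigma=0.0, rng=None):
    """Simulate one low-res source band from hi-res basis image planes.

    basis_planes : (P, H, W) array of hi-res coefficient planes
    weights      : (P,) spectral weights for this band
    psf          : 2-D kernel modeling optics + sensor-integration blur
    factor       : integer downsampling factor (hi-res -> low-res grid)
    """
    rng = rng or np.random.default_rng(0)
    # Spectral mixing: weighted linear combination of the basis planes
    hi_res = np.tensordot(weights, basis_planes, axes=1)
    # Spatially invariant blur, assumed shared by all basis planes
    blurred = convolve(hi_res, psf, mode="reflect")
    # Sampling on the low-res grid
    low_res = blurred[::factor, ::factor]
    # Additive noise lumping sensor, sampling, and quantization effects
    return low_res + noise_sigma * rng.standard_normal(low_res.shape)
```

With constant basis planes, the simulated band is simply the weighted sum of the plane values, which gives a quick sanity check on the mixing step.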

TABLE I. LIST OF TERMS.

B. Acquisition Model

In the following sections, we model the image acquisition, spatial filtering, spectral filtering, and sampling. Because the ideas we are trying to convey are intricate and the path we take to do so is mathematically rigorous, one can easily get confused by the notation used. Therefore, we begin with a summary of the mathematical notation that will be used throughout the paper. The hyperspectral image data is best represented as an -dimensional vector for each pixel, where is the number of spectral bands. The images are assumed to be , so that the hyperspectral data forms an data cube. Following this convention, we let denote the -dimensional pixel value at location . We use to denote the th spatially continuous hi-res (target) image plane and for the th spatially discrete hi-res image plane. Similarly, denotes the th continuous low-res (source) image plane, and denotes the th discrete low-res image plane. Any pixel denoted by the letter , no matter what its subscript or indices may be, is a target image pixel; the letter similarly always denotes a source image pixel. Furthermore, at some point it will be necessary to differentiate between hi-res and low-res grid pixels; for this purpose, the hi-res grid pixels are indexed with and the low-res grid pixels are indexed with . A complete list of terms and their definitions is given in Table I.

The block diagram shown in Fig. 1 depicts the system to be modeled. The ideal continuous-space and continuous-spectrum image signal, denoted by , represents the actual input to the imaging device; in this notation, is the observation index. Our main assumption in super-resolution reconstruction is that we have access to multiple observations of the scene to which we wish to apply super resolution. These observations can be hyperspectral images captured at different times² by a single imaging device or simultaneously from multiple imaging
devices. Super-resolution reconstruction then fuses the information present across these observations to obtain a higher resolution image of the target scene [1]–[9]. Ideally, we would like to reconstruct from the available observations, but is continuous in all dimensions, and there is no way we can implement a solution to this problem using digital hardware. We will deal with this limitation in two steps. First, we will consider the spectral dimension, where we will make use of a well-known and widely used property of hyperspectral image data. Then, we will look into the spatial dimension.

1) Discretizing the Target Image: It is a well-known fact that the spectral reflectance of natural images can be accurately modeled using linear combinations of a relatively small number (generally around seven [24]) of reflectance basis functions. These illuminant-independent orthonormal basis functions can be obtained by applying principal components analysis (PCA) to a large set of natural image reflectances and selecting the first principal components. If we denote the illuminant spectrum as , then one possible choice for a set of illuminant-dependent basis functions is . As a first step in our model, we will assume that is representable as a linear combination of these basis functions. That is, at every location, will be represented by a -dimensional vector, where the elements of this vector are the coefficients of the corresponding orthonormal basis functions. Therefore, the hi-res target image shown in Fig. 1 is a -dimensional vector at every pixel. Note that is not the number of spectral bands; it is the number of spectral basis functions. This assumption lets us represent an -dimensional signal in a -dimensional space (note that ) with negligible error. This greatly reduces the complexity of the reconstruction problem. Before starting to discuss the spatial domain, we would like to comment on the use of PCA in the context of resolution improvement and the spectral information fusion
aspect of the proposed technique. First, our main assumption in attempting to fuse information coming from multiple spectral bands is that the spectral signature of some target material we are interested in is present in several bands; no claims are made for spectral details that may be present in only a single frequency band of a single observation. Second, the choice of basis functions is application specific. If we are trying to improve the resolution of a specific material with a known spectral signature, then the training images can be chosen accordingly to yield basis vectors optimized for that specific material. Also, at the expense of increased computational load, the number of basis functions used to represent can be increased, and the representation error can be made arbitrarily small. Finally, the use of PCA to find the spectral basis functions is totally arbitrary. In fact, the basis functions may be calculated using a variety of approaches, including but not limited to convex-geometry-based approaches, noise-reduction-based approaches, etc. (see [25] for a detailed discussion of the available techniques).

² Please note that this is not an implicit assumption of the existence of a fictitious hyperspectral video signal. The observations can be captured at time instances which are separated by arbitrarily long time periods.
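For concreteness, the PCA step described in Section II-B1 can be sketched as follows. This is a minimal illustration with our own array names (`training_spectra`, `p`), not the authors' code; a practical implementation would also carry along the mean spectrum removed before the SVD.

```python
import numpy as np

def spectral_basis(training_spectra, p):
    """First p principal-component basis functions of a set of spectra.

    training_spectra : (N, L) array, one reflectance spectrum per row
    Returns a (p, L) array of orthonormal basis functions.
    """
    centered = training_spectra - training_spectra.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:p]

def project(spectrum, basis):
    """Coefficients of an L-dimensional spectrum in the p-dimensional basis."""
    return basis @ spectrum

def reconstruct(coeffs, basis):
    """Approximate spectrum from its p basis coefficients."""
    return basis.T @ coeffs
```

Representing each pixel by its `p` coefficients rather than its full spectrum is what collapses the reconstruction problem from hundreds of bands to a handful of basis planes.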

Fig. 1. Hyperspectral image acquisition model, by which a hypothesized hi-res discrete target hyperspectral image is used to produce a low-res source hyperspectral image.

To deal with the spatial domain, we hypothesize that for each of the spectral basis image planes there exists a corresponding discrete, hi-res target image plane, and we seek to reconstruct a target image from that signal. The main assumption here is that the spatially continuous signal is bandlimited (more details on the band-limitedness assumption will be given in the next section) and, therefore, could be reconstructed from the spatially discrete hi-res image through an ideal reconstruction filter.

2) Discrete-to-Continuous Conversion: The first step in the ideal reconstruction process is the conversion of the discrete signals into impulse trains. The following operations are performed on each of the target image planes. If we let denote the impulse array obtained from , then we can write (1). Substituting from (1), we get (2). Assuming convergence, we can exchange the order of summation and integration to write (3). Note that the spatial sampling frequency is normalized to the low-res grid, so that and show the increase in the spatial sampling density when we move from the low-res (source) image to the hi-res (target) image. In other words, if we assume that the sampling density in the low-res image is 1 per unit area, then the hi-res image has and samples per unit area in the horizontal and vertical directions, respectively. Under this normalization, our band-limitedness assumption requires the continuous signal to be bandlimited to the frequency range . We implicitly assume that the hi-res target image (and, hence, its reconstructed version) exists for all observations; therefore, in the following equations, the observation index is suppressed. Keeping this in mind, the convolution with the reconstruction filter takes the familiar form (4). If we include the suppressed observation index, (4) becomes (5).

3) Spectral Representation With Predetermined Basis Functions: We assume that the basis functions have been predetermined by applying PCA to appropriate training data and selecting the first principal components. As mentioned in Section II-B1, the use of PCA is arbitrary, and any of the methods mentioned in [25] can be used to obtain these basis functions. If we denote the continuous signal as , then we have (6), (7).

Noting that (6) applies to each of the target image planes, we can write (8).

Before we move on, we would like to point out a connection between this work and previous work on face super resolution by Gunturk et al. [26]. There are some major differences between these works, such as the fact that the previous work used spatial bases, whereas here the bases are spectral. Nevertheless, the undeniable similarities call for a comparison. In both methods, the low-dimensional space to which the unknown hi-res images are known to belong serves as an effective regularization. Furthermore, both methods take advantage of projecting the noise process onto a lower dimensional space, which in turn reduces its undesired effects on the reconstructed hi-res images.

4) Spatial Filtering: We use to denote the spatially invariant blur filter. This models the imperfect imaging optics (e.g., lens blur) and the unavoidable sensor-integration blur caused by the finite sensor area. In the following derivation, we assume that the blur filters for all the spectral basis functions are the same. This is justified by the spatial response functions supplied with the AVIRIS data and will lead to a relatively simple final relation between the hi-res target image and the low-res observations. Please note that the solution method that will be used to obtain the target image can handle different blur filters for every basis function with only minor modifications (more on this in Section III). The blur operation can be written as the convolution of the target image planes with the point spread function of the blur filter. Substituting from (8), the blurred signal can be written as (9), where the subscript means continuous and blurred.

We will use the motion mapping for relating the available observations to the reference observation (Fig. 2) [5]; it is defined as (10).

Fig. 2. Motion mapping M relates the available observations to the reference observation.

Let us assume that the pixel located at in observation corresponds to in observation . Then, by using the inverse of the mapping mentioned above, we can write in terms of : (11), where is the Jacobian of the motion mapping. To leave no room for misunderstanding, we note that the inverse mapping maps a given pixel in observation back to its location in observation ; that is, (12). If we define as in (13), then (14) can be written as (15). Again, assuming convergence, we can exchange the integration and summations to obtain (16). To get a simpler-looking expression for (16), we define (17),

which allows us to write (18).

5) Spectral Filtering (Band Selection, Atmospheric, and Illuminator-Based Effects on the Spectrum): The spectral response functions , where stands for the number of spectral bands in the source images, model the hyperspectral sensors' efficiency at different wavelengths. We assume that the input images are atmospherically corrected, which, in turn, eliminates the need for complex processing to invert the atmospheric effects on the spectrum: (19), where the second equality follows from the assumption that the integrals and summations converge, allowing us to change their order. If we denote the integral in brackets as , then we can write (20). From (20), we see that, within the limitations of our model, the low-res source images can be represented as linear combinations of the basis planes filtered by . The weights are obtained by separately applying the spectral filters to the basis functions. Fig. 3 demonstrates this for the weights of the th source plane with a fictitious spectrum.

Fig. 3. Spectral filtering: the ith spectral filter (solid line) is applied to all basis functions to produce the weights of the jth source plane, w.

6) Spatial Domain Sampling: Next, we must spatially discretize the images to make a practical implementation possible. This is done by sampling on a low-res grid, giving (21) or, in matrix form, (22), where we have made the definitions in (23) to simplify the expression.
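On a discrete wavelength grid, the spectral-filtering weights of Section II-B5 reduce to inner products between each band's spectral response and each basis function, i.e., a Riemann-sum approximation of the underlying integral. A small numpy sketch (the array names are ours, not the paper's):

```python
import numpy as np

def band_weights(responses, basis, dlam):
    """Approximate w[j, k]: integral of the j-th sensor spectral response
    times the k-th spectral basis function over wavelength.

    responses : (J, L) sensor spectral responses on a wavelength grid
    basis     : (P, L) spectral basis functions on the same grid
    dlam      : grid spacing, making the dot product a Riemann sum
    Returns a (J, P) weight matrix.
    """
    return responses @ basis.T * dlam
```

Row j of this matrix is then the spectral mixing applied to the basis planes when synthesizing low-res band j.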

Note that (for ) and , as defined above, are vectors obtained by cascading the corresponding elements in definition (23) one after another. Equation (21) shows the relationship between the low-res observations and the hi-res target image cube through the discrete, (spatially) shift-varying blur function.

7) Additive Noise: Finally, the additive noise models the total effect of all possible noise sources (unavoidable sensor noise, sampling noise, and the quantization noise introduced when the sampled pixel values are quantized) that exist throughout the whole acquisition process. The exact statistical nature of the noise process, which is of great importance for methods formulated in a Bayesian framework, depends on the specific application and the assumptions we are willing to make. A very popular characterization is to assume that is a zero-mean, wide-sense stationary Gaussian noise process.

III. INVERSE PROBLEM

Given the model presented in the previous sections, the inverse problem can be stated as finding the target image that agrees with the available source images. Here, "agrees" deserves some explanation: when we say the candidate target image is in agreement with the source images, we mean that if the linear, time- and space-varying (LTSV) filter in (20) is applied to the candidate target image, the resulting synthetic source image is close to the actual images captured by the imaging device under consideration. There exist many ways to solve this problem, each with its (dis)advantages. For example, we could try to minimize the squared error between the observed images and the synthetically produced source images by using well-studied least-squares methods. The drawback of this approach is that it requires the computation of the inverses of large matrices, which is in most cases very difficult. A preferable alternative is to use iterative set-theoretic methods [5], [6]. It can be shown that using a squared-error criterion together with a gradient-based iterative minimization method is completely equivalent to a version of the back-projection method [4].

In this work, we propose a POCS-based solution (see [27] and [28]) to the inverse problem addressed above. The POCS method requires a number of closed convex constraint sets to be defined. These constraint sets must be defined in a well-defined vector space and contain the hi-res target image. We define constraint sets (one for each observed band) at every low-res grid point where (21) is valid. A reconstructed hi-res image is a point in the intersection of these constraint sets and can be determined by successively projecting an initial estimate (which is usually chosen to be a bilinearly interpolated low-res image) onto the constraint sets. As mentioned in almost every work that applies a POCS-based reconstruction method, we require an accurate estimate of the motion field for this approach to work. Otherwise, the projection operators dictate irrelevant constraints on the pixels with inaccurate motion vectors, which results in a degradation in the image quality. Fortunately, for hyperspectral images, complex motion fields with high motion rates and frequent occlusion regions are quite rare due to the nature of the data.

As in [5] and [6], we start by defining the following closed, convex constraint sets for each low-res grid pixel: (24), where (25); here (26) is the residual signal associated with the th observation of the th spectral band. The quantities used in the definition of the constraint set reflect the statistical confidence with which the actual hi-res target (for ) is in (see [6]). We can determine the values of the s from the statistics of the noise process to guarantee that the actual hi-res target is an element of the constraint set with some predetermined statistical confidence. To determine the projection operator that projects the current estimate of the hi-res target (for ) onto , we start with (22). Combining and into a single matrix, we obtain (27) for . The projection operator is then defined as in (28). In an effort to avoid notational confusion in (28), the projection operator is written for only the th basis component. For the reader who is familiar with algebraic reconstruction techniques and numerically stable matrix inversion methods, we note the obvious similarity of this projection operator to Kaczmarz's iterative method for solving systems of linear equations. Any additional constraints coming from our prior information about the hyperspectral data (positivity, bounded energy, and limited support for quantized data, to name a few) can be used to further improve the results by defining new constraint sets and their corresponding projection operators accordingly. Given
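The projection just described can be sketched for a single constraint set. The code below is our hedged reading of the operator in a flattened-vector formulation of our own: each low-res pixel contributes one linear constraint with a ±delta tolerance, and the update is Kaczmarz-like, as the text notes.

```python
import numpy as np

def project_onto_constraint(x, h, y, delta):
    """Project estimate x onto the set { x : |y - h.x| <= delta }."""
    r = y - h @ x
    if r > delta:                      # synthetic pixel too small
        return x + (r - delta) / (h @ h) * h
    if r < -delta:                     # synthetic pixel too large
        return x + (r + delta) / (h @ h) * h
    return x                           # residual within the noise bound

def pocs_reconstruct(x0, H, ys, delta, n_iter=20):
    """Cycle the projections over all constraints (rows of H)."""
    x = x0.copy()
    for _ in range(n_iter):
        for h, y in zip(H, ys):
            x = project_onto_constraint(x, h, y, delta)
    return x
```

With delta = 0 this reduces exactly to Kaczmarz's method; for orthogonal constraint rows it recovers the solution in a single sweep.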

the projection operator above, estimates of the hi-res targets are computed iteratively from the low-res observations as follows: (29). As mentioned in Section II-B4, this method is capable of handling different blur filters for every basis function. This is possible by modifying the projection operators in such a way that every coefficient plane is projected by using the corresponding spatial blur filter. This does not increase the computational load, since the number of computations is not altered (as long as the spatial blur filters corresponding to different basis functions do not differ greatly in size).

Regardless of the method used to solve for the target image, a good share of the total effort goes into calculating the LTSV blur filter. From (17), we see that has a complex structure: it depends on the reconstruction and spatial blurring filters as well as the motion in the scene. Furthermore, is only valid where the motion is accurately modeled, making the precision of the motion vectors used in the computations extremely important [5]. In many cases, the computational load renders the calculation of the generalized blur filter for every pixel of a real observation impossible. Instead, is computed off-line and tabulated for various motion and blur values. A good understanding of the LTSV filtering operation given in (20) is helpful here. For this reason, in the next section, we will study two special cases, namely the case of a single observation and the case of multiple observations with translational motion. In these cases, is fairly easy to compute and has a nice interpretation, which sheds some light on the LTSV filtering performed by .

A. Single Data Cube and Multiple Cubes With Global Translational Motion

In the single-cube case, we have only one hyperspectral source cube, which is a set of monochromatic images of the same scene captured at different wavelengths. Our aim is to reconstruct the hi-res target image planes, from which we can obtain the spatially and spectrally continuous target image by using the spectral basis functions. Note that the observation index is dropped, since we are working with only one source cube. Following the same steps as in the previous sections, one can show that for the single-cube case (30) holds, where (31).

Fig. 4. Hyperspectral test images extracted from the Moffett Field and BearFruitGray data sets. The BearFruitGray images are RGB rendered from the available hyperspectral data; for the AVIRIS (Moffett Field) images, the hundredth band is shown. (a) BearFruitGray-1. (b) BearFruitGray-2. (c) AVIRIS-1. (d) AVIRIS-2.

Equation (30) provides a nice interpretation for . By comparing (30) with (21), we see that, in this case, the LTSV blur filter is simply the convolution of the blur filter with the reconstruction filter. This is relatively easy to compute, and we can interpret it as a finite-length weighted blur window. If we treat the right-hand side as a function of and , we can see that the coordinates specify the center of the flipped blur window, and and specify the size of the window. The greater the downsampling ratios ( and ), the larger the blur window and, hence, the worse the blur. When acts on a basis plane, the output is the weighted average of the pixels within its region of support. Furthermore, we can see that the th-band source pixel at location is a weighted linear combination of the blur window's outputs from the hi-res basis planes, where the weights are the s.

In the translational-motion case, we have multiple source cubes, but the motion in the observed scene is constrained to be global translational motion. This type of motion can be incorporated into the model by letting (32). Using these motion mappings and proceeding as in the single-cube case, we obtain (33). From (33), we see that the same interpretation given in the previous section applies to the translational-motion case with a slight change: here, the effective blur window is moving at the same velocity and in the same direction as the global translational motion.

Using these two cases, we can infer some results for the arbitrary-motion case. First, comparing (30) and (33) with (20), we see that in the arbitrary-motion case, 's region of support will

[Fig. 5. Results for the second test image extracted from 31-band BearFruitGray. The resolution enhancement ratio is set to 2 x 2 for (b) and (c) and to 4 x 4 for (d) and (e). (a) Original. (b) Bilinear. (c) Proposed. (d) Bilinear. (e) Proposed.]

not be a simple rectangular area. Instead, it will consist of arbitrary regions. Under certain conditions, namely when the local motion is sufficiently small and occlusion is negligible, we can approximate the filter as in (30) or (33), depending on the local motion around the source pixel. This works well when appropriate and saves a great deal of computation.

IV. RESULTS

A. Experimental Setup

The proposed technique is tested with two different hyperspectral image data sets. The first data set is the 31-band reflectance image of a natural scene (BearFruitGrayB) captured in a controlled-illumination laboratory environment; for detailed information on the data set, see [29]. The second data set is the 224-band image of an urban area (Moffett Field) acquired by the AVIRIS hyperspectral imaging system; for detailed information on the data set, see [30]. Since the image dimensions of both data sets are too large, specific regions are extracted from the original data, and the tests are conducted on these smaller images. Please note that for the first data set (BearFruitGrayB), the proposed method is tested only on the calibrated reflectance data. For the second data set (AVIRIS-Moffett Field), the proposed method is tested on both the calibrated reflectance and the radiance data. Fig. 4 shows the images used in the tests. The images from the BearFruitGray data set are RGB rendered for better visualization. Since the AVIRIS data includes bandwidths well beyond the visible range, it is meaningless to render RGB images; for this reason, a specific frequency band (the hundredth band, used for all the figures in this paper) is selected and presented for visual purposes.

We conducted two sets of experiments. As a simple sanity check, in the first set, the proposed method is applied to single hyperspectral images from the BearFruitGray data set to obtain higher resolution versions. For this purpose, we hypothesize that the original image is obtained from
an imaginary hi-res target image through a generalized blur filter that is unknown to us. We apply the proposed method to the original hyperspectral image with multiple candidate blur filters and choose the filter that gives the best results. The results are compared with the images obtained by bilinear interpolation. Since higher resolution versions of the hyperspectral images are not available, it is not possible to report numerical results; visual results are presented in Fig. 5 for qualitative comparison. The first set of images shows the results for which the resolution enhancement factors are 2 x 2 in the vertical and horizontal directions; in the second set, the enhancement factors are 4 x 4. Clearly, the visual quality of the outputs depends on the assumed blur filter. We note that any information about the spatial response of the imaging device used to acquire the hyperspectral data can be exploited to enhance the performance. In our experiments, we used a 5 x 5 Gaussian filter with unit variance for 2 x 2 enhancement and a 7 x 7 Gaussian filter with a variance of 3 for 4 x 4 enhancement.

[TABLE II. NUMERICAL RESULTS FOR THE BEARFRUITGRAY DATA SET]
[TABLE III. NUMERICAL RESULTS FOR THE AVIRIS REFLECTANCE DATA]
[TABLE IV. NUMERICAL RESULTS FOR THE AVIRIS RADIANCE DATA]

In the second set of experiments, the proposed method is tested under three different motion scenarios: a single cube (no motion), multiple cubes with global translational motion, and multiple cubes with global affine motion. Note that, for the type of images we are working on, these are relevant and realistic motion models. The experimental setup can be explained as follows. In the single-cube case, we begin by choosing a target window in the original hyperspectral image. This target window is then blurred by a Gaussian filter and filtered in the spectral domain to decrease the number of spectral bands (from 31 to 15 for the first set and from 224 to 112 for the second set). The spectrally filtered image is then downsampled in the spatial domain to obtain the source to be used for reconstructing the (both spatially and spectrally) hi-res target window. In the translational motion case, after the initial target window is chosen, we move in some predetermined direction and capture another window of the same size, continuing in this fashion until we have as many source cubes as we desire. We then proceed as in the single-cube case to spatially blur, filter in the spectral domain, and downsample the target windows. The resulting source cubes are then used to reconstruct the first captured hi-res target window by applying the proposed technique. The affine motion case is similar to the translational motion case, except that the motion model is a six-parameter
affine motion model that is capable of representing rotation, scaling, and translation.

[Fig. 6. Results for the first test image extracted from 31-band BearFruitGray (BearFruitGrayB-1). The presented multicube results are for the translational motion scenario. (a) Original. (b) Case 1: bilinear. (c) Case 1: single cube. (d) Case 1: multicube. (e) Case 2: bilinear. (f) Case 2: single cube. (g) Case 2: multicube. (h) Case 3: bilinear. (i) Case 3: single cube. (j) Case 3: multicube.]

We have three different configurations for each scenario.

Case 1: 3 x 3 Gaussian spatial blur filter with unit variance; Gaussian spectral blur filter with a variance of two; the downsampling ratio is two in both the vertical and horizontal directions; for the multicube case, eight source cubes are used (for both translational and affine motion).

Case 2: 5 x 5 Gaussian spatial blur filter with unit variance; Gaussian spectral blur filter with a variance of two; the downsampling ratio is three in both the vertical and horizontal directions; for the multicube case, eight source cubes are used (for both translational and affine motion).

Case 3: 5 x 5 Gaussian spatial blur filter with a variance of two; Gaussian spectral blur filter with a variance of two; the downsampling ratio is three in both the vertical and horizontal directions; for the multicube case, 15 source cubes are used (for both translational and affine motion).
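The source-cube generation described above (translational shift, spatial Gaussian blur, spectral Gaussian blur, then downsampling) can be sketched as follows. This is a minimal illustration, not the authors' code: the function and parameter names are ours, SciPy's Gaussian filters use sigma-truncated kernels rather than the fixed 3 x 3 or 5 x 5 windows of the Cases, and taking every other band only approximates the paper's 31-to-15 spectral reduction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d, shift

def make_source_cube(target, dy=0.0, dx=0.0, spatial_sigma=1.0,
                     spectral_sigma=np.sqrt(2.0), r=2):
    """Simulate one low-res source cube from a hi-res target window.

    target: (rows, cols, bands) array. A global translational shift
    (dy, dx) is applied first, then a spatial Gaussian blur, then a
    Gaussian blur along the spectral axis, and finally spatial
    downsampling by r and (rough) halving of the band count.
    """
    moved = shift(target, (dy, dx, 0.0), order=1, mode="nearest")
    blurred = gaussian_filter(moved, sigma=(spatial_sigma, spatial_sigma, 0.0))
    spectral = gaussian_filter1d(blurred, sigma=spectral_sigma, axis=2)
    return spectral[::r, ::r, ::2]  # spatial ratio r; keep every other band

# Case 1-style setup (illustrative): ratio two, eight translated source cubes.
rng = np.random.default_rng(0)
target = rng.random((64, 64, 31))
sources = [make_source_cube(target, dy=0.5 * k, dx=0.5 * k) for k in range(8)]
```

Reconstruction then runs the projection iterations of (29) against all eight cubes at once, which is where the fusion across observations happens.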

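The two fidelity measures reported in the experiments, PSNR and band-averaged PSNR (APSNR) as defined in (34) and (35), can be sketched as below. This assumes the spectral bands lie along the last axis and uses the squared band maxima as the peak signal powers; both choices are our reading of the definitions, since the data are not quantized to a fixed peak value.

```python
import numpy as np

def psnr(ref, est):
    """PSNR as in (34): peak signal power of the whole cube over the MSE."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def apsnr(ref, est):
    """Band-averaged PSNR as in (35): the numerator is the average of the
    peak signal powers of all spectral bands (taken along the last axis)."""
    mse = np.mean((ref - est) ** 2)
    peak_powers = ref.max(axis=(0, 1)) ** 2
    return 10.0 * np.log10(peak_powers.mean() / mse)
```

For quantized 8-bit data the two measures coincide with the usual PSNR; they differ only when the band peaks vary, which is exactly the situation APSNR is meant to handle.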
[Fig. 7. Results for the second reflectance test image extracted from 224-band Moffett Field (AVIRIS reflectance data-2). The presented multicube results are for the translational motion scenario. (a) Original. (b) Case 1: bilinear. (c) Case 1: single cube. (d) Case 1: multicube. (e) Case 2: bilinear. (f) Case 2: single cube. (g) Case 2: multicube. (h) Case 3: bilinear. (i) Case 3: single cube. (j) Case 3: multicube.]

The motion vectors are calculated by applying an optical flow method [31] to the properly upsampled images. Numerical results in terms of two different fidelity measures, PSNR and band-averaged PSNR, are presented in the following section. In this paper, PSNR is defined as

PSNR = 10 log10(P / MSE) dB, (34)

where P stands for the peak signal power, and band-averaged PSNR is defined as

APSNR = 10 log10((1/B) sum_b P_b / MSE) dB, (35)

where P_b stands for the peak signal power in the b-th spectral band and B is the number of bands. Since the data we work on are not quantized, the maximum signal value is not fixed. The band-averaged PSNR (APSNR), for which the numerator is calculated as the average of the peak signal powers of all bands, is selected to compensate for this fact. Under all scenarios, the projection iterations are terminated when either the decrease in mean square error falls below a predetermined threshold or five full iterations are completed.

B. Simulation Results

We provide the following simulation results to demonstrate the proposed method under the three scenarios mentioned above
together with the results of bilinearly interpolating the separate spectral bands. Since the relevant output format depends on the intended application, both numerical and visual results are presented. The numerical results given in Tables II-IV are PSNR and APSNR values in decibels, where PSNR and APSNR are defined as in (34) and (35), respectively. Visual results are shown in Figs. 6-9.

[Fig. 8. Results for the second radiance test image extracted from 224-band Moffett Field (AVIRIS radiance data-2). The presented multicube results are for the translational motion scenario. (a) Original. (b) Case 1: bilinear. (c) Case 1: single cube. (d) Case 1: multicube. (e) Case 2: bilinear. (f) Case 2: single cube. (g) Case 2: multicube. (h) Case 3: bilinear. (i) Case 3: single cube. (j) Case 3: multicube.]

For each of the AVIRIS images, we have also compared the proposed method with separate-band super resolution under two different noise scenarios, to differentiate between the improvement coming from projecting the additive noise onto a lower dimensional space and the improvement due to the spectral deblurring. Table V, which reports numerical results in terms of APSNR values, summarizes the comparison for the translational motion case. In the noiseless case, the additive noise component is set to zero; in the noisy case, all the input images are corrupted with white Gaussian noise with a standard deviation of 50.

From Tables II-V, we can see that the proposed method, even with a single source cube, performs better than bilinear interpolation. Using multiple cubes further improves the results, pointing out the advantage of fusing the information present across overlapping sources. Visual results presented in Figs. 6-9 also confirm the improvement seen in PSNR and
APSNR values.

[Fig. 9. Results for the second radiance test image extracted from 224-band Moffett Field (AVIRIS radiance data-2). The presented multicube results are for the translational motion scenario under Gaussian noise with a standard deviation of 50. (a) Case 1: bilinear. (b) Case 1: proposed. (c) Case 1: separate. (d) Case 2: bilinear. (e) Case 2: proposed. (f) Case 2: separate. (g) Case 3: bilinear. (h) Case 3: proposed. (i) Case 3: separate.]

The proposed model is capable of utilizing multiple bands (by projecting onto every observed spectral band separately), since the responses of the spectral blur filters are known. By exploiting this knowledge in the projection operators, one can achieve better results than by applying super resolution to all bands separately (Table V). The obvious reason is that applying super resolution to the blurred spectral bands separately causes even more mixing between the bands and the additive noise components present in each band. From Table V, we can also see that, for the AVIRIS data, the improvement coming from the spectral deblurring is not as large as the improvement due to noise reduction. This is to be expected given the characteristics of the spectral dimension of the data: AVIRIS is a highly developed instrument capable of sampling the spectrum at more than 200 frequencies. As a result of this high number of spectral samples, the observed spectrum is usually quite smooth, and the improvement over linear interpolation of the sampled spectra (for the blurred and downsampled case) or doing nothing at all (for the blur-only case) is usually around 0.3 dB. This improvement gets larger as the spectral blurring becomes heavier. The improvement due to noise reduction is usually larger than the improvement due to spectral deblurring and grows as the noise power is increased, within the limitations of POCS-based super-resolution methods.

V. CONCLUSION

In this paper, the problem of spatial and spectral reconstruction in hyperspectral images has been addressed. We have proposed a linear deterministic model of the hyperspectral
[TABLE V. COMPARISON BETWEEN THE PROPOSED METHOD AND APPLYING SUPER RESOLUTION TO EVERY BAND SEPARATELY, UNDER NO ADDITIVE NOISE AND UNDER GAUSSIAN ADDITIVE NOISE WITH A STANDARD DEVIATION OF 50. THE REPORTED RESULTS ARE APSNR VALUES, WHERE APSNR IS DEFINED AS IN (35).]

image acquisition process and supplied a mathematical formulation describing the process as a system of linear equations. We have formulated the reconstruction problem (within the limitations of this model) as finding the target hyperspectral image that satisfies this set of linear equations as closely as possible for the given observation(s) of the desired target image. We have proposed a set-theoretic solution method and presented numerical and visual results validating the proposed reconstruction technique. The reconstruction technique presented in this paper can be utilized as a post-processing step in hyperspectral imaging applications, such as anomaly detection, for increased detection accuracy.

ACKNOWLEDGMENT

The authors would like to thank the Office of Naval Research for their support of this research.

REFERENCES

[1] T. Huang and R. Tsai, "Multiframe image restoration and registration," in Advances in Computer Vision and Image Processing, T. S. Huang, Ed. Greenwich, CT: JAI, 1984, vol. 1.
[2] P. E. Eren, M. I. Sezan, and A. M. Tekalp, "Robust, object-based high-resolution image reconstruction from low-resolution," IEEE Trans. Image Process., vol. 6, no. 12, Dec. 1997.
[3] R. L. Stevenson, B. E. Schmitz, and E. J. Delp, "Discontinuity preserving regularization of inverse visual problems," IEEE Trans. Syst., Man, Cybern., vol. 24, no. 3, Jun. 1994.
[4] M. Elad and A. Feuer, "Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images," IEEE Trans. Image Process., vol. 6, no. 12, Dec. 1997.
[5] Y. Altunbasak, A. Patti, and R. Mersereau, "Super-resolution still and video reconstruction from MPEG-coded video," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 4, Apr. 2002.
[6] A. Patti, M. Sezan, and A. M. Tekalp, "Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time," IEEE Trans. Image Process., vol. 6, no. 8, Aug. 1997.
[7] B. R. Hunt, "Imagery super-resolution: Emerging prospects," in Proc. SPIE Applications of Digital Image Processing XIV, vol. 1567, San Diego, CA, Jul. 1991.
[8] B. Tom and A. Katsaggelos, "Reconstruction of a high resolution image by simultaneous registration, restoration and interpolation of low resolution images," in Proc. Int. Conf. Image Processing, vol. 2, 1995.
[9] R. R. Schultz and R. L. Stevenson, "Improved definition video frame enhancement," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, vol. 4, May 1995.
[10] B. Zhukov, D. Oertel, F. Lanzl, and G. Reinhackel, "Unmixing-based multisensor multiresolution image fusion," IEEE Trans. Geosci. Remote Sens., vol. 37, no. 10, Oct. 1999.
[11] B. Zhukov, D. Oertel, and F. Lanzl, "Unmixing-based multisensor multiresolution image fusion," in Proc. Int. Geoscience and Remote Sensing Symp., vol. 1, 1995.
[12] M. E. Winter, "Resolution enhancement of hyperspectral data," in Proc. Aerospace Conf., vol. 3, 2002.
[13] A. Zomet and S. Peleg, "Multi-sensor super-resolution," in Proc. 6th IEEE Workshop on Applications of Computer Vision, Dec. 2002.
[14] T. Gotoh and M. Okutomi, "Direct super-resolution and registration using raw CFA images," in Proc. IEEE Computer Society Conf. Computer Vision and Pattern Recognition, vol. 2, 2004.
[15] G. Shaw and D. Manolakis, "Signal processing for hyperspectral image exploitation," IEEE Signal Process. Mag., vol. 19, no. 1, pp. 12-16, Jan. 2002.
[16] D. Landgrebe, "Hyperspectral image data analysis," IEEE Signal Process. Mag., vol. 19, no. 1, pp. 17-28, Jan. 2002.
[17] S. M. Schweizer and J. M. F. Moura, "Modeling and detection in hyperspectral imagery," in Proc. Int. Conf. Acoustics, Speech, and Signal Processing, vol. 4, 1998.
[18] G. Rellier, X. Descombes, J. Zerubia, and F. Falzon, "A Gauss-Markov model for hyperspectral texture analysis of urban areas," in Proc. 16th Int. Conf. Pattern Recognition, vol. 1, 2002.
[19] J. P. Kerekes and J. E. Baum, "Spectral imaging system analytical model for subpixel object detection," IEEE Trans. Geosci. Remote Sens., vol. 40, no. 5, Sep. 2002.
[20] J. R. Schott, "Combining image derived spectra and physics based models for hyperspectral image exploitation," in Proc. Applied Imagery Pattern Recognition Workshop, 2000.
[21] D. Slater and G. Healey, "Physics-based model acquisition and identification in airborne spectral images," in Proc. 8th IEEE Int. Conf. Computer Vision, vol. 2, 2001.
[22] P. L. Vora, J. E. Farrell, J. D. Tietz, and D. H. Brainard, "Image capture: Simulation of sensor responses from hyperspectral images," IEEE Trans. Image Process., vol. 10, no. 12, Dec. 2001.
[23] D. W. J. Stein, "Modeling variability in hyperspectral imagery using a stochastic compositional approach," in Proc. Int. Geoscience and Remote Sensing Symp., vol. 5, 2001.
[24] L. Maloney, "Evaluation of linear models of surface spectral reflectance with small numbers of parameters," J. Opt. Soc. Amer. A, vol. 3, 1986.
[25] N. Keshava and J. Mustard, "Spectral unmixing," IEEE Signal Process. Mag., vol. 19, no. 1, pp. 44-57, Jan. 2002.
[26] B. K. Gunturk, A. U. Batur, Y. Altunbasak, M. H. Hayes, III, and R. M. Mersereau, "Eigenface-domain super-resolution for face recognition," IEEE Trans. Image Process., vol. 12, no. 6, Jun. 2003.
[27] P. Combettes, "Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections," IEEE Trans. Image Process., vol. 6, no. 4, Apr. 1997.
[28] P. Combettes, "Convex set theoretic image recovery with inexact projection algorithms," in Proc. IEEE Int. Conf. Image Processing, vol. 1, 2001.
[29] D. H. Brainard, Hyperspectral Image Data. [Online]. Available:
[30] AVIRIS Free Data, Jet Propulsion Lab., California Inst. Technol., Pasadena. [Online]. Available:
[31] Y. Altunbasak, R. M. Mersereau, and A. J. Patti, "A fast parametric motion estimation algorithm with illumination and lens distortion correction," IEEE Trans. Image Process., vol. 12, no. 4, Apr. 2003.

Toygar Akgun (S'04) received the B.S. degree in electrical engineering from Bilkent University, Ankara, Turkey, in 2001, and the M.S. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, in 2004. He is currently pursuing the Ph.D. degree at the Center for Signal and Image Processing, Georgia Tech. His research interests include image processing, statistical signal processing, and modeling, detection, and estimation theory.

Yucel Altunbasak (S'94-M'97-SM'01) received the B.S. degree from Bilkent University, Ankara, Turkey, in 1992 with highest honors. He received the M.S. and Ph.D. degrees from the University of Rochester, Rochester, NY, in 1993 and 1996, respectively. He joined Hewlett-Packard Research Laboratories (HPL), Palo Alto, CA, in July 1996. His position at HPL provided him with the opportunity to work on a diverse set of research topics, such as video processing, coding and communications, multimedia streaming, and networking. He also taught digital video and signal processing courses at Stanford University, Stanford, CA, and San Jose State University, San Jose, CA, as a Consulting Assistant Professor. He joined the School of Electrical and Computer Engineering, Georgia Institute of Technology, in 1999 as an Assistant Professor. His research efforts have resulted in over 75 publications and 12 patents/patent applications. He is an area/associate editor for Signal Processing: Image Communication and the Journal of Circuits, Systems and Signal Processing. He also serves as a session chair at technical conferences, as a panel reviewer for government funding agencies, and as a technical reviewer for various journals and conferences in the field of signal processing and communications. He is currently working on industrial- and government-sponsored projects related to video and multimedia signal processing, inverse problems in imaging, and network distribution of compressed multimedia content. Dr. Altunbasak is an associate editor for the IEEE TRANSACTIONS ON IMAGE PROCESSING. He is a member of the IEEE Signal Processing Society's IMDSP Technical Committee. He served as a Co-Chair for the Advanced Signal Processing for Communications Symposia at ICC 2003. He received the National Science Foundation (NSF) CAREER Award in 2002.

Russell M. Mersereau (S'69-M'73-SM'78-F'83) received the B.S. and M.S. degrees and the D.Sc. degree from the Massachusetts Institute of Technology, Cambridge, in 1969 and 1973, respectively. He joined the School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, in 1975 and has held the rank of Regents Professor since 1987. He is the coauthor of the text Multidimensional Digital Signal Processing (Englewood Cliffs, NJ: Prentice-Hall, 1984). Dr. Mersereau is the former Vice President for Awards and Membership of the Signal Processing Society. He served on the Editorial Board of the PROCEEDINGS OF THE IEEE and as an Associate Editor for signal processing for the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING and the IEEE SIGNAL PROCESSING LETTERS. He was the Vice President for Awards and Membership of the Signal Processing Society and a member of its Executive Board from 1999 to 2001. He is the co-recipient of the 1976 Browder J. Thompson Memorial Prize of the IEEE for the best technical paper by an author under the age of 30, a recipient of the 1977 Research Unit Award of the Southeastern Section of the ASEE, and three teaching awards. He was awarded the 1990 Society Award of the Signal Processing Society and an IEEE Millennium Medal in 2000.


REGISTRATION OF AIRBORNE LASER DATA TO SURFACES GENERATED BY PHOTOGRAMMETRIC MEANS. Y. Postolov, A. Krupnik, K. McIntosh REGISTRATION OF AIRBORNE LASER DATA TO SURFACES GENERATED BY PHOTOGRAMMETRIC MEANS Y. Postolov, A. Krupnik, K. McIntosh Department of Civil Engineering, Technion Israel Institute of Technology, Haifa,

More information

Remote Sensing Image Analysis via a Texture Classification Neural Network

Remote Sensing Image Analysis via a Texture Classification Neural Network Remote Sensing Image Analysis via a Texture Classification Neural Network Hayit K. Greenspan and Rodney Goodman Department of Electrical Engineering California Institute of Technology, 116-81 Pasadena,

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

Adaptive Doppler centroid estimation algorithm of airborne SAR

Adaptive Doppler centroid estimation algorithm of airborne SAR Adaptive Doppler centroid estimation algorithm of airborne SAR Jian Yang 1,2a), Chang Liu 1, and Yanfei Wang 1 1 Institute of Electronics, Chinese Academy of Sciences 19 North Sihuan Road, Haidian, Beijing

More information

MULTICHANNEL image processing is studied in this

MULTICHANNEL image processing is studied in this 186 IEEE SIGNAL PROCESSING LETTERS, VOL. 6, NO. 7, JULY 1999 Vector Median-Rational Hybrid Filters for Multichannel Image Processing Lazhar Khriji and Moncef Gabbouj, Senior Member, IEEE Abstract In this

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational

More information

WATERMARKING FOR LIGHT FIELD RENDERING 1

WATERMARKING FOR LIGHT FIELD RENDERING 1 ATERMARKING FOR LIGHT FIELD RENDERING 1 Alper Koz, Cevahir Çığla and A. Aydın Alatan Department of Electrical and Electronics Engineering, METU Balgat, 06531, Ankara, TURKEY. e-mail: koz@metu.edu.tr, cevahir@eee.metu.edu.tr,

More information

Multi-frame blind deconvolution: Compact and multi-channel versions. Douglas A. Hope and Stuart M. Jefferies

Multi-frame blind deconvolution: Compact and multi-channel versions. Douglas A. Hope and Stuart M. Jefferies Multi-frame blind deconvolution: Compact and multi-channel versions Douglas A. Hope and Stuart M. Jefferies Institute for Astronomy, University of Hawaii, 34 Ohia Ku Street, Pualani, HI 96768, USA ABSTRACT

More information

Super Resolution Using Graph-cut

Super Resolution Using Graph-cut Super Resolution Using Graph-cut Uma Mudenagudi, Ram Singla, Prem Kalra, and Subhashis Banerjee Department of Computer Science and Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi,

More information

Copyright 2005 Society of Photo-Optical Instrumentation Engineers.

Copyright 2005 Society of Photo-Optical Instrumentation Engineers. Copyright 2005 Society of Photo-Optical Instrumentation Engineers. This paper was published in the Proceedings, SPIE Symposium on Defense & Security, 28 March 1 April, 2005, Orlando, FL, Conference 5806

More information

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference

Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400

More information

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION

DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION DEEP LEARNING TO DIVERSIFY BELIEF NETWORKS FOR REMOTE SENSING IMAGE CLASSIFICATION S.Dhanalakshmi #1 #PG Scholar, Department of Computer Science, Dr.Sivanthi Aditanar college of Engineering, Tiruchendur

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Predictive Interpolation for Registration

Predictive Interpolation for Registration Predictive Interpolation for Registration D.G. Bailey Institute of Information Sciences and Technology, Massey University, Private bag 11222, Palmerston North D.G.Bailey@massey.ac.nz Abstract Predictive

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

signal-to-noise ratio (PSNR), 2

signal-to-noise ratio (PSNR), 2 u m " The Integration in Optics, Mechanics, and Electronics of Digital Versatile Disc Systems (1/3) ---(IV) Digital Video and Audio Signal Processing ƒf NSC87-2218-E-009-036 86 8 1 --- 87 7 31 p m o This

More information

Image Quality Assessment Techniques: An Overview

Image Quality Assessment Techniques: An Overview Image Quality Assessment Techniques: An Overview Shruti Sonawane A. M. Deshpande Department of E&TC Department of E&TC TSSM s BSCOER, Pune, TSSM s BSCOER, Pune, Pune University, Maharashtra, India Pune

More information

Digital Image Processing

Digital Image Processing Digital Image Processing Third Edition Rafael C. Gonzalez University of Tennessee Richard E. Woods MedData Interactive PEARSON Prentice Hall Pearson Education International Contents Preface xv Acknowledgments

More information

Optical Flow-Based Person Tracking by Multiple Cameras

Optical Flow-Based Person Tracking by Multiple Cameras Proc. IEEE Int. Conf. on Multisensor Fusion and Integration in Intelligent Systems, Baden-Baden, Germany, Aug. 2001. Optical Flow-Based Person Tracking by Multiple Cameras Hideki Tsutsui, Jun Miura, and

More information

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N.

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. Dartmouth, MA USA Abstract: The significant progress in ultrasonic NDE systems has now

More information

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation

Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical

More information

COMPUTER AND ROBOT VISION

COMPUTER AND ROBOT VISION VOLUME COMPUTER AND ROBOT VISION Robert M. Haralick University of Washington Linda G. Shapiro University of Washington T V ADDISON-WESLEY PUBLISHING COMPANY Reading, Massachusetts Menlo Park, California

More information

Sections 3-6 have been substantially modified to make the paper more comprehensible. Several figures have been re-plotted and figure captions changed.

Sections 3-6 have been substantially modified to make the paper more comprehensible. Several figures have been re-plotted and figure captions changed. Response to First Referee s Comments General Comments Sections 3-6 have been substantially modified to make the paper more comprehensible. Several figures have been re-plotted and figure captions changed.

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging

Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging Florin C. Ghesu 1, Thomas Köhler 1,2, Sven Haase 1, Joachim Hornegger 1,2 04.09.2014 1 Pattern

More information

FOUR REDUCED-REFERENCE METRICS FOR MEASURING HYPERSPECTRAL IMAGES AFTER SPATIAL RESOLUTION ENHANCEMENT

FOUR REDUCED-REFERENCE METRICS FOR MEASURING HYPERSPECTRAL IMAGES AFTER SPATIAL RESOLUTION ENHANCEMENT In: Wagner W., Székely, B. (eds.): ISPRS TC VII Symposium 00 Years ISPRS, Vienna, Austria, July 5 7, 00, IAPRS, Vol. XXXVIII, Part 7A FOUR REDUCED-REFERENCE METRICS FOR MEASURING HYPERSPECTRAL IMAGES AFTER

More information

Motion Estimation for Video Coding Standards

Motion Estimation for Video Coding Standards Motion Estimation for Video Coding Standards Prof. Ja-Ling Wu Department of Computer Science and Information Engineering National Taiwan University Introduction of Motion Estimation The goal of video compression

More information

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin

Video Alignment. Literature Survey. Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Literature Survey Spring 2005 Prof. Brian Evans Multidimensional Digital Signal Processing Project The University of Texas at Austin Omer Shakil Abstract This literature survey compares various methods

More information

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication

DD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

University of Technology Building & Construction Department / Remote Sensing & GIS lecture

University of Technology Building & Construction Department / Remote Sensing & GIS lecture 5. Corrections 5.1 Introduction 5.2 Radiometric Correction 5.3 Geometric corrections 5.3.1 Systematic distortions 5.3.2 Nonsystematic distortions 5.4 Image Rectification 5.5 Ground Control Points (GCPs)

More information

Hyperspectral Image Enhancement Based on Sensor Simulation and Vector Decomposition

Hyperspectral Image Enhancement Based on Sensor Simulation and Vector Decomposition Hyperspectral Image Enhancement Based on Sensor Simulation and Vector Decomposition Ankush Khandelwal Lab for Spatial Informatics International Institute of Information Technology Hyderabad, India ankush.khandelwal@research.iiit.ac.in

More information

High Information Rate and Efficient Color Barcode Decoding

High Information Rate and Efficient Color Barcode Decoding High Information Rate and Efficient Color Barcode Decoding Homayoun Bagherinia and Roberto Manduchi University of California, Santa Cruz, Santa Cruz, CA 95064, USA {hbagheri,manduchi}@soe.ucsc.edu http://www.ucsc.edu

More information

Spatially Adaptive Block-Based Super-Resolution Heng Su, Liang Tang, Ying Wu, Senior Member, IEEE, Daniel Tretter, and Jie Zhou, Senior Member, IEEE

Spatially Adaptive Block-Based Super-Resolution Heng Su, Liang Tang, Ying Wu, Senior Member, IEEE, Daniel Tretter, and Jie Zhou, Senior Member, IEEE IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 21, NO. 3, MARCH 2012 1031 Spatially Adaptive Block-Based Super-Resolution Heng Su, Liang Tang, Ying Wu, Senior Member, IEEE, Daniel Tretter, and Jie Zhou, Senior

More information

Performance of DoFP Polarimeter Calibration

Performance of DoFP Polarimeter Calibration Page 1 of 13 Performance of DoFP Polarimeter Calibration Samual B. Powell, s.powell@wustl.edu (A paper written under the guidance of Prof. Raj Jain) Download Abstract Division-of-focal plane (DoFP) imaging

More information

Video Compression Method for On-Board Systems of Construction Robots

Video Compression Method for On-Board Systems of Construction Robots Video Compression Method for On-Board Systems of Construction Robots Andrei Petukhov, Michael Rachkov Moscow State Industrial University Department of Automatics, Informatics and Control Systems ul. Avtozavodskaya,

More information

Depth Estimation with a Plenoptic Camera

Depth Estimation with a Plenoptic Camera Depth Estimation with a Plenoptic Camera Steven P. Carpenter 1 Auburn University, Auburn, AL, 36849 The plenoptic camera is a tool capable of recording significantly more data concerning a particular image

More information

RESOLUTION enhancement is achieved by combining two

RESOLUTION enhancement is achieved by combining two IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 3, NO. 1, JANUARY 2006 135 Range Resolution Improvement of Airborne SAR Images Stéphane Guillaso, Member, IEEE, Andreas Reigber, Member, IEEE, Laurent Ferro-Famil,

More information

Region Based Image Fusion Using SVM

Region Based Image Fusion Using SVM Region Based Image Fusion Using SVM Yang Liu, Jian Cheng, Hanqing Lu National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences ABSTRACT This paper presents a novel

More information

Computer Graphics. Sampling Theory & Anti-Aliasing. Philipp Slusallek

Computer Graphics. Sampling Theory & Anti-Aliasing. Philipp Slusallek Computer Graphics Sampling Theory & Anti-Aliasing Philipp Slusallek Dirac Comb (1) Constant & δ-function flash Comb/Shah function 2 Dirac Comb (2) Constant & δ-function Duality f(x) = K F(ω) = K (ω) And

More information

Fast Anomaly Detection Algorithms For Hyperspectral Images

Fast Anomaly Detection Algorithms For Hyperspectral Images Vol. Issue 9, September - 05 Fast Anomaly Detection Algorithms For Hyperspectral Images J. Zhou Google, Inc. ountain View, California, USA C. Kwan Signal Processing, Inc. Rockville, aryland, USA chiman.kwan@signalpro.net

More information

Context based optimal shape coding

Context based optimal shape coding IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,

More information

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction Compression of RADARSAT Data with Block Adaptive Wavelets Ian Cumming and Jing Wang Department of Electrical and Computer Engineering The University of British Columbia 2356 Main Mall, Vancouver, BC, Canada

More information

Image Inpainting Using Sparsity of the Transform Domain

Image Inpainting Using Sparsity of the Transform Domain Image Inpainting Using Sparsity of the Transform Domain H. Hosseini*, N.B. Marvasti, Student Member, IEEE, F. Marvasti, Senior Member, IEEE Advanced Communication Research Institute (ACRI) Department of

More information

x' = c 1 x + c 2 y + c 3 xy + c 4 y' = c 5 x + c 6 y + c 7 xy + c 8

x' = c 1 x + c 2 y + c 3 xy + c 4 y' = c 5 x + c 6 y + c 7 xy + c 8 1. Explain about gray level interpolation. The distortion correction equations yield non integer values for x' and y'. Because the distorted image g is digital, its pixel values are defined only at integer

More information

Mesh Based Interpolative Coding (MBIC)

Mesh Based Interpolative Coding (MBIC) Mesh Based Interpolative Coding (MBIC) Eckhart Baum, Joachim Speidel Institut für Nachrichtenübertragung, University of Stuttgart An alternative method to H.6 encoding of moving images at bit rates below

More information

Spatial, Transform and Fractional Domain Digital Image Watermarking Techniques

Spatial, Transform and Fractional Domain Digital Image Watermarking Techniques Spatial, Transform and Fractional Domain Digital Image Watermarking Techniques Dr.Harpal Singh Professor, Chandigarh Engineering College, Landran, Mohali, Punjab, Pin code 140307, India Puneet Mehta Faculty,

More information

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling

Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Locally Weighted Least Squares Regression for Image Denoising, Reconstruction and Up-sampling Moritz Baecher May 15, 29 1 Introduction Edge-preserving smoothing and super-resolution are classic and important

More information

Copyright 2005 Center for Imaging Science Rochester Institute of Technology Rochester, NY

Copyright 2005 Center for Imaging Science Rochester Institute of Technology Rochester, NY Development of Algorithm for Fusion of Hyperspectral and Multispectral Imagery with the Objective of Improving Spatial Resolution While Retaining Spectral Data Thesis Christopher J. Bayer Dr. Carl Salvaggio

More information

This paper describes an analytical approach to the parametric analysis of target/decoy

This paper describes an analytical approach to the parametric analysis of target/decoy Parametric analysis of target/decoy performance1 John P. Kerekes Lincoln Laboratory, Massachusetts Institute of Technology 244 Wood Street Lexington, Massachusetts 02173 ABSTRACT As infrared sensing technology

More information

IMAGE RECONSTRUCTION WITH SUPER RESOLUTION

IMAGE RECONSTRUCTION WITH SUPER RESOLUTION INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN 2320-7345 IMAGE RECONSTRUCTION WITH SUPER RESOLUTION B.Vijitha 1, K.SrilathaReddy 2 1 Asst. Professor, Department of Computer

More information

SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH

SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH Ignazio Gallo, Elisabetta Binaghi and Mario Raspanti Universitá degli Studi dell Insubria Varese, Italy email: ignazio.gallo@uninsubria.it ABSTRACT

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET)

INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET) INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET) ISSN 0976 6464(Print) ISSN 0976 6472(Online) Volume 3, Issue 3, October- December (2012), pp. 153-161 IAEME: www.iaeme.com/ijecet.asp

More information

Hydrocarbon Index an algorithm for hyperspectral detection of hydrocarbons

Hydrocarbon Index an algorithm for hyperspectral detection of hydrocarbons INT. J. REMOTE SENSING, 20 JUNE, 2004, VOL. 25, NO. 12, 2467 2473 Hydrocarbon Index an algorithm for hyperspectral detection of hydrocarbons F. KÜHN*, K. OPPERMANN and B. HÖRIG Federal Institute for Geosciences

More information

Express Letters. A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation. Jianhua Lu and Ming L. Liou

Express Letters. A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation. Jianhua Lu and Ming L. Liou IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 7, NO. 2, APRIL 1997 429 Express Letters A Simple and Efficient Search Algorithm for Block-Matching Motion Estimation Jianhua Lu and

More information

Superresolution Using Preconditioned Conjugate Gradient Method

Superresolution Using Preconditioned Conjugate Gradient Method Superresolution Using Preconditioned Conjugate Gradient Method Changjiang Yang, Ramani Duraiswami and Larry Davis Computer Vision Laboratory University of Maryland, College Park, MD 20742 E-mail: {yangcj,ramani,lsd}@umiacs.umd.edu

More information

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera

Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information

More information

Multi-frame super-resolution with no explicit motion estimation

Multi-frame super-resolution with no explicit motion estimation Multi-frame super-resolution with no explicit motion estimation Mehran Ebrahimi and Edward R. Vrscay Department of Applied Mathematics Faculty of Mathematics, University of Waterloo Waterloo, Ontario,

More information

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction

A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction A New Image Based Ligthing Method: Practical Shadow-Based Light Reconstruction Jaemin Lee and Ergun Akleman Visualization Sciences Program Texas A&M University Abstract In this paper we present a practical

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

TERM PAPER ON The Compressive Sensing Based on Biorthogonal Wavelet Basis

TERM PAPER ON The Compressive Sensing Based on Biorthogonal Wavelet Basis TERM PAPER ON The Compressive Sensing Based on Biorthogonal Wavelet Basis Submitted By: Amrita Mishra 11104163 Manoj C 11104059 Under the Guidance of Dr. Sumana Gupta Professor Department of Electrical

More information

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering

Digital Image Processing. Prof. P. K. Biswas. Department of Electronic & Electrical Communication Engineering Digital Image Processing Prof. P. K. Biswas Department of Electronic & Electrical Communication Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Image Enhancement Frequency Domain Processing

More information

An LED based spectrophotometric instrument

An LED based spectrophotometric instrument An LED based spectrophotometric instrument Michael J. Vrhel Color Savvy Systems Limited, 35 South Main Street, Springboro, OH ABSTRACT The performance of an LED-based, dual-beam, spectrophotometer is discussed.

More information

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al.

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al. Atmos. Meas. Tech. Discuss., www.atmos-meas-tech-discuss.net/5/c741/2012/ Author(s) 2012. This work is distributed under the Creative Commons Attribute 3.0 License. Atmospheric Measurement Techniques Discussions

More information

Radar Target Identification Using Spatial Matched Filters. L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory

Radar Target Identification Using Spatial Matched Filters. L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory Radar Target Identification Using Spatial Matched Filters L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory Abstract The application of spatial matched filter classifiers to the synthetic

More information