IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 11, NOVEMBER 2010

Multiframe Super-Resolution Reconstruction of Small Moving Objects

Adam W. M. van Eekeren, Member, IEEE, Klamer Schutte, and Lucas J. van Vliet, Member, IEEE

Abstract: Multiframe super-resolution (SR) reconstruction of small moving objects against a cluttered background is difficult for two reasons: a small object consists completely of mixed boundary pixels, and the background contribution changes from frame to frame. We present a solution to this problem that greatly improves recognition of small moving objects under the assumption of a simple linear motion model in the real world. The presented method not only explicitly models the image acquisition system, but also the space-time variant fore- and background contributions to the mixed pixels. The latter is due to a changing local background as a result of the apparent motion. The method simultaneously estimates a subpixel precise polygon boundary as well as a high-resolution (HR) intensity description of a small moving object subject to a modified total variation constraint. Experiments on simulated and real-world data show excellent performance of the proposed multiframe SR reconstruction method.

Index Terms: Boundary description, moving object, partial area effect, super-resolution (SR) reconstruction.

I. INTRODUCTION

IN SURVEILLANCE applications, the most interesting events are dynamic events consisting of changes occurring in the scene, such as moving persons or moving objects. In this paper, we focus on multiframe super-resolution (SR) reconstruction of small moving objects in under-sampled image sequences. Small objects are objects that are completely comprised of boundary pixels. Each boundary pixel is a mixed pixel, and its value has contributions from both the moving foreground object and the locally varying background.
Hence, not only do the fractions change from frame to frame, but the local background values also change due to the apparent motion. Especially for small moving objects, an improvement in resolution is useful to permit classification or identification.

Manuscript received November 25, 2008; revised April 24, 2010; accepted April 24, 2010. Date of publication August 19, 2010; date of current version October 15, 2010. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Michael Elad.

A. W. M. van Eekeren is with the Electro Optics Group at TNO Defence, Security and Safety, The Hague, The Netherlands. He is also with the Quantitative Imaging Group, Delft University of Technology, Delft, The Netherlands (e-mail: adam.vaneekeren@tno.nl).

K. Schutte is with the Electro Optics Group at TNO Defence, Security and Safety, The Hague, The Netherlands (e-mail: klamer.schutte@tno.nl).

L. J. van Vliet is with the Quantitative Imaging Group at Delft University of Technology, Delft, The Netherlands (e-mail: L.J.vanVliet@TUDelft.nl).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Multiframe SR reconstruction^1 improves the spatial resolution by exchanging temporal information of a sequence of subpixel displaced low-resolution (LR) images for spatial information. Although the concept of SR reconstruction has existed for more than 20 years [1], relatively little attention has been given to SR reconstruction of moving objects. In [2]-[8], this subject was addressed for various dedicated tasks. Although [2] and [5] apply different SR reconstruction methods, i.e., iterative back-projection [9] and projection onto convex sets [10], respectively, both use a validity map in their reconstruction process. This makes these methods robust to motion outliers. Both methods perform well on large moving objects that obey a simple translational motion model.
For large objects, only a small fraction of the pixels are boundary pixels. Hardie et al. [7] use optical flow to segment a moving object and subsequently apply SR reconstruction to it. In their work, the background is static and SR reconstruction is only applied to the masked area inside a large moving object. In [6], Kalman filters are used to reduce edge artifacts at the boundary between fore- and background. However, the fore- and background are not explicitly modeled in this method. In previous work [3], we presented a system that applies SR reconstruction after a segmentation step simultaneously to a large moving object and the background using Hardie's method [7]. Again, no SR reconstruction is applied to the boundary of mixed pixels separating the moving object from a cluttered background. In [4], we presented the first attempt at SR reconstruction of small moving objects with simulated data. At that time, no experiments were performed on real-world data, which lifted the need for a very precise estimate of the object's trajectory. In [8], SR reconstruction is performed on moving vehicles of approximately 10 by 20 pixels. For object registration, a trajectory model is used in combination with a consistency measure of the local background and vehicle. However, in the SR reconstruction approach no attention is given to mixed pixels. An interesting subset of moving objects consists of faces. Efforts in that area using SR reconstruction include [11] and [12], in which the modeling of complex motion is a key element. However, the faces in the LR input images used there are far larger than the small objects that we focus on in this paper. SR reconstruction of moving objects is also applied in astronomy. An overview can be found in [13], where it is explained that SR reconstruction is only possible under the condition that the solution is very sparse, i.e., very few samples have a value larger than zero.
In contrast, our SR reconstruction method is designed to handle nonzero cluttered backgrounds.

^1 In the remainder of this paper, SR reconstruction refers to multiframe SR reconstruction.

Fig. 1. Flow diagram illustrating the construction of a 2-D HR image z representing the camera's field-of-view and the degradation thereof into a LR frame ŷ via a camera model.

For small moving objects that consist completely of mixed pixels against a cluttered background, the state-of-the-art pixel-based SR reconstruction methods mentioned previously will fail. Pixel-based SR reconstruction methods make an error at the object boundary, because they cannot disentangle the contributions from the space-time variant background and foreground information within a mixed pixel. To tackle the aforementioned problem, we incorporate a subpixel precise object boundary model with a high-resolution (HR) pixel grid. We simultaneously estimate this polygonal object boundary as well as a HR intensity description of a small moving object subject to a modified total variation constraint. Assuming rigid objects that move with constant speed through the real world, object registration is achieved by fitting a trajectory through the object's center-of-mass in each frame. The approach assumes that a HR background image is estimated first. Robust SR reconstruction methods can accomplish this: they treat the intensity fluctuations caused by the small moving object after global registration as outliers. Especially for small moving objects, our approach significantly improves object recognition. Note that the use of the proposed SR reconstruction method is not limited to small moving objects. It can also be used to improve the resolution of boundary regions of larger moving objects, as long as the size of the object does not prohibit proper SR reconstruction of the background.

The paper is organized as follows. First, in Section II we present the forward model relating a simulated HR scene to the LR image data observed by an electro-optical sensor system.
In Section III, the three steps of the proposed SR reconstruction method for small moving objects are presented. Section IV presents experiments on simulated data, followed by a real-world experiment in Section V. Finally, in Section VI the main conclusions are presented.

II. FORWARD MODEL: REAL-WORLD DATA DESCRIPTION

This section describes the two steps of our forward model, which constructs a LR camera frame from HR representations of the fore- and background in combination with a subpixel precise polygon model of our object. The first step models the construction of a 2-D HR image including the moving object, whereas the second step models the image degradation as a result of the physical properties of our camera system.

A. 2-D HR Scene

We model a camera's field-of-view (the scene) at frame t as a properly sampled 2-D HR image. Each frame consists of pixels without significant degradation due to motion, blur or noise. Let us express this image in lexicographical notation as the vector z_t. The image is constructed from a translated HR background intensity description b, consisting of N_b pixels, and a translated HR foreground intensity description f, consisting of N_f pixels. This is depicted in the left part of Fig. 1. Note that the foreground has a different apparent motion with respect to the camera than the background. The small moving object in the foreground is not only represented by its HR intensity description f, but also by a subpixel precise polygon boundary p with N_v vertices. We impose the following assumptions on the motion of the object: 1) the aspect angle (the angle between the direction of motion and the optical axis of the camera) stays the same and 2) the object is moving with a constant velocity, i.e., the acceleration is zero. These are realistic assumptions if the object is far away from the camera and for a short duration of up to a few seconds.
The latter does not limit the acquisition of a large number of LR frames due to the high frame rate of today's image sensors. At frame t, the HR background b and the HR foreground f are translated and merged into the 2-D HR image z_t, in which the ith pixel is defined by

    z_t(i) = c_t(i) sum_j w^f_{t,ij} f(j) + (1 - c_t(i)) sum_k w^b_{t,ik} b(k)    (1)

with i = 1, ..., N_z and t = 1, ..., T. Here, T is the number of frames. The summation over j represents the translation of foreground pixel f(j) to z_t(i) by bilinear interpolation and, similarly, the summation over k translates background pixel b(k) to z_t(i). The weight c_t(i) represents the foreground contribution at pixel i in frame t, depending upon the polygon boundary p. The foreground contribution varies between 0 and 1, so the corresponding background contribution equals 1 - c_t(i) by definition. Fig. 2 depicts the construction of the tth HR image by masking both the translated background and the translated foreground, after which the constituents are merged into z_t. The polygon boundary p defines the foreground contributions c_t and the background contributions 1 - c_t in HR frame z_t.
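The merging step above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' implementation: the translation/interpolation step is omitted, the partial foreground coverage per HR pixel is approximated by supersampling a ray-casting point-in-polygon test, and all function names are our own.

```python
import numpy as np

def point_in_polygon(px, py, poly):
    # Ray-casting (even-odd) test; poly is a (V, 2) array of vertices.
    x, y = poly[:, 0], poly[:, 1]
    x2, y2 = np.roll(x, -1), np.roll(y, -1)
    cond = (y > py) != (y2 > py)                 # edges crossing the ray
    denom = np.where(y2 == y, 1e-12, y2 - y)     # avoid division by zero
    xint = x + (py - y) * (x2 - x) / denom
    return np.count_nonzero(cond & (px < xint)) % 2 == 1

def coverage(poly, shape, ss=4):
    # Fractional foreground contribution c(i) per HR pixel,
    # approximated by ss x ss supersampling of the polygon mask.
    H, W = shape
    c = np.zeros((H, W))
    offs = (np.arange(ss) + 0.5) / ss
    for i in range(H):
        for j in range(W):
            hits = sum(point_in_polygon(j + ox, i + oy, poly)
                       for oy in offs for ox in offs)
            c[i, j] = hits / ss**2
    return c

def compose(f, b, c):
    # Eq.-(1)-style merge: z = c*f + (1-c)*b (translations omitted).
    return c * f + (1.0 - c) * b
```

With a unit foreground on a zero background, the composed image equals the coverage map itself, which makes the partial area effect at the boundary directly visible.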

Fig. 2. Flow diagram illustrating the masking of foreground and background constituents and the merging thereof into the HR image z. The polygon boundary p is superimposed on the background contributions (1 - c) for visualization purposes only. Note that in the weight images c and (1 - c), black (= 0) indicates no contribution, white (= 1) indicates full contribution and greys indicate a partial contribution.

B. Camera Model

A LR camera frame is obtained by applying the camera model to the 2-D HR image representing the camera's field-of-view. The camera model comprises two types of image blur, sampling, and degradation by noise.

Blur: The optical point-spread-function (PSF), together with the sensor PSF, will cause a blurring in the image plane. In this paper, the optical blur is modeled by a Gaussian function with standard deviation sigma_psf. The sensor blur is modeled by a uniform rectangular function representing the fill-factor of each sensor element. A convolution of both functions represents the total blurring function.

Sampling: The sampling as depicted in Fig. 1 reflects the pixel pitch only. The integration of photons over the photosensitive area of a pixel is accounted for by the aforementioned sensor blur.

Noise: The temporal noise in the recorded data is modeled by additive, independent and identically distributed Gaussian noise samples with standard deviation sigma_n. For the recorded data used, independent additive Gaussian distributed noise is a sufficiently accurate noise model. Other types of noise, such as fixed pattern noise (FPN) and bad pixels, are not explicitly modeled. For applications where FPN becomes a hindrance, it is advised to correct the captured data prior to SR reconstruction using a scene-based non-uniformity correction algorithm, such as the one proposed in [14].
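The blur-sample-noise chain of the camera model can be sketched as follows. The separable Gaussian implementation, reflective padding, sampling phase, and a 100% fill-factor box of one LR pixel are our own illustrative choices, not specifics taken from the paper.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum 1.
    r = int(np.ceil(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur2d(img, k1d):
    # Separable convolution with reflective padding.
    pad = len(k1d) // 2
    out = np.pad(img, pad, mode="reflect")
    out = np.apply_along_axis(np.convolve, 0, out, k1d, "same")
    out = np.apply_along_axis(np.convolve, 1, out, k1d, "same")
    return out[pad:-pad, pad:-pad]

def camera_model(z, zoom=4, sigma_psf=1.0, sigma_n=0.0, rng=None):
    # Optical blur (Gaussian, sigma given in LR pixels), sensor blur
    # (box of `zoom` HR pixels, i.e. 100% fill-factor), decimation at
    # the pixel pitch, and additive Gaussian noise.
    y = blur2d(z, gaussian_kernel(sigma_psf * zoom))
    y = blur2d(y, np.ones(zoom) / zoom)
    y = y[zoom // 2::zoom, zoom // 2::zoom]   # pixel-pitch sampling
    if sigma_n > 0:
        rng = rng if rng is not None else np.random.default_rng(0)
        y = y + rng.normal(0.0, sigma_n, y.shape)
    return y
```

Because both kernels are normalized, a constant HR scene maps to the same constant at LR, which is a quick sanity check for any implementation of the model.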
All in all, the observed mth LR pixel from frame t is modeled as follows:

    ŷ_t(m) = sum_i v_{mi} z_t(i) + n_t(m)    (2)

for m = 1, ..., N_y and t = 1, ..., T. Here, N_y denotes the number of LR pixels in a frame. The weight v_{mi} represents the contribution of HR pixel z_t(i) to estimated LR pixel ŷ_t(m). Each contribution is determined by the blurring and sampling of the camera. n_t(m) represents an additive, independent and identically distributed Gaussian noise sample with standard deviation sigma_n.

III. DESCRIPTION OF PROPOSED METHOD

The proposed SR reconstruction method can be divided into three parts: 1) applying SR reconstruction to the background for subsequent detection of moving objects from the residue between the observed LR frame and a simulated LR frame based upon the estimated HR background at that instance; 2) fitting a trajectory model to the detected instances of the moving object through the image sequence to obtain subpixel precise object registration; and 3) obtaining a HR object representation, comprised of a subpixel precise boundary and a HR intensity description, by solving an inverse problem based upon the model of Section II. We start with the third step, because it is the key innovative part of the proposed method.

A. SR Reconstruction of a Small Moving Object

To find the optimal HR description of the object (consisting of a polygon boundary p and a HR intensity description f), we solve an inverse problem based upon the camera observation

model described in (1) and (2). To favor sparse solutions of this ill-posed problem, we added two regularization terms: one to penalize intensity transitions in the HR intensity description and one to avoid unrealistically wild object shapes. These observations give rise to the following cost function:

    C(p, f) = (1 / (N sigma_n^2)) sum_{t,m} (y_t(m) - ŷ_t(m))^2 + lambda_f Gamma_f(f) + lambda_p Gamma_p(p)    (3)

where the first summation term represents the normalized data misfit contributions for all pixels. Normalization is performed with respect to the total number of LR pixels N and the noise variance sigma_n^2. Here, y_t(m) denotes the measured intensities of the observed LR pixels and ŷ_t(m) the corresponding estimated intensities obtained using the forward model of Section II. Although the estimated intensities also depend upon the background, only p and f are varied to minimize (3). The HR background is estimated in advance, as described in Section III-B.

Fig. 3. Two examples to illustrate the expression for polygon regularization Gamma_v at vertex v of polygon p. (a) Gamma_v is minimal for phi_v = pi; (b) Gamma_v is maximal for phi_v = 0 or phi_v = 2 pi.

The second term Gamma_f of the cost function is a regularization term which favors sparse solutions by penalizing the amount of intensity variation within the object according to a criterion similar to the bilateral total variation (BTV) criterion [15]. Here, S_x^l is the shift operator that shifts f by l pixels in the horizontal direction, whereas S_y^m shifts f by m pixels in the vertical direction. The actual minimization of the cost function is done in an iterative way by the Levenberg-Marquardt (LM) algorithm [16]. This optimization algorithm assumes that the cost function has a first derivative that exists everywhere. However, the L1-norm used in the TV criterion does not satisfy this assumption. Therefore, we introduce the hyperbolic norm

    |x|_h = sqrt(x^2 + k^2) - k    (4)

This norm has the same properties as the L1-norm for large values, and it has a first (and second) derivative that exists everywhere. For all experiments the same fixed value of k is used.

The third term Gamma_p of (3) constrains the shape of the polygon by penalizing the variation of the polygon boundary. Regularization is needed to penalize unwanted protrusions, such as spikes, which cover a very small area compared to the total object area. This constraint is embodied by the measure Gamma_v, which is small when the polygon boundary is smooth, with

    Gamma_v = 1 / A_v,  A_v = (1/2) |e_v| |e_{v+1}| sin(phi_v / 2)    (5)

Gamma_v is the inverse of A_v, which is the area spanned by the edges e_v and e_{v+1} at vertex v and half the angle phi_v between those edges, as indicated by the right part of (5). From example (a) in Fig. 3 it is clear why the area is calculated with half the angle phi_v: if we would take the full angle, A_v would be zero for phi_v = pi, which would result in an infinite Gamma_v for a perfectly smooth boundary. Example (b) shows that the measure becomes very large for small angles, i.e., sharp protrusions. Note that this measure also becomes very large for phi_v close to 2 pi (an inward pointing spike). Note that in (3), Gamma_p is normalized by a multiplication with the square of the mean edge length (L / N_v)^2, with N_v the number of vertices and L the total edge length of p. This normalization prevents extensive growth of edges.

As mentioned previously, the actual minimization of the cost function is performed in an iterative way by the Levenberg-Marquardt algorithm [16]. To allow this, we put the cost function of (3) in the LM framework, which expects a format like a sum of squared residues between a measurement g and an estimate ĝ depending upon the parameter vector. In general, it is straightforward to store all residues in a vector which forms the input of the LM algorithm. In our case, we have to be aware of the different norms in each of the terms of (3). The residue vector looks like

    r = [ r_data ; r_BTV ; r_poly ]    (6)

where r_data contains the normalized data misfit residues (one per observed LR pixel), r_BTV the regularization residues of the second term, and r_poly those of the polygon constraint; the length of r is the sum of these three counts. The cost function in (3) is iteratively minimized to simultaneously find the optimal p and f. A flow diagram of this iterative minimization procedure in steady state is depicted in Fig. 4.
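The hyperbolic norm and a BTV-style penalty built on it can be sketched as follows. The window size, decay factor and the value of k below are placeholder choices for illustration; the paper does not commit to these exact values here.

```python
import numpy as np

def hyperbolic(x, k=0.01):
    # Smooth surrogate for |x|: behaves like the L1-norm for |x| >> k,
    # but has first and second derivatives everywhere.
    return np.sqrt(x * x + k * k) - k

def btv(f, p=2, alpha=0.7, k=0.01):
    # Bilateral-TV-style penalty with the hyperbolic norm in place of
    # L1, so a Levenberg-Marquardt-type optimizer can be applied.
    cost = 0.0
    for l in range(-p, p + 1):
        for m in range(-p, p + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(f, l, axis=0), m, axis=1)
            cost += alpha ** (abs(l) + abs(m)) * hyperbolic(f - shifted, k).sum()
    return cost
```

A flat intensity description incurs (numerically) zero penalty, while sharp intensity transitions are charged approximately their absolute magnitude, which is the behavior the modified total variation constraint relies on.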
Here, the cost function refers to (3) and the camera model to formulas (1) and (2). Note that the measured data used for the minimization procedure contains only a small region-of-interest (ROI) around the moving object in each frame. The optimization scheme depicted in Fig. 4 has to be initialized with an object boundary and an object intensity description. These can be obtained in several ways; we have chosen to use a simple and robust initialization that proved to initialize

the method close enough to the global minimum to permit convergence to the global minimum in most practical cases. The initial object boundary is obtained by first calculating the frame-wise median width and the frame-wise median height of the mask in the object mask sequence (defined in the next section). Subsequently, we construct an elliptical object boundary from the previously calculated width and height. Upon initialization, the vertices are evenly distributed over the ellipse. The number of vertices is fixed during minimization. The object intensity description is initialized by a constant intensity equal to the median value over all masked pixel intensities in the measured LR sequence.

Fig. 4. Flow diagram illustrating the steady state of estimating a HR description of a moving object (p and f). y denotes the measured intensities in a region of interest containing the moving object in all frames after registration and ŷ denotes the corresponding estimated intensities at iteration i. Note that the initial HR object description (p and f) is derived from the measured LR sequence and the object mask sequence.

Furthermore, the optimization procedure is performed in two steps. The first step consists of the initialization described previously, followed by a few iterations of the LM algorithm. We found during experimentation that using more than five iterations has no effect on the final result. After this step, the intensity description often contains large gradients perpendicular to the estimated object boundary, where pixels outside the contour still contain the initial values. As this can cause the optimization to get stuck in local minima, a partial reinitialization step is proposed. In this step, all intensities of HR foreground pixels adjacent to a mixed boundary pixel but located completely inside the object boundary are propagated outwards.
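The elliptical initialization described above can be sketched as follows. The mask/frame array layout and the default of eight vertices are our own assumptions for illustration.

```python
import numpy as np

def init_object(masks, frames, n_vertices=8):
    # masks: (T, H, W) boolean object masks; frames: (T, H, W) LR
    # intensities. Boundary: an ellipse built from the frame-wise
    # median mask width/height, with vertices evenly distributed;
    # intensity: the median over all masked pixel intensities.
    widths, heights = [], []
    for m in masks:
        ys, xs = np.nonzero(m)
        if xs.size:
            widths.append(xs.max() - xs.min() + 1)
            heights.append(ys.max() - ys.min() + 1)
    a = np.median(widths) / 2.0    # semi-axis in x
    b = np.median(heights) / 2.0   # semi-axis in y
    theta = 2 * np.pi * np.arange(n_vertices) / n_vertices
    poly = np.stack([a * np.cos(theta), b * np.sin(theta)], axis=1)
    f0 = np.median(frames[masks])
    return poly, f0
```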
After this partial reinitialization, we continue the iterative procedure until convergence, or for a fixed number of iterations to be determined in a simulation experiment.

B. SR Reconstruction of Background and Moving Object Detection

A small moving object causes a temporary change of a small localized set of pixel intensities. In previous work [17], we presented a framework for the detection of moving point targets against a static cluttered background. A robust pixel-based SR reconstruction method computes a HR background image by treating the local intensity variations caused by the small object as outliers. After registration of the HR background to a recorded LR frame, we apply the camera model to simulate the LR frame with identical aliasing artifacts as in the recorded LR frame, but without the small object. Thresholding the absolute value of the residue image yields a powerful tool for object detection, provided that the apparent motion is sufficient given the number of frames to be used in background reconstruction. Assuming T LR frames containing a moving object of width w (expressed in LR pixels), the apparent lateral motion must exceed 2w/T LR pixels/frame for a proper background reconstruction, so that the object occludes any given background pixel in fewer than half of the frames.

Several robust SR reconstruction methods have been reported [15], [18], [19]. We choose the method developed by Zomet et al. [19], which is robust to intensity outliers, such as those caused by small moving objects. This method employs the same camera model as presented in (2). Its robustness is introduced by a robust back-projection

    b^(n+1) = b^(n) + lambda median_t { W_t^T ( y_t - W_t b^(n) ) }    (7)

where median_t denotes a scaled pixel-wise median over the frames and W_t is the projection operator from HR image to LR frame t.
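One iteration of a median-based robust back-projection in this spirit can be sketched as below. A simple box-average decimation stands in for the camera model and its transpose; the operators of Zomet et al. additionally include the full blur and shift, so this is only a structural illustration.

```python
import numpy as np

def downsample(x, s):
    # Box-average decimation standing in for the camera model W_t.
    H, W = x.shape
    return x.reshape(H // s, s, W // s, s).mean(axis=(1, 3))

def upsample(x, s):
    # Transpose of box-average decimation (replication / s^2).
    return np.repeat(np.repeat(x, s, 0), s, 1) / s**2

def robust_sr_step(b, frames, s, lam=1.0):
    # One iteration of median-based robust back-projection: per-frame
    # back-projected residuals are combined by a scaled pixel-wise
    # median, so a small moving object acts as an outlier and is
    # rejected from the background estimate.
    grads = np.stack([upsample(y - downsample(b, s), s) for y in frames])
    return b + lam * frames.shape[0] * np.median(grads, axis=0)
```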
A LR representation of the background, obtained by applying the camera model to the shifted HR background image b, is compared to the corresponding LR frame of the recorded image sequence

    r_t(m) = (W_t b)(m) - y_t(m)    (8)

where W_t represents the blur and down-sample operation, (W_t b)(m) is the mth pixel of the shifted HR background projected into frame t, and y_t(m) is the recorded intensity of the mth pixel in frame t. All difference pixels constitute a residual image sequence in which a moving object can be detected. Thresholding this residual image sequence, followed by tracking, improves the detectability for low residue-to-noise ratios. Threshold selection is done with the chord method from Zack et al. [20], which is illustrated in Fig. 5. With this histogram-based method, an object mask sequence results for t = 1, ..., T and m = 1, ..., N_y, with T the number of observed LR frames and N_y the number of pixels in each LR frame.

After thresholding, multiple events may have been detected in each frame. We apply tracking to link the most similar event in each frame to a so-called reference event. This reference event is defined by the median width, the median height and the median residual energy of the largest event in each frame (the median is computed frame-wise). Next, we search in each frame for the event with the smallest normalized Euclidean distance w.r.t. the reference event, shown in (9) at the bottom of the next page, which yields the index of the event in frame t with the smallest normalized Euclidean distance to the reference event. After this tracking step, an object mask sequence is generated with at most one event in each frame: the one corresponding to the object giving rise to the reference event. Note that a frame can be empty if no event was detected.
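The chord (triangle) threshold selection of Zack et al. can be sketched on a histogram as follows. The distance to the chord is computed up to a constant normalization, which does not change the location of the maximum.

```python
import numpy as np

def chord_threshold(hist):
    # Zack's triangle/chord method: draw a chord from the histogram
    # peak to the far end of its tail; threshold at the bin whose
    # distance to the chord is maximal.
    hist = np.asarray(hist, dtype=float)
    peak = int(np.argmax(hist))
    tail = len(hist) - 1 if peak < len(hist) / 2 else 0
    x0, y0, x1, y1 = peak, hist[peak], tail, hist[tail]
    lo, hi = (x0, x1) if x0 < x1 else (x1, x0)
    xs = np.arange(lo, hi + 1)
    # Distance of each histogram point to the line through
    # (x0, y0)-(x1, y1), up to a constant normalization factor.
    d = np.abs((y1 - y0) * xs - (x1 - x0) * hist[lo:hi + 1]
               + x1 * y0 - y1 * x0)
    return int(xs[np.argmax(d)])
```

For a residual-image histogram with a large near-zero peak and a thin tail of object pixels, the maximum-distance bin lands just past the knee of the peak, which is the intended behavior.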

Fig. 5. Threshold selection by the chord method is based upon finding the histogram value that maximizes the distance D between the histogram and the chord. The corresponding value T is used as threshold value.

C. Moving Object Registration

The object mask sequence, obtained after thresholding and tracking, gives a rough quantized indication of the position of the object in each frame. For performing SR reconstruction, a more precise, subpixel registration is needed. For large moving objects, which contain a sufficient number of internal pixels with sufficient structure, gradient-based registration [21] can be performed. In the setting of small moving objects, this is usually not the case and another approach is needed. Assuming a linear motion model for a moving object in the real world, the projected model can be fitted to the sequence of detected object positions. We assume a constant velocity without acceleration in the real world, which seems realistic given the nature of small moving objects: the objects are far away from the observer and will have a small acceleration within the frames due to the high frame rate of today's image sensors.

First, the position of the object in each frame is determined by computing the weighted center-of-mass (COM) of the masked pixels as follows:

    x̂_t = sum_j m_t(j) y_t(j) x_j / sum_j m_t(j) y_t(j)    (10)

with j running over the LR pixels in frame t, x_j the location of pixel j, m_t(j) the corresponding mask value (0 or 1) and y_t(j) the measured intensity.

To fit a trajectory, all object positions in time must be known w.r.t. a reference point in the background of the scene. This is done by adding the previously obtained apparent background translation to the calculated object position for each frame. To obtain all object positions with subpixel precision, a robust fit to the measured object positions is performed. Assuming constant motion, all object positions can be described by a reference object position x_0 and a translation per frame Delta x. Both the reference object position and the translation of the object are estimated by minimizing the following cost function:

    C(x_0, Delta x) = sum_t ( 1 - exp( -d_t^2 / (2 sigma_s^2) ) )    (11)

where d_t denotes the Euclidean distance in LR pixels between the measured object position and the estimated object position at frame t:

    d_t = || x̂_t - (x_0 + t Delta x) ||    (12)

The cost function in (11) is known as the Gaussian norm [22]. This norm is robust to outliers (e.g., false detections in our case). The smoothing parameter sigma_s is set to 0.5 LR pixel. Minimizing the cost function in (11) with the Levenberg-Marquardt algorithm results in an accurate subpixel precise registration of the moving object. If, e.g., 50 frames are used, the registration precision is improved by a factor of approximately 7 (about the square root of the number of frames).

D. Computational Complexity

The computational complexity is dominated by calculating (3), i.e., computing the SR reconstruction of the HR foreground. At every iteration of the LM optimization procedure, the cost function has to be calculated for variations in the estimated parameters to estimate the gradient w.r.t. the parameters to be solved. The number of cost function evaluations therefore grows with the number of HR foreground intensities N_f, the number of vertices N_v, and the number of LM iterations. A reconstruction as described in Section IV-B using Matlab code took 37 min on a Pentium-4, 3.2-GHz processor under Windows. The processing time can be drastically reduced if the partial derivatives of the cost function w.r.t. the HR foreground intensities are precomputed off-line and stored. In this case, the per-iteration cost is governed by the vertex parameters only. Note that typically N_f is much larger than the number of vertex parameters, thereby forecasting a reduction in the computation time by one order of magnitude.

IV. EXPERIMENTS ON SIMULATED DATA

The proposed SR reconstruction method for small moving objects is first applied to simulated data to study its behavior under controlled conditions. In a series of experiments, we tune the regularization parameters and the number of iterations. Then we study the convergence, the robustness in the presence of clutter and noise, and the robustness against violations of the underlying linear motion model.

A. Generating the Simulated Car Sequence

The simulated car sequence was generated to resemble the real-world sequence of the next section as closely as possible. We simulated an under-sampled image sequence containing a small moving car using the camera model as depicted in Fig. 1. The parameters of the camera model were chosen to match the sensor properties of the real-world system, i.e., optical blurring (a Gaussian kernel with standard deviation sigma_psf expressed in LR pixels), sensor blurring (a rectangular uniform filter with a 100% fill-factor) and Gaussian distributed noise to resemble the actual noise conditions (see below). The car follows a linear motion trajectory with zero acceleration. It consists of two internal intensities, which are both above the median background intensity. The low object intensity is exactly in between the median background intensity and the high object intensity. The boundary of the car is modeled by a polygon with seven vertices. Fig. 7(a) shows a HR image of the simulated car, which serves as a ground truth for all SR reconstruction results. Fig. 7(b) and (c) show two LR image frames in which the car covers approximately 6 pixels. All 6 pixels are so-called mixed pixels and contain contributions of the fore- and background. The image quality is further quantified by the signal-to-noise ratio (SNR) and the signal-to-clutter ratio (SCR).
The SNR is a measure for the contrast between the object and the time-averaged local background compared to the stochastic variations called noise. The SNR is defined as

    SNR = 20 log10( (1/T) sum_t | mu^f_t - mu^b_t | / sigma_n )    (13)

with T the number of frames, mu^f_t the mean foreground intensity in frame t and mu^b_t the mean local background intensity in frame t. mu^f_t is calculated by taking the mean intensity of LR pixels that contain at least 50% foreground, and mu^b_t is defined by the mean intensity of all 100% background pixels in a small neighborhood around the object. The SCR is a measure for the contrast between the object and the time-averaged local background compared to the variation in the local background. The SCR is defined as

    SCR = 20 log10( (1/T) sum_t | mu^f_t - mu^b_t | / ( (1/T) sum_t sigma^b_t ) )    (14)

with sigma^b_t the standard deviation of the local background in frame t. In the LR domain, the SNR is 29 dB and the SCR is 14 dB. These are realistic values, derived from the real-world image sequence of the next section.

In the next subsections, different experiments on the simulated data are performed. For all experiments, 50 LR frames are used to estimate the HR foreground and 85 LR frames are used to estimate the HR background. In all used reconstruction methods, the zoom factor is set to 4 and the camera parameters are the same as in generating the simulated data.

Fig. 6. NMSE between the SR result and the ground truth as a function of the regularization parameters lambda_f and lambda_p. Here both parameters are kept constant throughout all iterations in step 1 and step 2.

B. Test 1: Tuning the Algorithm

Our algorithm contains several parameters, such as the camera parameters, the regularization parameters, and a stopping criterion. Although the camera parameters, such as the PSF and fill-factor, can be estimated rather well, the regularization parameters lambda_f and lambda_p are far more difficult to tune. To study the influence of the regularization parameters on the final result and to select the parameters for later use, a few experiments are performed on 50 LR frames of the simulated car sequence.
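The SNR and SCR contrast measures can be computed from per-frame foreground/background statistics as follows (a sketch; the argument names are ours):

```python
import numpy as np

def snr_scr_db(fg_means, bg_means, bg_stds, sigma_n):
    # fg_means / bg_means: per-frame mean foreground / local background
    # intensities; bg_stds: per-frame local background standard
    # deviations (clutter); sigma_n: temporal noise standard deviation.
    contrast = np.mean(np.abs(np.asarray(fg_means, float)
                              - np.asarray(bg_means, float)))
    snr = 20 * np.log10(contrast / sigma_n)          # contrast vs. noise
    scr = 20 * np.log10(contrast / np.mean(bg_stds)) # contrast vs. clutter
    return snr, scr
```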
In this experiment, we study the influence of the regularization parameters lambda_f and lambda_p on the SR result for the simulated car sequence with a SNR of 29 dB and a SCR of 14 dB. Note that both regularization parameters are kept constant during both steps of the optimization procedure. We use the normalized mean squared error (NMSE) between the SR result of the car and its ground truth as a figure-of-merit. Note that this measure considers only the foreground intensities; the background intensities are set to zero:

    NMSE = (1/N) sum_i ( f̂(i) - f(i) )^2 / max(f)^2    (15)

with N the number of HR pixels, f̂ the estimated foreground intensities using SR and f the ground truth. Normalization is done with the squared maximum value of f. From the result in Fig. 6 it can be seen that lambda_f has by far the largest influence on the NMSE and must therefore be set with care, whereas the value for lambda_p is not critical. In a broad range around the selected values, using more than three to five iterations in step 1 did not change the final result. After 10 to 15 iterations in step 2, the solution converged. Hence, we set the

8 2908 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 19, NO. 11, NOVEMBER 2010 Fig. 7. Four times SR reconstruction of a simulated under-sampled image sequence containing a small moving car. (a) HR image representing the scene serving as ground truth; (b), (c) two typical LR frames (5 2 4 pixels) of the moving car; (d) 42 SR by a robust state-of-the-art method [18]; and (e) 42 SR by the proposed method. maximum number of iterations in step 1 to five and in step 2 to 15. C. Test 2: Comparison With a State-of-the-Art Pixel-Based Technique To assess the value of the proposed algorithm we compare it with the visually best result obtained by a robust state-of-the-art pixel-based SR technique [18]. Note that the registration is performed by the trajectory fitting technique of this paper (to 85 LR frames) to put both methods on equal footing. The state-ofthe-art pixel-based SR result is shown in Fig. 7(d) and bears very little resemblance to the ground truth. This is no surprise since the partial area effect at the boundary of the object which affects all object pixels is not accounted for. Using the optimal regularization parameters in both steps:, we performed a SR reconstruction with the proposed method to exactly the same LR image sequence. The result is depicted in Fig. 7(e) and shows a very good resemblance to the ground truth. Subtle changes along the boundary and along the intensity transition are caused by partial area effects due to the random placement of the reconstructed object w.r.t. the HR grid. The object boundary is approximated with 8 vertices, which is one more than used for constructing the data, so the boundary is slightly over-fitted. Comparing the results in Fig. 7(d) and (e) shows that the result of our proposed method is clearly superior to the pixel-based method of Pham [18]. D. 
Test 3: Robustness in the Presence of Clutter and Noise To investigate the robustness of our method under different conditions, we varied 1) the clutter amplitude of the local background and 2) the noise level of the simulated car sequence described in Section IV-A. The clutter of the background is varied by multiplying the background with a certain factor after subtracting the median intensity. Afterwards the median intensity is added again to return to the original median intensity. The object intensities as well as the size and shape of the car remain the same. All parameters that are used for the reconstruction are set to the same values as in test 2 in Section IV-C. The quality of the different SR results is expressed by the NMSE w.r.t. the ground truth as before. Fig. 8 depicts the NMSE as a function of SNR and SCR. We divided the results in three different categories: good, medium and bad.for each region a typical SR result is displayed to give a visual impression of the performance. It is clear that the SR result in the good region, obtained for values of the SNR and SCR that occur in practice, bears a good resemblance to the ground truth. Note that the visible background in these pictures is not used to calculate the NMSE. Fig. 8 shows that the performance decreases for a decreasing SNR. Furthermore, the boundary between the good and medium region indicates a decrease in performance under high clutter conditions. E. Test 4: Robustness Against Variations in Motion The proposed method assumes that the object moves with a constant speed and appears in all frames to be used for reconstruction with the same aspect angle. To demonstrate the robustness of our method to violations on these assumptions, two experiments are performed. The first experiment shall determine the robustness w.r.t. an acceleration of the object. The second experiment shall establish the robustness w.r.t. scaling of the object. We modified the simulated car sequence of Section IV-A. 
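The clutter manipulation of Test 3 (subtract the median, scale the remainder, add the median back) and the NMSE figure-of-merit of (15) can be sketched as follows. The function names are ours, and `nmse` assumes both inputs already have their background contributions set to zero, as stated for (15).

```python
import numpy as np

def scale_clutter(background, factor):
    """Vary the background clutter as in Test 3: remove the median intensity,
    amplify the residual variation, then restore the original median."""
    med = np.median(background)
    return (background - med) * factor + med

def nmse(est_fg, gt_fg):
    """Normalized mean squared error of (15), foreground pixels only.
    Normalization uses the squared maximum of the ground truth."""
    err = np.mean((est_fg - gt_fg) ** 2)
    return err / gt_fg.max() ** 2
```

Note that `scale_clutter` leaves the median intensity of the background unchanged, so only the clutter amplitude (and hence the SCR) varies between test conditions.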
In the first experiment an acceleration, expressed in LR pixels/frame², is added; it contributes a quadratic term to the object position as a function of the frame number. In the second experiment a scale factor, defined as the vehicle size in the last frame divided by the vehicle size in the first frame, is applied. A scale factor of 0.8 indicates that the observed length of the car varies from 3 LR pixels in the first frame to 2.4 LR pixels in the last frame. The NMSE as a function of acceleration and scaling is depicted in Fig. 9.

Fig. 8. NMSE for the SR results of the simulated car sequence as a function of the SNR and SCR. We have roughly divided the space into three categories (good, medium, bad) and provided a typical SR result for each category.

Fig. 9. NMSE for the SR results of the simulated car sequence as a function of (a) acceleration and (b) object scaling.

Fig. 10. Top view of the acquisition geometry used to capture the real-world data.

Fig. 9(a) shows that a larger acceleration causes a larger error; an acceptable decrease in performance is obtained for sufficiently small accelerations. The error of a constant-velocity model fitted to a constant-acceleration motion follows a parabolic model. This parabola is symmetric, and evaluating the deviation at the mid-point between its top and an end point gives, for the largest acceptable acceleration, a maximum translational error of 0.16 LR pixel. For the second experiment, Fig. 9(b) shows that a maximum scaling of 15% is allowed with an acceptable performance loss. This is a 7.5% maximum scale change from the mean scale; for a 3-pixel object this translates to a maximum shift error of 0.11 LR pixel for both the front and back object edges relative to the center-of-mass position. Note that both experiments have well-comparable maximum position errors of 0.16 and 0.11 LR pixel, rather consistent with the requirement that the registration error for SR should be smaller than half the HR pixel pitch. This can be deduced from the following argument. Critical sampling of bandlimited signals can be modeled by a Gaussian low-pass filter followed by sampling with a pitch of 1.1 times the standard deviation of the Gaussian PSF [23]. In [21], we showed that Gaussian noise in the LR image sequence leads to Gaussian-distributed registration estimates. These registration errors act as an additional blur, even for sequences of infinite length [24].
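The constant-velocity-fit argument can be checked numerically: fitting a line to a uniformly accelerated trajectory leaves a symmetric parabolic residual whose extremes bound the registration error, and registration jitter that is half the optical blur enlarges the combined Gaussian blur by only about 12%. The numbers below (50 frames, an illustrative acceleration of 0.001 LR pixel/frame², unit optical blur) are our own choices, not values from the experiments.

```python
import numpy as np

# Fitting a constant-velocity model to a constant-acceleration trajectory
# leaves a parabolic residual, symmetric about the mid-frame.
N, a = 50, 0.001                      # frames; acceleration in LR pixels/frame^2
t = np.arange(N, dtype=float)
x = 0.5 * a * t**2                    # true object position per frame
coef = np.polyfit(t, x, 1)            # least-squares constant-velocity fit
residual = x - np.polyval(coef, t)    # parabolic registration error per frame
max_err = np.abs(residual).max()      # worst-case translational error (LR px)

# Registration jitter acts as additional Gaussian blur; with sigma_reg half
# the optical blur the combined blur grows by only ~12%, so a "substantially
# smaller" registration error leaves the image quality after SR intact.
sigma_opt = 1.0
sigma_reg = 0.5 * sigma_opt
growth = np.hypot(sigma_opt, sigma_reg) / sigma_opt   # combined / optical blur
```

For the discrete least-squares fit the endpoint residual works out to a(N-1)(N-2)/12, so the worst-case error scales linearly with the acceleration and quadratically with the sequence length, matching the trend in Fig. 9(a).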
If the standard deviation of this registration-induced image blur is substantially (say two times) smaller than the optical image blur, it will not affect the image quality after SR.

V. EXPERIMENT ON REAL-WORLD DATA

To demonstrate the potential of the proposed method under realistic conditions, we applied it to a real-world image sequence. Real-world data permits us to study the impact of changes in object intensities caused by variations in reflection, lens aberrations, small changes in aspect angle of the object

along the trajectory, and practical violations of the linear motion assumption.

Fig. 11. Four times SR reconstruction of a vehicle captured by an infrared camera (50 frames) at a large distance: (a) and (c) show the captured LR data; (b) and (d) show the SR reconstruction result obtained by the proposed method. (a) LR reference frame (64 × 64 pixels); (b) SR with zoom factor 4; (c) close-up of the moving object in (a); (d) close-up of the moving object in (b).

The data for this experiment were captured with an Amber Radiance infrared camera. The sensor is an indium antimonide (InSb) detector operating in the 3-5 µm wavelength band. Furthermore, we use optics with a focal length of 50 mm and a viewing angle of 11.2° (also from Amber Radiance). We captured a vehicle (Jeep Wrangler) at 15 frames/second, driving at a constant velocity (approximately 1 pixel/frame apparent velocity) approximately perpendicular to the optical axis of the camera. A top view of the acquisition geometry is depicted in Fig. 10. During image capture, the platform of the camera was gently shaken to provide subpixel motion of the camera, and panning was used to keep the moving vehicle within the field of view. We selected the distance such that the vehicle appeared small in the image plane, covering approximately 5 × 2 LR pixels. Fig. 11(a) shows a typical LR frame (64 × 64 pixels); a close-up of the vehicle is depicted in Fig. 11(c). The vehicle is driving from left to right at a distance of approximately 1150 meters. The SNR of the vehicle against the background is 30 dB and the SCR is 13 dB; the simulation experiments showed that for these values our method is capable of delivering a good reconstruction. Fig. 11(b) shows the result after applying our SR reconstruction method, with a close-up of the car in Fig. 11(d). The HR background is reconstructed from 85 frames with zoom factor 4.
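The reconstructions rely on an explicit forward camera model. A minimal sketch of such a model is given below: optical blur by a supplied PSF kernel, integration over square detector areas at 100% fill-factor, and subsampling by the zoom factor. The function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def observe(hr, psf_kernel, zoom):
    """Sketch of an LR observation model (our naming): optical blur by a
    square, normalized PSF kernel, then a 100% fill-factor square detector
    aperture realized as averaging zoom x zoom HR pixels per LR pixel."""
    # optical blur: direct correlation with the PSF (edge-padded, 'same' size)
    k = psf_kernel.shape[0]
    pad = k // 2
    padded = np.pad(hr, pad, mode="edge")
    blurred = np.zeros_like(hr, dtype=float)
    for i in range(hr.shape[0]):
        for j in range(hr.shape[1]):
            blurred[i, j] = np.sum(padded[i:i + k, j:j + k] * psf_kernel)
    # sensor blur + downsampling: average each zoom x zoom block of HR pixels
    h, w = blurred.shape
    lr = blurred[:h - h % zoom, :w - w % zoom]
    return lr.reshape(h // zoom, zoom, w // zoom, zoom).mean(axis=(1, 3))
```

Averaging the block is equivalent to a uniform rectangular sensor blur followed by point sampling, which is the 100% fill-factor case; a smaller fill-factor would average only the central part of each block.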
The camera blur is modeled by Gaussian optical blurring followed by uniform rectangular sensor blurring (100% fill-factor). The HR foreground is reconstructed from 50 frames with zoom factor 4 using the same camera parameters. The object boundary is approximated with 12 vertices, and the same regularization settings are used in both step 1 and step 2. Note that much more detail is visible in the SR result than in the LR image: the shape of the vehicle is well pronounced and the hot engine of the vehicle is clearly visible. For comparison, Fig. 12 displays the SR result next to an image of the vehicle captured at a 4× shorter distance. Be aware that the intensity mapping is not the same for both images, so a grey level in Fig. 12(a) may not be compared with the same grey level in Fig. 12(b). Notice that Fig. 12(b) was captured at a later time. Differences in environmental conditions (position of the sun, clouds, etc.), heating of the engine and vehicle as well as

the pose of the vehicle contribute to the observed differences between the two images. The shape of the vehicle is reconstructed very well and the hot engine is located at a similar place.

Fig. 12. SR result with zoom factor 4 of a jeep in (a) compared with the same jeep captured at a 4× shorter distance in (b). (a) 4× SR result. (b) Object 4× closer to camera.

VI. CONCLUSION

This paper presents a method for SR reconstruction of small moving objects. The method explicitly models the fore- and background contributions to the partial area effect of the boundary pixels. The main novelty of the proposed SR reconstruction method is the use of a combined object boundary and intensity description of the target object. This enables us to simultaneously estimate the object boundary with subpixel precision and the foreground intensities of the boundary pixels, subject to a modified total variation constraint. This modification permits the use of the Levenberg-Marquardt algorithm for optimizing the cost function; this algorithm is known to converge to the global optimum for a well-behaved cost function and an initial estimate that is not too far away. The proposed multiframe SR reconstruction method clearly improves the visual recognition of small moving objects under realistic imaging conditions in terms of SNR and SCR. We showed that our method performs well in reconstructing a small moving object where a state-of-the-art pixel-based SR reconstruction method [18] fails. The robustness against deteriorations such as clutter and noise, as well as against violations of the linear motion model, was established. Our method not only performs well on simulated data, but also provides an excellent result on a real-world image sequence captured with an infrared camera.

REFERENCES

[1] R. Y. Tsai and T. S. Huang, "Multiframe image restoration and registration," in Advances in Computer Vision and Image Processing, vol. 1. Greenwich, CT: JAI Press, 1984.
[2] M. Ben-Ezra, A. Zomet, and S. K. Nayar, "Video super-resolution using controlled subpixel detector shifts," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 6, Jun. 2005.
[3] A. W. M. van Eekeren, K. Schutte, J. Dijk, D. J. J. de Lange, and L. J. van Vliet, "Super-resolution on moving objects and background," in Proc. IEEE 13th Int. Conf. Image Process., 2006, vol. 1.
[4] A. W. M. van Eekeren, K. Schutte, and L. J. van Vliet, "Super-resolution on small moving objects," in Proc. IEEE 15th Int. Conf. Image Process., 2008, vol. 1.
[5] P. E. Eren, M. I. Sezan, and A. M. Tekalp, "Robust, object-based high resolution image reconstruction from low-resolution video," IEEE Trans. Image Process., vol. 6, no. 10, Oct. 1997.
[6] S. Farsiu, M. Elad, and P. Milanfar, "Video-to-video dynamic super-resolution for grayscale and color sequences," J. Appl. Signal Process., pp. 1-15, 2006.
[7] R. C. Hardie, T. R. Tuinstra, J. Bognart, K. J. Barnard, and E. E. Armstrong, "High resolution image reconstruction from digital video with global and non-global scene motion," in Proc. IEEE 4th Int. Conf. Image Process., 1997, vol. 1.
[8] F. W. Wheeler and A. J. Hoogs, "Moving vehicle registration and super-resolution," in Proc. IEEE Appl. Imagery Pattern Recognit. Workshop, 2007.
[9] M. Irani and S. Peleg, "Improving resolution by image registration," Graph. Models Image Process., vol. 53, 1991.
[10] A. J. Patti, M. I. Sezan, and A. M. Tekalp, "Superresolution video reconstruction with arbitrary sampling lattices and nonzero aperture time," IEEE Trans. Image Process., vol. 6, no. 8, Aug. 1997.
[11] R. J. M. den Hollander, D. J. J. de Lange, and K. Schutte, "Superresolution of faces using the epipolar constraint," in Proc. British Mach. Vis. Conf., 2007.
[12] J. Wu, M. Trivedi, and B. Rao, "High frequency component compensation based super-resolution algorithm for face video enhancement," in Proc. IEEE 17th Int. Conf. Pattern Recognit., 2004, vol. 3.
[13] J. Starck, E. Pantin, and F. Murtagh, "Deconvolution in astronomy: A review," Pub. Astron. Soc. Pacific, no. 114, 2002.
[14] K. Schutte, D. J. J. de Lange, and S. P. van den Broek, "Signal conditioning algorithms for enhanced tactical sensor imagery," in Proc. SPIE: Infrared Imag. Syst.: Design, Anal., Model., and Testing XIV, 2003, vol. 5076.
[15] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, "Fast and robust multi-frame super resolution," IEEE Trans. Image Process., vol. 13, no. 10, Oct. 2004.
[16] J. J. Moré, "The Levenberg-Marquardt algorithm: Implementation and theory," vol. 630. New York: Springer-Verlag, 1978.
[17] J. Dijk, A. W. M. van Eekeren, K. Schutte, D. J. J. de Lange, and L. J. van Vliet, "Super-resolution reconstruction for moving point target detection," Opt. Eng., vol. 47, no. 8, 2008.
[18] T. Q. Pham, L. J. van Vliet, and K. Schutte, "Robust fusion of irregularly sampled data using adaptive normalized convolution," J. Appl. Signal Process., vol. 2006, pp. 1-12, 2006.
[19] A. Zomet, A. Rav-Acha, and S. Peleg, "Robust super-resolution," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2001, vol. 1.
[20] G. W. Zack, W. E. Rogers, and S. A. Latt, "Automatic measurement of sister chromatid exchange frequency," J. Histochem. Cytochem., vol. 25, no. 7, 1977.
[21] T. Q. Pham, M. Bezuijen, L. J. van Vliet, K. Schutte, and C. L. L. Hendriks, "Performance of optimal registration estimators," in Proc. Vis. Inf. Process. XIV, 2005, vol. 5817.
[22] J. van de Weijer and R. van den Boomgaard, "Least squares and robust estimation of local image structure," Int. J. Comput. Vis., vol. 64, no. 2-3, 2005.
[23] P. Verbeek and L. van Vliet, "On the location error of curved edges in low-pass filtered 2-D and 3-D images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 16, no. 7, Jul. 1994.
[24] T. Q. Pham, L. J. van Vliet, and K. Schutte, "Influence of signal-to-noise ratio and point spread function on limits of super-resolution," in Proc. SPIE Image Process.: Algorithms Syst. IV, 2005, vol. 5672.

Adam W. M. van Eekeren (S'00-M'02) received the M.Sc. degree from the Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands, in 2002, and the Ph.D. degree from the Electro-Optics Group within TNO Defence, Security and Safety, The Hague, in collaboration with the Quantitative Imaging Group at the Delft University of Technology, The Netherlands. He did his graduation project within Philips Medical Systems on the topic of image enhancement using morphological operators. Subsequently, he worked for one year at the Philips Research Laboratory on image segmentation using level sets. He is currently a Research Scientist in the Electro-Optics Group, TNO Defence, Security and Safety, where he works on image improvement, change detection, and 3-D reconstruction. His research interests include image restoration, super-resolution, image quality assessment, and object detection.

Klamer Schutte received the M.Sc. degree in physics from the University of Amsterdam in 1989 and the Ph.D. degree from the University of Twente, Enschede, The Netherlands. He held a post-doctoral position with the Delft University of Technology's Pattern Recognition (now Quantitative Imaging) group. Since 1996, he has been employed by TNO, currently as Senior Research Scientist Electro-Optics within the Business Unit Observation Systems. Within TNO he has actively led multiple projects in the areas of signal and image processing. Recently, he has led many projects on super-resolution reconstruction for both international industries and governments, resulting in super-resolution reconstruction based products in active service. His research interests include pattern recognition, sensor fusion, image analysis, and image restoration. He is Secretary of the NVBHPV, the Netherlands branch of the IAPR.

Lucas J. van Vliet (M'02) studied applied physics and received the Ph.D. degree (cum laude) from the Delft University of Technology, Delft, The Netherlands, where he was later appointed Full Professor in multidimensional image analysis. Since 2009, he has been Director of the Delft Health Initiative, head of the Quantitative Imaging Group, and chairman of the Department of Imaging Science & Technology. He was president of the Dutch Society for Pattern Recognition and Image Analysis (NVPHBV) and sits on the boards of the International Association for Pattern Recognition (IAPR) and the Dutch graduate school on Computing and Imaging (ASCI). He has supervised 25 Ph.D. theses and is currently supervising 10 Ph.D. students. He was a visiting scientist at Lawrence Livermore National Laboratories (1987), the University of California San Francisco (1988), Monash University Melbourne (1996), and Lawrence Berkeley National Laboratories (1996). He has a track record of fundamental as well as applied research in the fields of multidimensional image processing, image analysis, and image recognition, and is (co)author of 200 papers and four patents. Prof. van Vliet was awarded the prestigious talent research fellowship of the Royal Netherlands Academy of Arts and Sciences (KNAW) in 1996.


Digital Image Restoration

Digital Image Restoration Digital Image Restoration Blur as a chance and not a nuisance Filip Šroubek sroubekf@utia.cas.cz www.utia.cas.cz Institute of Information Theory and Automation Academy of Sciences of the Czech Republic

More information

An Edge-Based Approach to Motion Detection*

An Edge-Based Approach to Motion Detection* An Edge-Based Approach to Motion Detection* Angel D. Sappa and Fadi Dornaika Computer Vison Center Edifici O Campus UAB 08193 Barcelona, Spain {sappa, dornaika}@cvc.uab.es Abstract. This paper presents

More information

Comparative Analysis of Edge Based Single Image Superresolution

Comparative Analysis of Edge Based Single Image Superresolution Comparative Analysis of Edge Based Single Image Superresolution Sonali Shejwal 1, Prof. A. M. Deshpande 2 1,2 Department of E&Tc, TSSM s BSCOER, Narhe, University of Pune, India. ABSTRACT: Super-resolution

More information

Chapter 3 Image Registration. Chapter 3 Image Registration

Chapter 3 Image Registration. Chapter 3 Image Registration Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation

More information

Novel Iterative Back Projection Approach

Novel Iterative Back Projection Approach IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 11, Issue 1 (May. - Jun. 2013), PP 65-69 Novel Iterative Back Projection Approach Patel Shreyas A. Master in

More information

A Robust Wipe Detection Algorithm

A Robust Wipe Detection Algorithm A Robust Wipe Detection Algorithm C. W. Ngo, T. C. Pong & R. T. Chin Department of Computer Science The Hong Kong University of Science & Technology Clear Water Bay, Kowloon, Hong Kong Email: fcwngo, tcpong,

More information

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation

Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Real-time Detection of Illegally Parked Vehicles Using 1-D Transformation Jong Taek Lee, M. S. Ryoo, Matthew Riley, and J. K. Aggarwal Computer & Vision Research Center Dept. of Electrical & Computer Engineering,

More information

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier

Computer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Region Weighted Satellite Super-resolution Technology

Region Weighted Satellite Super-resolution Technology Region Weighted Satellite Super-resolution Technology Pao-Chi Chang and Tzong-Lin Wu Department of Communication Engineering, National Central University Abstract Super-resolution techniques that process

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

Super Resolution Using Graph-cut

Super Resolution Using Graph-cut Super Resolution Using Graph-cut Uma Mudenagudi, Ram Singla, Prem Kalra, and Subhashis Banerjee Department of Computer Science and Engineering Indian Institute of Technology Delhi Hauz Khas, New Delhi,

More information

Hand-Eye Calibration from Image Derivatives

Hand-Eye Calibration from Image Derivatives Hand-Eye Calibration from Image Derivatives Abstract In this paper it is shown how to perform hand-eye calibration using only the normal flow field and knowledge about the motion of the hand. The proposed

More information

Accurate 3D Face and Body Modeling from a Single Fixed Kinect

Accurate 3D Face and Body Modeling from a Single Fixed Kinect Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this

More information

Removing Atmospheric Turbulence

Removing Atmospheric Turbulence Removing Atmospheric Turbulence Xiang Zhu, Peyman Milanfar EE Department University of California, Santa Cruz SIAM Imaging Science, May 20 th, 2012 1 What is the Problem? time 2 Atmospheric Turbulence

More information

Marcel Worring Intelligent Sensory Information Systems

Marcel Worring Intelligent Sensory Information Systems Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video

More information

Comparison Between The Optical Flow Computational Techniques

Comparison Between The Optical Flow Computational Techniques Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.

More information

Spatio-Temporal Stereo Disparity Integration

Spatio-Temporal Stereo Disparity Integration Spatio-Temporal Stereo Disparity Integration Sandino Morales and Reinhard Klette The.enpeda.. Project, The University of Auckland Tamaki Innovation Campus, Auckland, New Zealand pmor085@aucklanduni.ac.nz

More information

Robust Super-Resolution by Minimizing a Gaussian-weighted L 2 Error Norm

Robust Super-Resolution by Minimizing a Gaussian-weighted L 2 Error Norm Robust Super-Resolution by Minimizing a Gaussian-weighted L 2 Error Norm Tuan Q. Pham 1, Lucas J. van Vliet 2, Klamer Schutte 3 1 Canon Information Systems Research Australia, 1 Thomas Holt drive, North

More information

MOVING OBJECT DETECTION USING BACKGROUND SUBTRACTION ALGORITHM USING SIMULINK

MOVING OBJECT DETECTION USING BACKGROUND SUBTRACTION ALGORITHM USING SIMULINK MOVING OBJECT DETECTION USING BACKGROUND SUBTRACTION ALGORITHM USING SIMULINK Mahamuni P. D 1, R. P. Patil 2, H.S. Thakar 3 1 PG Student, E & TC Department, SKNCOE, Vadgaon Bk, Pune, India 2 Asst. Professor,

More information

Local Image Registration: An Adaptive Filtering Framework

Local Image Registration: An Adaptive Filtering Framework Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,

More information

Super resolution: an overview

Super resolution: an overview Super resolution: an overview C Papathanassiou and M Petrou School of Electronics and Physical Sciences, University of Surrey, Guildford, GU2 7XH, United Kingdom email: c.papathanassiou@surrey.ac.uk Abstract

More information

IMAGE RECONSTRUCTION WITH SUPER RESOLUTION

IMAGE RECONSTRUCTION WITH SUPER RESOLUTION INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND ROBOTICS ISSN 2320-7345 IMAGE RECONSTRUCTION WITH SUPER RESOLUTION B.Vijitha 1, K.SrilathaReddy 2 1 Asst. Professor, Department of Computer

More information

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION

COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA

More information

Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging

Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging Guided Image Super-Resolution: A New Technique for Photogeometric Super-Resolution in Hybrid 3-D Range Imaging Florin C. Ghesu 1, Thomas Köhler 1,2, Sven Haase 1, Joachim Hornegger 1,2 04.09.2014 1 Pattern

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov

GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS. Andrey Nasonov, and Andrey Krylov GRID WARPING IN TOTAL VARIATION IMAGE ENHANCEMENT METHODS Andrey Nasonov, and Andrey Krylov Lomonosov Moscow State University, Moscow, Department of Computational Mathematics and Cybernetics, e-mail: nasonov@cs.msu.ru,

More information

SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH

SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH SEMI-BLIND IMAGE RESTORATION USING A LOCAL NEURAL APPROACH Ignazio Gallo, Elisabetta Binaghi and Mario Raspanti Universitá degli Studi dell Insubria Varese, Italy email: ignazio.gallo@uninsubria.it ABSTRACT

More information

Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique

Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique Edge-Preserving MRI Super Resolution Using a High Frequency Regularization Technique Kaveh Ahmadi Department of EECS University of Toledo, Toledo, Ohio, USA 43606 Email: Kaveh.ahmadi@utoledo.edu Ezzatollah

More information

Fingerprint Image Enhancement Algorithm and Performance Evaluation

Fingerprint Image Enhancement Algorithm and Performance Evaluation Fingerprint Image Enhancement Algorithm and Performance Evaluation Naja M I, Rajesh R M Tech Student, College of Engineering, Perumon, Perinad, Kerala, India Project Manager, NEST GROUP, Techno Park, TVM,

More information

Combinatorial optimization and its applications in image Processing. Filip Malmberg

Combinatorial optimization and its applications in image Processing. Filip Malmberg Combinatorial optimization and its applications in image Processing Filip Malmberg Part 1: Optimization in image processing Optimization in image processing Many image processing problems can be formulated

More information

FPGA-based Real-time Super-Resolution on an Adaptive Image Sensor

FPGA-based Real-time Super-Resolution on an Adaptive Image Sensor FPGA-based Real-time Super-Resolution on an Adaptive Image Sensor Maria E. Angelopoulou, Christos-Savvas Bouganis, Peter Y. K. Cheung, and George A. Constantinides Department of Electrical and Electronic

More information

2 OVERVIEW OF RELATED WORK

2 OVERVIEW OF RELATED WORK Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method

More information

IRIS SEGMENTATION OF NON-IDEAL IMAGES

IRIS SEGMENTATION OF NON-IDEAL IMAGES IRIS SEGMENTATION OF NON-IDEAL IMAGES William S. Weld St. Lawrence University Computer Science Department Canton, NY 13617 Xiaojun Qi, Ph.D Utah State University Computer Science Department Logan, UT 84322

More information

RESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE

RESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE RESTORATION OF DEGRADED DOCUMENTS USING IMAGE BINARIZATION TECHNIQUE K. Kaviya Selvi 1 and R. S. Sabeenian 2 1 Department of Electronics and Communication Engineering, Communication Systems, Sona College

More information

Multiframe Blocking-Artifact Reduction for Transform-Coded Video

Multiframe Blocking-Artifact Reduction for Transform-Coded Video 276 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 12, NO. 4, APRIL 2002 Multiframe Blocking-Artifact Reduction for Transform-Coded Video Bahadir K. Gunturk, Yucel Altunbasak, and

More information

Image Sampling and Quantisation

Image Sampling and Quantisation Image Sampling and Quantisation Introduction to Signal and Image Processing Prof. Dr. Philippe Cattin MIAC, University of Basel 1 of 46 22.02.2016 09:17 Contents Contents 1 Motivation 2 Sampling Introduction

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Image Sampling & Quantisation

Image Sampling & Quantisation Image Sampling & Quantisation Biomedical Image Analysis Prof. Dr. Philippe Cattin MIAC, University of Basel Contents 1 Motivation 2 Sampling Introduction and Motivation Sampling Example Quantisation Example

More information

Fast and Effective Interpolation Using Median Filter

Fast and Effective Interpolation Using Median Filter Fast and Effective Interpolation Using Median Filter Jian Zhang 1, *, Siwei Ma 2, Yongbing Zhang 1, and Debin Zhao 1 1 Department of Computer Science, Harbin Institute of Technology, Harbin 150001, P.R.

More information

AUTOMATIC OBJECT DETECTION IN VIDEO SEQUENCES WITH CAMERA IN MOTION. Ninad Thakoor, Jean Gao and Huamei Chen

AUTOMATIC OBJECT DETECTION IN VIDEO SEQUENCES WITH CAMERA IN MOTION. Ninad Thakoor, Jean Gao and Huamei Chen AUTOMATIC OBJECT DETECTION IN VIDEO SEQUENCES WITH CAMERA IN MOTION Ninad Thakoor, Jean Gao and Huamei Chen Computer Science and Engineering Department The University of Texas Arlington TX 76019, USA ABSTRACT

More information

Automatic Tracking of Moving Objects in Video for Surveillance Applications

Automatic Tracking of Moving Objects in Video for Surveillance Applications Automatic Tracking of Moving Objects in Video for Surveillance Applications Manjunath Narayana Committee: Dr. Donna Haverkamp (Chair) Dr. Arvin Agah Dr. James Miller Department of Electrical Engineering

More information

Advanced phase retrieval: maximum likelihood technique with sparse regularization of phase and amplitude

Advanced phase retrieval: maximum likelihood technique with sparse regularization of phase and amplitude Advanced phase retrieval: maximum likelihood technique with sparse regularization of phase and amplitude A. Migukin *, V. atkovnik and J. Astola Department of Signal Processing, Tampere University of Technology,

More information

Image Reconstruction from Videos Distorted by Atmospheric Turbulence

Image Reconstruction from Videos Distorted by Atmospheric Turbulence Image Reconstruction from Videos Distorted by Atmospheric Turbulence Xiang Zhu and Peyman Milanfar Electrical Engineering Department University of California at Santa Cruz, CA, 95064 xzhu@soe.ucsc.edu

More information

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03.

NIH Public Access Author Manuscript Proc Int Conf Image Proc. Author manuscript; available in PMC 2013 May 03. NIH Public Access Author Manuscript Published in final edited form as: Proc Int Conf Image Proc. 2008 ; : 241 244. doi:10.1109/icip.2008.4711736. TRACKING THROUGH CHANGES IN SCALE Shawn Lankton 1, James

More information

Morphological Image Processing

Morphological Image Processing Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures

More information

Predictive Interpolation for Registration

Predictive Interpolation for Registration Predictive Interpolation for Registration D.G. Bailey Institute of Information Sciences and Technology, Massey University, Private bag 11222, Palmerston North D.G.Bailey@massey.ac.nz Abstract Predictive

More information

Mobile Human Detection Systems based on Sliding Windows Approach-A Review

Mobile Human Detection Systems based on Sliding Windows Approach-A Review Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg

More information

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION

IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION IMPLEMENTATION OF THE CONTRAST ENHANCEMENT AND WEIGHTED GUIDED IMAGE FILTERING ALGORITHM FOR EDGE PRESERVATION FOR BETTER PERCEPTION Chiruvella Suresh Assistant professor, Department of Electronics & Communication

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 11 140311 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Motion Analysis Motivation Differential Motion Optical

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts Boise State University ScholarWorks Electrical and Computer Engineering Faculty Publications and Presentations Department of Electrical and Computer Engineering 1-1-2001 Scanner Parameter Estimation Using

More information

Image Inpainting Using Sparsity of the Transform Domain

Image Inpainting Using Sparsity of the Transform Domain Image Inpainting Using Sparsity of the Transform Domain H. Hosseini*, N.B. Marvasti, Student Member, IEEE, F. Marvasti, Senior Member, IEEE Advanced Communication Research Institute (ACRI) Department of

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

Locally Adaptive Regression Kernels with (many) Applications

Locally Adaptive Regression Kernels with (many) Applications Locally Adaptive Regression Kernels with (many) Applications Peyman Milanfar EE Department University of California, Santa Cruz Joint work with Hiro Takeda, Hae Jong Seo, Xiang Zhu Outline Introduction/Motivation

More information

Histogram and watershed based segmentation of color images

Histogram and watershed based segmentation of color images Histogram and watershed based segmentation of color images O. Lezoray H. Cardot LUSAC EA 2607 IUT Saint-Lô, 120 rue de l'exode, 50000 Saint-Lô, FRANCE Abstract A novel method for color image segmentation

More information

Modern Medical Image Analysis 8DC00 Exam

Modern Medical Image Analysis 8DC00 Exam Parts of answers are inside square brackets [... ]. These parts are optional. Answers can be written in Dutch or in English, as you prefer. You can use drawings and diagrams to support your textual answers.

More information

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM

INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM INVARIANT CORNER DETECTION USING STEERABLE FILTERS AND HARRIS ALGORITHM ABSTRACT Mahesh 1 and Dr.M.V.Subramanyam 2 1 Research scholar, Department of ECE, MITS, Madanapalle, AP, India vka4mahesh@gmail.com

More information

Introduction to Image Super-resolution. Presenter: Kevin Su

Introduction to Image Super-resolution. Presenter: Kevin Su Introduction to Image Super-resolution Presenter: Kevin Su References 1. S.C. Park, M.K. Park, and M.G. KANG, Super-Resolution Image Reconstruction: A Technical Overview, IEEE Signal Processing Magazine,

More information