Principled point-source detection in collections of astronomical images

Dustin Lang 1,2,3, David W. Hogg 4,5, & Others

ABSTRACT

Given a collection of images taken of the same patch of sky, it is possible to detect sources fainter than those detectable in any individual image. Traditionally, this operation is performed by first co-adding the images, after interpolation to a common grid, and then running standard image-reduction routines on the co-added image. Issues arise when the images contain different levels of noise, have different point-spread functions, or are taken through different wavelength bandpasses. Here we show that, under our assumptions, optimal point-source detection proceeds by an operation that looks very much like convolution, interpolation, and co-addition, but which does not produce a final image that is a sensible measure of the intensity field; it is instead a heterogeneous significance map. We then show how to detect sources in images taken through different bandpass filters by combining these significance maps. This requires prior information about the distribution of spectral energy distributions. We show how traditional approaches can be interpreted as assuming particular spectral energy distribution priors. After sources have been detected in the manner outlined here, photometry, astrometry, and other measurements should be made by simultaneously fitting models to the collection of pixels in the non-co-added individual images (which have not been degraded by convolution, interpolation, or co-addition). We show that best practices differ from common practices only in small ways; in particular, best practices are not computationally expensive.

Subject headings: methods: statistical; techniques: image processing

1 To whom correspondence should be addressed; dstn@cmu.edu
2 McWilliams Center for Cosmology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
3 Princeton University Observatory, Princeton, NJ 08544, USA
4 Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003, USA
5 Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany

1. Our image model

We consider astronomical images such as those obtained from an (idealized) CCD. Aside from noise, each pixel contains a signal related to the number of photons incident upon it during the

exposure. We are measuring the intensity field I(θ, x, λ, t) (energy per unit solid angle, two-dimensional transverse position, wavelength, and time) by taking an exposure for some amount of time through a filter, optics, and possibly atmosphere. This can be seen as an integration over time, wavelength, telescope aperture, and the point-spread function. Consider a discrete image made up of a square array of pixels j, each of which has signal (or counts) C_j, where we use j as a two-dimensional focal-plane position, measured in pixel units. The signal C_j is given by the integral

C_j = (dC/dE) Ω_1 ∫ I(θ_c, x, λ, t) H(t) dt λ R(λ) dλ A(x) d²x ψ(j − c) d²c + e_j ,   (1)

where dC/dE is the calibration factor taking integrated energies to per-pixel counts (in the mean), Ω_1 is the solid angle of a single pixel (assumed the same for all pixels), I(·) is the intensity field, θ is a two-dimensional angular position on the sky, t is a time, H(t) is a function describing the time period over which the telescope was integrating this exposure, λ is a wavelength, R(λ) is a function describing the bandpass (?), x is a two-dimensional position vector in the telescope aperture plane, A(x) is a function describing the telescope aperture, ψ(·) is a function describing the pixel-convolved point-spread function of the whole system, c is a two-dimensional focal-plane position in the same coordinates and units as j, and e_j is the noise contribution to pixel j. Importantly, the PSF ψ(·) is the pixel-convolved PSF, not simply the PSF due to optics and atmosphere, so image pixels are a sampling, and not a convolution, of the PSF-convolved intensity field.

In the rest of this paper, we will assume that the (perhaps tiny) image of interest contains only a uniform background value plus one point source k with some (unknown) flux F_k and position θ_k. In this case, the intensity field is just

I(θ, x, λ, t) = I_sky(λ, t) + F_k(λ, t) δ(θ_k − θ) ,   (2)

where I_sky is the intensity of the background (sky), F_k has units of flux (energy per area per time per wavelength), and δ(·) is a two-dimensional delta function on the sky. For simplicity, we have dropped all assumption-violating contributions from galaxies, nebulosity, neighboring sources, or sky gradients.

The noise values are drawn from a probability distribution which we assume to be

p(e_j) = N(e_j | 0, σ_1²) ,   (3)

where N(x | m, V) is the Gaussian distribution with mean m and variance V, and σ_1² is the per-pixel noise variance (assumed equal for all pixels). That is, we are assuming constant, zero-mean, Gaussian noise. This assumption is not too bad for sky-limited images from which the sky level has been subtracted: the sky contribution can be modeled as a Poisson process contributing Poisson or shot noise, which we then approximate by a Gaussian.

In the above, we have made extremely strong assumptions. In addition to the listed assumptions, we have assumed that any electronic gain at readout has been calibrated out. We have also assumed that the image is not contaminated by cosmic rays, stray light, bad pixels, bad columns, electronic artifacts, or any other of the many defects in real images.
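As a concrete illustration of this idealized model, the following Python sketch simulates a small image containing one point source on a constant sky with pixelwise-independent Gaussian noise. It is only an illustration, not code from this paper; the Gaussian PSF shape and all parameter values (source counts, sky level, noise level, source position) are arbitrary assumptions.

# Minimal sketch (not from the paper): one point source, a Gaussian pixel-convolved
# PSF sampled at pixel centers, a constant sky level, and Gaussian noise.
import numpy as np

rng = np.random.default_rng(42)

npix = 32          # image is npix x npix pixels
w = 1.5            # assumed Gaussian PSF standard deviation, in pixels
C_k = 500.0        # total source counts (summed over pixels)
C_sky = 100.0      # constant sky level, in counts
sigma_1 = 3.0      # per-pixel Gaussian noise standard deviation
k = (15.3, 16.7)   # source position (y, x); need not be an integer pixel

yy, xx = np.mgrid[0:npix, 0:npix]
psf = np.exp(-((yy - k[0]) ** 2 + (xx - k[1]) ** 2) / (2.0 * w ** 2))
psf /= psf.sum()   # PSF evaluated on the pixel grid sums to unity

counts = C_sky + C_k * psf + rng.normal(0.0, sigma_1, size=(npix, npix))
S = counts - C_sky   # sky-subtracted image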

2. Detecting a point source in a single image

In this section, we will assume a source k with constant flux F_k(λ), and constant sky intensity I_sky(λ). Applying the integral of equation 1 to the simple intensity model (equation 2) with a constant-flux source, the aperture A(x) and exposure H(t) terms become constants, which we combine with the Ω_1 and dC/dE terms, leaving

C_j = K ∫ [ I_sky(λ) + F_k(λ) δ(θ_k − θ_c) ] λ R(λ) dλ ψ(j − c) d²c + e_j ,   (4)

and integrating out the sky and source terms yields

C_j = C_sky + C_k ψ(j − k) + e_j ,   (5)

where k is the pixel position corresponding to the angular direction θ_k of source k, and C_k is the total number of counts (summed over pixels) from the source, which we will call the source counts.

We will subtract the sky level C_sky, leaving the sky-subtracted image S_j. We assume that we can estimate the sky level exactly; in reality our estimate will be noisy. The sky term C_sky contributes Poisson noise to the image, which we will approximate by Gaussian noise and combine with the other noise term e_j, yielding

S_j ∼ C_k ψ(j − k) + N(0, σ_1²) ,   (6)

where the symbol ∼ means "drawn from the distribution", and N(µ, σ²) is the Gaussian distribution with mean µ and variance σ².

Put simply, in our model the image pixels contain signal due to a pixel-convolved point-spread function ψ(·) shifted to the source center k and scaled by the source counts C_k, plus additive, pixelwise-independent, constant-variance Gaussian noise. We will assume that the point-spread function ψ(·) is non-negative and, when evaluated on a pixel grid, sums to unity. See Figure 6.

2.1. Optimal linear detection

In Appendix A.1 we show that correlating an image with its PSF results in an optimal detector for isolated point sources. We define the detection map D_j as the correlation of the image with its PSF model, scaled to be in convenient units:

D_j = (1 / ‖ψ‖²) Σ_{i ∈ A} ψ_i S_{i+j} ,   (7)

where A is the support of the PSF and ψ_i = ψ(i) is an image of the PSF evaluated at integer-offset pixel positions i. The PSF norm ‖ψ‖ is given by

‖ψ‖² = Σ_{i ∈ A} ψ_i² ,   (8)

and as shown in Appendix A.1.1, a Gaussian PSF with standard deviation w pixels has norm approximately

‖ψ^G‖ ≈ 1 / (2 √π w) .   (9)

The per-pixel error in the detection map is given by

σ_D = σ_1 / ‖ψ‖ .   (10)

We have scaled the detection map so that each pixel contains the maximum-likelihood total counts of a source centered at that pixel. That is, computing at pixel j the total source counts C*_j that minimizes the chi-squared (χ²) residual within the support of the PSF, we find

C*_j = arg min_c Σ_i ( (S_{i+j} − c ψ_i) / σ_1 )²   (11)

C*_j = Σ_i S_{i+j} ψ_i / Σ_i ψ_i²   (12)

C*_j = D_j   (13)

as defined above. That is, a significant peak in the detection map indicates the likely position of a point source, and the value of the detection map at a pixel is the maximum-likelihood estimate of the total source counts for a source centered at that pixel.

The intuition behind creating a detection map is that the point-spread function spreads the signal from a point source over a number of pixels. By correlating with the PSF, we gather up that spread-out signal, weighting the pixels over which the signal has been spread by the fraction of the signal that went into each pixel. By scaling the detection map by the inverse squared PSF norm, we convert it to units of total source counts; this will make it easier to combine detection maps in what follows.

2.2. Comments

Convolution. It is common to hear the statement that convolving an image by its PSF model yields an optimal detection filter. This is only true if the PSF is defined to be flipped in both axes relative to the image coordinate system. We prefer to define the PSF in unflipped coordinates, so correlation rather than convolution is the correct operation (they are equivalent when the PSF model is symmetric).

Peaks. We have not yet said anything about how to detect peaks at integer-pixel or sub-pixel locations; we discuss this in ??.
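A minimal numerical sketch of the detection map follows. It is an illustration of equations 7, 8, and 10, not the authors' code; the Gaussian PSF, its width, the noise level, and the simulated input image are all assumptions.

# Minimal sketch (not the paper's code): build the detection map of equation 7 by
# correlating a sky-subtracted image S with its PSF model and scaling by the
# inverse squared PSF norm (equation 8); sigma_D follows equation 10.
import numpy as np
from scipy.signal import correlate2d

def detection_map(S, psf_stamp, sigma_1):
    """Return (D, sigma_D): each pixel of D is the maximum-likelihood total
    counts of a point source centered at that pixel (equations 11-13)."""
    psf_norm2 = np.sum(psf_stamp ** 2)                 # ||psi||^2, equation 8
    D = correlate2d(S, psf_stamp, mode="same") / psf_norm2
    sigma_D = sigma_1 / np.sqrt(psf_norm2)             # equation 10
    return D, sigma_D

# Assumed inputs: a Gaussian PSF of width w pixels and a simulated image S.
w, sigma_1, C_k = 1.5, 3.0, 500.0
yy, xx = np.mgrid[-8:9, -8:9]
psf_stamp = np.exp(-(xx ** 2 + yy ** 2) / (2 * w ** 2))
psf_stamp /= psf_stamp.sum()

rng = np.random.default_rng(1)
S = rng.normal(0.0, sigma_1, size=(32, 32))
S[8:25, 8:25] += C_k * psf_stamp                       # point source at pixel (16, 16)

D, sigma_D = detection_map(S, psf_stamp, sigma_1)
print(D[16, 16], sigma_D)   # ~C_k at the source; error ~ 2 sqrt(pi) w sigma_1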

3. Detecting a point source in multiple images

Assume we have a point source whose flux is constant over time, and a series of images taken through different bandpass filters and with different noise levels, exposure times, point-spread functions, and telescope pointings. We can achieve optimal detection of the source by building a detection map for each image and combining them with the weights described below.

3.1. Identical bandpass filters

We first present the simpler case where all the images are taken through identical bandpass filters. As we have seen, the detection map defined in equation 7 is a maximum-likelihood estimate of the total counts contributed by the source, in the arbitrary units of the original image. In order to combine information from multiple images, we must calibrate them so that they are in the same units. Since this calibration is simply a linear scaling, it can be applied to the original image or to the detection map. Similarly, if the images are on different pixel grids, either from different pointings of the same CCD or from different CCDs, then we must resample the detection maps to a common pixel grid. If the original image is well-sampled, then the detection map (which has been further smoothed by PSF correlation) will also be well-sampled, so resampling to a pixel grid of the same or finer resolution results in no loss of information. Since the pixel values in the detection map represent the total flux from the source, the detection map does not need to be rescaled after resampling.

Once the detection map for each image has been calibrated and resampled to a common pixel grid, we have multiple independent maximum-likelihood estimates of the source flux in our chosen filter, each with a known standard deviation and Gaussian statistics. That is, we have multiple Gaussian likelihood functions that we wish to combine. Since they are independent, the combined likelihood is the product of the individual likelihoods. For Gaussian distributions, the resulting aggregate maximum-likelihood estimate is simply the inverse-variance-weighted sum of the individual estimates. If the calibration factor κ_i scales image i to flux in common units, and R_i represents resampling to the common pixel grid, then the flux estimate E_i is

E_i = R_i (κ_i D_i)   (14)

with per-pixel error

σ_{E,i} = κ_i σ_{1,i} / ‖ψ_i‖   (15)

and we combine the estimates from multiple images via

E = ( Σ_i E_i / σ_{E,i}² ) / ( Σ_i 1 / σ_{E,i}² )   (16)

which has per-pixel error

σ_E² = ( Σ_i 1 / σ_{E,i}² )^{−1} .   (17)

This is simply the maximum-likelihood estimate of the flux based on a set of independent Gaussian estimates.

3.2. Different bandpass filters

As we saw in the single-bandpass case, we can combine multiple individual exposures into an aggregate estimate of the flux of a point source. In order to do this, it was essential to calibrate the images so that each one is an estimate of the same underlying quantity. The multiple-bandpass case is similar: We first combine all the images taken in each bandpass into a single estimate for that bandpass. Then, to combine the bandpasses, we must scale them so that they are estimates of the same quantity. This requires knowing the spectral energy distribution, or at least the colors in the filters of interest, of the source to be detected; this allows us to scale the various bandpasses so that they are estimates of a common quantity: perhaps the flux in a canonical band, or some other linear quantity such as the integrated intensity.

The intuition here is that if we know that our sources of interest are twice as bright in bandpass A as in bandpass B, then we can convert an estimate of the brightness in band B into an estimate of the brightness in band A by multiplying by two. The variance of the scaled estimate increases appropriately, so a bandpass in which a source is expected to be faint will contribute an estimate with a large variance and will be downweighted when the estimates are combined. We can also view the problem as one of estimating a total flux that has been split into the different bandpasses; in that view it is analogous to the way flux is spread into pixels by the point-spread function.

Assume we have detection maps D_j, with per-pixel standard deviation σ_j, for a number of different bandpasses. Assume each bandpass has a known conversion factor f_j to the canonical band; that is,

D_j ∼ N(f_j C, σ_j²)   (18)

for flux C in the canonical band. Given a number of such detection maps, we assume they are independent and find that the maximum-likelihood estimate for C is

Ĉ = ( Σ_j D_j f_j / σ_j² ) / ( Σ_j f_j² / σ_j² )   (19)

with per-pixel error

σ̂_C² = ( Σ_j f_j² / σ_j² )^{−1} .   (20)

For example, if we treat r band as the canonical band and our objects of interest have color r − i = 1, then we will scale our i-band detection map D_i by f_i = (2.5)^{(r−i)} = 2.5; the sources are expected to be brighter in i band, so we must scale down the i-band estimate to produce an r-band estimate. The i-band variance is also reduced in a corresponding way, so this does not dilute the weight of high-precision measurements.
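These combination rules reduce to a few lines of array arithmetic. The following sketch is only an illustration under assumed inputs, not the pipeline used in the experiments below: `maps` and `sigmas` are assumed to be detection maps already calibrated and resampled to a common pixel grid with their per-pixel errors, and `f` are the assumed band-to-canonical-band flux ratios of equations 18-20.

# Minimal sketch (not the paper's pipeline code) of equations 16-20.
import numpy as np

def combine_single_band(maps, sigmas):
    """Inverse-variance combination of calibrated detection maps (eqs. 16, 17)."""
    num = sum(m / s ** 2 for m, s in zip(maps, sigmas))
    wsum = sum(1.0 / s ** 2 for s in sigmas)
    return num / wsum, 1.0 / np.sqrt(wsum)

def combine_bands(band_maps, band_sigmas, f):
    """SED-matched combination into a canonical-band flux estimate (eqs. 19, 20)."""
    num = sum(fj * D / s ** 2 for D, s, fj in zip(band_maps, band_sigmas, f))
    den = sum(fj ** 2 / s ** 2 for s, fj in zip(band_sigmas, f))
    return num / den, 1.0 / np.sqrt(den)

# Example with three assumed bands and a flat SED (f = [1, 1, 1]):
g, r, i = [np.full((8, 8), v) for v in (1.0, 1.1, 0.9)]
C_hat, sigma_C = combine_bands([g, r, i], [0.2, 0.1, 0.3], f=[1.0, 1.0, 1.0])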

4. Experiments

4.1. Sloan Digital Sky Survey

Here we present some proof-of-concept experiments on Sloan Digital Sky Survey (SDSS) data from the Stripe 82 region (?).

Input data selection. We select input exposures to include as follows. Using the window_flist table for SDSS Data Release 9 (?), we select photometric exposures from camera column 3, with RA in the range [45.15, 45.85] and Dec in the range [0., 0.1]. This results in 19 fields from 3 unique runs, which are listed in Table 1.

Detection map and resampling. For each input image, we take the PSF to be the double-Gaussian model computed by the SDSS Photo pipeline (?), and compute the detection map. Since SDSS DR9 images are already in sky-subtracted, calibrated nanomaggies, we do not need to apply a calibration factor to convert them to common units, nor do we need to remove a sky background level. We set to zero any image pixels that are masked as saturated, interpolated, cosmic ray, ghost, or not checked (close to the image boundaries). We compute the per-pixel noise (σ_1) based on a Gaussian approximation of the Poisson errors from the mean sky level, plus dark current and read noise, as recommended by the SDSS team.

(Footnote: PHOTOOBJ/frames/RERUN/RUN/CAMCOL/frame.html)

For each of the u, g, r, i, and z bandpass filters ("bands"), we resample to a common pixel grid and then compute a per-band detection map as per equation 16. The common pixel grid is defined by a WCS tangent-plane projection centered on our patch of interest with the nominal SDSS pixel scale (0.396 arcseconds per pixel) and image size ?. Since the SDSS pipeline adds a 128-pixel overlap between neighboring fields in a run, we first build a detection map for each run,

averaging values in the overlapping regions to avoid double-counting, then combine these per-run detection maps as per equation 16. For resampling we use a third-order Lanczos kernel.

Background correction. We find that there is considerable spatially-coherent structure in our combined detection maps, which we suspect is due to slightly biased background subtraction in the SDSS pipeline code. We estimate and remove a residual background as follows. After masking all pixels above 5 sigma, we compute the median value within each cell of a 1 1-pixel grid, and interpolate between these values with a third-order B-spline. The resulting background-subtracted detection maps have no apparent background structure, and their pixel distributions are closer to Gaussian. See Figure 1.

Detection. In this paper we are not concerning ourselves with blended objects, so we identify peaks in the detection map naively. We find pixels above the desired threshold, expand by a 2-pixel radius, and find the connected components. For each component, we return the peak pixel, ignoring other peaks within the component (see the sketch at the end of this subsection).

We construct a series of spectral energy distributions (SEDs), building the combined detection map and running the peak detection algorithm for each. A standard approach for detecting objects in imaging from multiple bands is to weight each band by its signal-to-noise. This is equivalent to assuming a flat spectrum, or zero color. Our SED-matched filters weight each band according to both the amount of flux we expect to see in the band and its signal-to-noise. The SEDs we use here include Flat (expected color g = r = i), Red (g − r = r − i = 1), Blue (g − r = r − i = −1), and four points lying approximately along the stellar locus ((g − r, r − i) colors (0.5, 0.2), (1.0, 0.4), (1.4, 0.7), and (1.4, 1.4)). In the experiments here we will compare the detection strengths in the different SEDs. In practice we would return the union of peaks detected in this way.

Results. In Figure 2 we show that objects with red colors in the SDSS catalog are detected more strongly by our SED-matched Red filter than by a Flat SED (which corresponds to the traditional multi-band detection approach). There are very few objects that the Blue SED-matched filter detects more strongly than the Flat or Red filters; astrophysical objects with very blue colors are rare. In Figure 3 we show, in color-color space, the sources detected by each of our SED-matched filters. As expected, each filter is most sensitive to sources in a given region of color-color space. In Figure 4 we show postage-stamp images of sources that are detected by our SED-matched filters and not detected by a traditional approach. In Figure 5 we show, in color-color space, the additional objects that are detected as we apply a sequence of SED-matched filters.
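The naive peak finder referenced above can be written directly with scipy.ndimage. This is only a sketch of the procedure described in the text, not the code used for these experiments; the detection threshold and the dilation radius are assumptions.

# Minimal sketch (not the paper's code): threshold the detection map, dilate,
# label connected components, and keep the peak pixel of each component.
import numpy as np
from scipy import ndimage

def find_peaks(D, sigma_D, nsigma=5.0, dilate_radius=2):
    """Return (y, x) peak-pixel positions of connected above-threshold regions."""
    hot = D > nsigma * sigma_D
    yy, xx = np.mgrid[-dilate_radius:dilate_radius + 1,
                      -dilate_radius:dilate_radius + 1]
    structure = (xx ** 2 + yy ** 2) <= dilate_radius ** 2
    hot = ndimage.binary_dilation(hot, structure=structure)
    labels, nlabels = ndimage.label(hot)
    return ndimage.maximum_position(D, labels=labels, index=range(1, nlabels + 1))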

4.2. Simulations

Single exposure, single band

Multiple exposures (different PSFs, sky), single band

Multiple exposures, multiple bands

5. Junk here

different bandpass filters, with different sky noise levels, exposure times, and point-spread functions. How do we optimally detect the source? We saw in previous sections that source detection is essentially the same as estimating the flux of the putative source at a given source position. Given multi-band data, we must assume a spectral energy distribution (or set of colors in the available filters) for the source to be detected. This is exactly parallel to the assumption that we have a good model of the point-spread function; in both cases we are constructing a matched filter. Given an SED, we can either produce an estimate of the amplitude, or equivalently estimate the source counts in a canonical filter.

Chi-squared detection image

Noise the color of the sky

5.1. Comments

Table 1: SDSS fields included in the experiments. All fields are from camera column ("camcol") 3. The minimum and maximum (inclusive) fields are listed. Columns: Run, Min field, Max field.

Fig. 1. Residual background subtraction. Top: combined detection map for the SDSS r band, combining images from 3 runs. There are clear patterns in the background level. Middle: after estimating and removing the residual background as described in the text, the background appears more flat. Bottom: after removing the background, the pixel error statistics are closer to Gaussian, though the distribution is still significantly broader than expected. (Panel and axis labels: Original image; Background-subtracted; Gaussian; Number of pixels; Per-pixel error (sigma).)

Fig. 2. Detection strengths in the Red, Flat, and Blue SED-matched filters, versus SDSS-measured colors. We performed a spatial match between our detected sources and SDSS sources (from the DR7 CasJobs database Stripe82; ?) within 1 pixel. Redder objects in g − i are detected more strongly in the Red SED-matched filter than in Flat or Blue. The handful of objects with significantly blue colors are detected more strongly with the Blue filter. (Axis labels: Strength of SED-matched detections, relative S/N vs. Flat; SDSS catalog color (g − i).)

Fig. 3. Color-color space locations of sources detected most strongly by each of the filters (Flat, Red, S1, S2, S3, S4; axes g − r vs. r − i). The color measurements are from the SDSS Stripe82 coadd catalog (?), matched to our detections using a 1-pixel matching radius. It is clear that our different SED-matched filters are tuned to detect objects in particular regions of color-color space. The X marks indicate the color each SED-matched filter is tuned for. Note that many of the sources are detected above a given detection threshold by several of the SED-matched filters; here each source is assigned to the filter by which it is detected most strongly.

Fig. 4. Sources detected by our SED-matched filters and not by a traditional approach; full caption on next page. (Panel titles: Flat detections missed by Single-band; Red detections missed by Single-band and Flat; Stellar-locus detections missed by Single-band, Flat, and Red.)

Fig. 4. [figure on previous page] Sources detected by applying a sequence of SED-matched filters. We start with sources detected by a traditional approach that takes the union of single-band detections in each of the g, r, and i filters. We then apply, in sequence, our Flat, Red, and Stellar-locus SED-matched filters. Each panel shows the sources detected by the new filter and not detected by previous filters, in order of decreasing signal-to-noise. In order to make the objects easier to see, we have increased the detection threshold.

Fig. 5. Color-color space locations of additional sources detected as additional SED-matched filters are applied (legend: Single-band (1464), Flat (43), Red (77), Stellar locus (76); axes g − r vs. r − i). We start by taking all single-band detections in the g, r, and i-band filters. Next, we add sources detected by the Flat SED-matched filter, then the Red filter, then a set of filters along the stellar locus. Each subsequent SED-matched filter detects sources that are missed (at a fixed detection threshold) by the previous filters.

6. Co-adding images

First, it should be noted that co-adding a stack of images is usually not the Right Thing to do (cite Hogg). The various procedures that are typically applied to produce co-adds do more or less violence to the data in the name of expediency. Here we simply lay out the conditions under which it is appropriate to co-add images (noting that they rarely if ever occur in practice) and show how images should be co-added in that case.

In the simplest case, we have a series of images taken through the same bandpass filter with perfectly aligned pixels, identical PSF and identical sky noise (which, as in the previous section, we will assume is pixelwise independent, with zero mean and constant variance), and no read noise. Assume also that the sources in the image have no proper motion or parallax and have constant flux. All that varies between the images is the exposure time. In this case, it turns out that simply adding the images together results in no information loss. It may seem obvious that simply adding the images together is the right thing to do: CCDs are linear devices (in this paper), so adding the images together is like taking a single exposure for the sum of the individual exposure times.

For each pixel j in the image, the expected (mean) number of counts from the source, C_j, is equal to the count rate r_j times the exposure time t. Note that the source counts are also the result of stochastic processes, so will be drawn from some distribution of their own, but we will assume that the noise in our images is dominated by the sky background, so we will consider only the mean of the source distribution:

C_j = r_j t .   (21)

Meanwhile, the sky contributes a mean count rate of s counts per unit time per pixel, yielding a total number of sky counts S per pixel, so the signal-to-noise is:

⟨S⟩ = s t   (22)
S ∼ Poisson(s t)   (23)
S ≈ N(s t, s t)   (24)
SN(j) = C_j / √(var(S))   (25)
      = r_j t / √(s t)   (26)
      = r_j √(t / s) .   (27)

This is the well-known result that the signal-to-noise increases as the square root of exposure time.
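A quick Monte Carlo check of this scaling (a sketch with arbitrary count rates, not from the paper): simulate sky-limited exposures of increasing length and measure the empirical per-pixel signal-to-noise.

# Minimal sketch (arbitrary rates, not from the paper): verify that the per-pixel
# signal-to-noise of a sky-limited exposure grows as sqrt(t), equation 27.
import numpy as np

rng = np.random.default_rng(0)
r_j, s = 5.0, 100.0                    # source and sky count rates (counts per time per pixel)
for t in (1.0, 4.0, 16.0):
    sky = rng.poisson(s * t, size=200_000)          # sky counts S ~ Poisson(s t)
    empirical_sn = (r_j * t) / sky.std()            # C_j / sqrt(var(S)), equation 25
    print(t, empirical_sn, r_j * np.sqrt(t / s))    # matches r_j sqrt(t / s)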

Consider two pixel-aligned images, I_1 and I_2, with different exposure times, t_1 and t_2. A linear combination of these (sky-subtracted) images with the coefficients α_1 and α_2 yields the image:

I = α_1 I_1 + α_2 I_2   (28)
  ∼ α_1 N(r t_1, s t_1) + α_2 N(r t_2, s t_2)   (29)
  ∼ N( r (α_1 t_1 + α_2 t_2), s (α_1² t_1 + α_2² t_2) )   (30)

and the signal-to-noise in this image is:

SN(I) = r (α_1 t_1 + α_2 t_2) / √( s (α_1² t_1 + α_2² t_2) )   (31)

which is optimized by setting α_1 = α_2, regardless of t_1 and t_2; optimal signal-to-noise is achieved by simply adding the image pixel values together.

6.1. Comments

Why equal weights? The result that the images should just be added together with uniform weights may seem slightly surprising: if a longer exposure has better signal-to-noise, shouldn't we weight it more heavily in the co-add? In effect, we have done this: in the equations above, we computed the total source counts, not the counts per unit time, so we effectively weighted each image by its exposure time. Recall that the signal-to-noise is proportional to the square root of the exposure time. Thus, in effect we have co-added the images by weighting each image by its squared signal-to-noise. [ref appendix?]

Different sky. We assumed that the sky rate s was equal in the images to be co-added. If this is not the case, then equation 29 can be applied but with s_1 and s_2 replacing s. The result is that the images should be combined with the coefficients

α_1 / α_2 = s_2 / s_1 .   (32)

This is not surprising, since we have seen that we must weight the images by their signal-to-noise squared, and from equation 27 we know that this goes as the inverse of the sky rate.

Different PSF. Above, we have assumed that the point-spread function is identical in the two images. How should the images be co-added if it is not? First, we must ask what quantity we wish to optimize. If we are interested in detecting point sources, then we wish to optimize the signal-to-noise in the co-added detection map. Assume that we have a good PSF model for each of the two images I_1 and I_2. We combine them with coefficients α:

I_c = α_1 I_1 + α_2 I_2   (33)

so the combined image I_c has point-spread function

ψ_c = (α_1 ψ_1 + α_2 ψ_2) / (α_1 + α_2) .   (34)

The per-pixel sky noise variance in the combined image, σ_c², is

σ_c² = α_1² σ_1² + α_2² σ_2² .   (35)

We will assume equal sky variances: σ = σ_1 = σ_2. From equation A8, the co-added detection map has signal-to-noise:

SN(D_c) = C (α_1 + α_2) ‖ψ_c‖ / σ_c   (36)
        = C (α_1 + α_2) ‖α_1 ψ_1 + α_2 ψ_2‖ / ( σ_1 √(α_1² + α_2²) (α_1 + α_2) ) .   (37)

Assuming Gaussian point-spread functions with standard deviations w_1 and w_2, we get

[SN(D_c^G)]² ∝ ( α_1² ‖ψ_1‖² + α_2² ‖ψ_2‖² + 2 α_1 α_2 Σ_i ψ_1(i) ψ_2(i) ) / (α_1² + α_2²)   (38)
            ≈ ( α_1² / (4π w_1²) + α_2² / (4π w_2²) + 2 α_1 α_2 ∫∫ N(x² + y² | 0, w_1²) N(x² + y² | 0, w_2²) dx dy ) / (α_1² + α_2²)   (39)
            ∝ ( α_1² / w_1² + α_2² / w_2² + 4 α_1 α_2 / (w_1² + w_2²) ) / (α_1² + α_2²)   (40)

which has extrema at

α_2 / α_1 = ( w_1⁴ − w_2⁴ ± √( w_1⁸ + 14 w_1⁴ w_2⁴ + w_2⁸ ) ) / ( 4 w_1² w_2² )   (41)

(which we found with the help of Sage ?). By defining ω = w_2 / w_1, we get the simpler expression:

α_2 / α_1 = ( 1 − ω⁴ ± √( 1 + 14 ω⁴ + ω⁸ ) ) / ( 4 ω² )   (42)

which is still strikingly complex, but shows the expected behavior, as illustrated in Figure 8.
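A brute-force check of this result (a sketch, not the paper's Sage computation): numerically maximize the squared signal-to-noise of equation 40 over the weight ratio and compare against equation 42.

# Minimal sketch (not the paper's Sage computation): check equation 42 by
# maximizing the squared signal-to-noise of equation 40 over rho = alpha_2/alpha_1,
# for Gaussian PSFs with width ratio omega = w_2/w_1 (w_1 = 1 here).
import numpy as np

def sn2(rho, omega):
    """Proportional to [SN(D_c)]^2 of equation 40, with w_1 = 1, w_2 = omega."""
    return (1.0 + rho ** 2 / omega ** 2 + 4.0 * rho / (1.0 + omega ** 2)) / (1.0 + rho ** 2)

def optimal_ratio(omega):
    """alpha_2 / alpha_1 from equation 42 (the '+' root maximizes the S/N here)."""
    return (1.0 - omega ** 4 + np.sqrt(1.0 + 14.0 * omega ** 4 + omega ** 8)) / (4.0 * omega ** 2)

for omega in (0.5, 1.0, 2.0):
    rho_grid = np.linspace(0.0, 10.0, 200_001)
    rho_best = rho_grid[np.argmax(sn2(rho_grid, omega))]
    print(omega, rho_best, optimal_ratio(omega))   # brute force agrees with equation 42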

7. Discussion

- similarities and differences from Szalay et al.: SED-matched filters, optimal subspace filtering
- galaxy detection?
- No weights go to zero: all images contribute *something*, in proportion to the information they bring. Contrast with lucky imaging and some co-adding schemes.
- sub-pixel shifts: lower the threshold so that the max pixel is above threshold; post-process to cut out peaks that really were below threshold.

8. Conclusions

[[What to do with a detection image; how to proceed. Build models, fit simultaneously to all images in the stack.]]

[[Practicalities: bad pixels / masks, interpolation, etc.]]

A. Appendices

A.1. Optimal linear detection

Let us assume that there exists a linear filter whose output allows optimal detection of isolated point sources. That is, we seek a (two-dimensional) set of coefficients a_i that, when correlated with the image S_j, produces a map M_j whose peak is the likely location of the point source. See Figure 7. The linear filtering (correlation) operation is

M_j = Σ_{i ∈ A} a_i S_{i+j}   (A1)

where A is the support of a (integer pixel positions), and the center of a is (0, 0). We will demand that the elements of a are non-negative and sum to unity. Inserting equation 6, we get

M_j ∼ Σ_{i ∈ A} a_i [ C_k ψ(i + j − k) + N(0, σ_1²) ]   (A2)
    ∼ N( C_k Σ_{i ∈ A} a_i ψ(i + j − k), Σ_{i ∈ A} a_i² σ_1² )   (A3)

and the per-pixel signal-to-noise in the map is

SN(M_j) = C_k Σ_i a_i ψ(i + j − k) / ( σ_1 √( Σ_i a_i² ) ) .   (A4)

We want to choose coefficients a_i to maximize the signal-to-noise at the true pixel position of the source, k. Rewriting the expression using dot products and norms, treating the two-dimensional images a_i and ψ(i + j − k) as vectors indexed by i, we have:

SN(M_j) = C_k ( a · ψ(j − k) ) / ( σ_1 √(a · a) )   (A5)
        = C_k ‖a‖ ‖ψ(j − k)‖ cos θ / ( σ_1 ‖a‖ )   (A6)
        = C_k ‖ψ(j − k)‖ cos θ / σ_1   (A7)

where θ is a generalized angle between a and ψ(j − k). At the pixel position of the source, k,

SN(M_k) = C_k ‖ψ‖ cos θ / σ_1 .   (A8)

Clearly this is maximized when θ = 0, i.e., when a is a multiple of ψ(·), the PSF evaluated at a grid of integer pixel positions. Since we have defined both the PSF and the coefficients a to sum to unity, we find that the optimal linear filter for detection is given by:

a = ψ(·) ,   (A9)

which means that the operation of correlating the image with its PSF produces a map with optimal signal-to-noise. Repeating equation A1, we have found that the map M_j can be computed by correlating the image with its PSF:

M_j = Σ_{i ∈ A} ψ_i S_{i+j} ,   (A10)

where, as before, A is the support of the PSF and ψ_i = ψ(i) is an image of the PSF evaluated at integer pixel positions i. The signal-to-noise in this map at the true source pixel position k is

SN(M_k) = C_k ‖ψ‖ / σ_1 .   (A11)

A.1.1. Norm of a Gaussian PSF

For a Gaussian PSF with standard deviation w pixels,

ψ^G(x, y) = ( 1 / (2π w²) ) exp( −x² / (2w²) ) exp( −y² / (2w²) )   (A12)

(All norms in this paper are ℓ2 norms: ‖X‖ = √( Σ_i X_i² ).)

the norm is

‖ψ^G‖² = ‖ (1 / (2π w²)) exp( −x² / (2w²) ) exp( −y² / (2w²) ) ‖²   (A13)
       = ∫_x ∫_y ( 1 / (4π² w⁴) ) exp( −x² / w² ) exp( −y² / w² ) dx dy   (A14)
       = 1 / (4π w²) , i.e., ‖ψ^G‖ = 1 / (2 √π w) ,   (A15)

so the detection map has signal-to-noise at the true source position k,

SN(D_k^G) = C_k / ( 2 √π w σ_1 ) .   (A16)

Note, however, that we have defined the point-spread function ψ(·) to be the pixel-convolved response, so it cannot be exactly Gaussian if the pixel response is assumed to be a boxcar function. In practice, however, a two-dimensional Gaussian with variance v correlated with a two-dimensional boxcar function is well approximated by a Gaussian with variance v + 1/12, as long as v ≳ 1.

A.1.2. Why not signal-to-noise squared?

In correlating the image with the PSF, it looks like the detection map weights pixels by their signal-to-noise, rather than signal-to-noise squared. This apparent conflict can be resolved by scaling the pixel values so that each pixel is an estimate of the same quantity. That is, we want to estimate the total source counts C, but the pixels contain estimates of the source counts scaled by the PSF, C ψ; we must undo this scaling by multiplying the pixels by 1/ψ. Given a source at position k, we define the image K whose pixels each contain an estimate of the total source counts:

K_j = S_j / ψ_{j−k}   (A17)
K_j ∼ (1 / ψ_{j−k}) N( C_k ψ_{j−k}, σ_1² )   (A18)
K_j ∼ N( C_k, σ_1² / ψ_{j−k}² ) .   (A19)

The signal-to-noise remains the same, since we have just scaled the values:

SN(K_j) = C_k ψ_{j−k} / σ_1   (A20)
SN(K_j) = SN(S_j) .   (A21)

As before, the detection map pixels are a linear combination of the (shifted) pixels of the K image, with weights b_i:

D_j = Σ_i b_i K_{i+j}   (A22)
    ∼ Σ_i b_i N( C_k, σ_1² / ψ_{i+j−k}² )   (A23)
    ∼ N( C_k Σ_i b_i, σ_1² Σ_i b_i² / ψ_{i+j−k}² )   (A24)

and the signal-to-noise in that detection map at pixel k is

SN(D_k) = C_k Σ_i b_i / ( σ_1 √( Σ_i b_i² / ψ_i² ) )   (A25)

which is maximized by setting the b_i proportional to the signal-to-noise squared, as expected:

b_i ∝ ψ_i² .   (A26)

Fig. 6. An example image. Top-left: without noise, with a source at an integer pixel offset, k = (5, 5). The image is simply a shifted version of the PSF function ψ(·) sampled at pixel positions. Top-right: slice through y = 5, the source centroid. Bottom-left: with noise added. Bottom-right: slice through y = 5, with noise. The dashed line shows the standard deviation σ_1 of the per-pixel noise. (Axis labels: x (pixels), y (pixels), pixel value (counts).)

Fig. 7. The detection map of the example image. Top-left: pixelized detection map. Top-right: slice through y = 5, with dashed lines showing 1, 3, and 5 standard deviations of the noise in the detection map pixels. Bottom-left: subsampled detection map. The true peak position is marked with solid lines. The detected peak position is marked with dashed lines. Bottom-right: slice through y = 5. (Axis labels: x (pixels), y (pixels), detection map value.)

Fig. 8. Left: the fractional weight α_2/(α_1 + α_2) assigned to one image in an optimal co-add of two images with different PSF widths, from equation 42, as a function of the ratio of PSF widths ω = w_2/w_1. In the limit that image I_2 has a PSF that is much narrower (ω ≪ 1), the co-add weights that image highly. When the PSF widths are equal (ω = 1), the two images are assigned equal weight, and when the PSF is much broader (ω ≫ 1) the weight goes to zero, as expected. Also shown is the weighting by 1/ω², which has a similar overall shape. Right: the fraction of the available signal-to-noise captured by the 1/ω² and optimal co-add weightings. Notice that neither captures all the available signal-to-noise when the PSFs are different, since detecting sources in a co-add necessarily entails using a mismatched filter rather than an optimal matched filter for detection. Using the optimal co-add weighting achieves about a 1% increase in signal-to-noise in detecting point sources when the PSF widths differ by a factor of two. (Legend and axis labels: Equal weighting, 1/ω² weighting, Optimal weighting; Fractional weight of image 2; Fraction of available S/N achieved; Ratio of PSF widths, ω = w_2/w_1.)


SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. SUPPLEMENTARY FILE S1: 3D AIRWAY TUBE RECONSTRUCTION AND CELL-BASED MECHANICAL MODEL. RELATED TO FIGURE 1, FIGURE 7, AND STAR METHODS. 1. 3D AIRWAY TUBE RECONSTRUCTION. RELATED TO FIGURE 1 AND STAR METHODS

More information

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1

Last update: May 4, Vision. CMSC 421: Chapter 24. CMSC 421: Chapter 24 1 Last update: May 4, 200 Vision CMSC 42: Chapter 24 CMSC 42: Chapter 24 Outline Perception generally Image formation Early vision 2D D Object recognition CMSC 42: Chapter 24 2 Perception generally Stimulus

More information

Advanced Image Combine Techniques

Advanced Image Combine Techniques Advanced Image Combine Techniques Ray Gralak November 16 2008 AIC 2008 Important Equation 1 of 22 (Joke! ) Outline Some History The Imager s Enemies artifacts! The Basic Raw Image Types Image Calibration

More information

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection

Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides

More information

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels

Edge detection. Convert a 2D image into a set of curves. Extracts salient features of the scene More compact than pixels Edge Detection Edge detection Convert a 2D image into a set of curves Extracts salient features of the scene More compact than pixels Origin of Edges surface normal discontinuity depth discontinuity surface

More information

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur

Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Physics I : Oscillations and Waves Prof. S Bharadwaj Department of Physics & Meteorology Indian Institute of Technology, Kharagpur Lecture - 20 Diffraction - I We have been discussing interference, the

More information

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will

Recent advances in Metamodel of Optimal Prognosis. Lectures. Thomas Most & Johannes Will Lectures Recent advances in Metamodel of Optimal Prognosis Thomas Most & Johannes Will presented at the Weimar Optimization and Stochastic Days 2010 Source: www.dynardo.de/en/library Recent advances in

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Image Restoration and Reconstruction

Image Restoration and Reconstruction Image Restoration and Reconstruction Image restoration Objective process to improve an image, as opposed to the subjective process of image enhancement Enhancement uses heuristics to improve the image

More information

Final Review CMSC 733 Fall 2014

Final Review CMSC 733 Fall 2014 Final Review CMSC 733 Fall 2014 We have covered a lot of material in this course. One way to organize this material is around a set of key equations and algorithms. You should be familiar with all of these,

More information

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford

Harder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford Image matching Harder case by Diva Sian by Diva Sian by scgbt by swashford Even harder case Harder still? How the Afghan Girl was Identified by Her Iris Patterns Read the story NASA Mars Rover images Answer

More information

Photon counting strategies with low-light-level CCDs

Photon counting strategies with low-light-level CCDs Mon. Not. R. Astron. Soc. 345, 985 99 (3) Photon counting strategies with low-light-level CCDs A. G. Basden, C. A. Haniff and C. D. Mackay Astrophysics Group, Cavendish Laboratory, Madingley Road, Cambridge

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

An Intuitive Explanation of Fourier Theory

An Intuitive Explanation of Fourier Theory An Intuitive Explanation of Fourier Theory Steven Lehar slehar@cns.bu.edu Fourier theory is pretty complicated mathematically. But there are some beautifully simple holistic concepts behind Fourier theory

More information

INTENSITY TRANSFORMATION AND SPATIAL FILTERING

INTENSITY TRANSFORMATION AND SPATIAL FILTERING 1 INTENSITY TRANSFORMATION AND SPATIAL FILTERING Lecture 3 Image Domains 2 Spatial domain Refers to the image plane itself Image processing methods are based and directly applied to image pixels Transform

More information

Scanner Parameter Estimation Using Bilevel Scans of Star Charts

Scanner Parameter Estimation Using Bilevel Scans of Star Charts ICDAR, Seattle WA September Scanner Parameter Estimation Using Bilevel Scans of Star Charts Elisa H. Barney Smith Electrical and Computer Engineering Department Boise State University, Boise, Idaho 8375

More information

Capturing, Modeling, Rendering 3D Structures

Capturing, Modeling, Rendering 3D Structures Computer Vision Approach Capturing, Modeling, Rendering 3D Structures Calculate pixel correspondences and extract geometry Not robust Difficult to acquire illumination effects, e.g. specular highlights

More information

Stereo imaging ideal geometry

Stereo imaging ideal geometry Stereo imaging ideal geometry (X,Y,Z) Z f (x L,y L ) f (x R,y R ) Optical axes are parallel Optical axes separated by baseline, b. Line connecting lens centers is perpendicular to the optical axis, and

More information

Multiple Model Estimation : The EM Algorithm & Applications

Multiple Model Estimation : The EM Algorithm & Applications Multiple Model Estimation : The EM Algorithm & Applications Princeton University COS 429 Lecture Dec. 4, 2008 Harpreet S. Sawhney hsawhney@sarnoff.com Plan IBR / Rendering applications of motion / pose

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

CS 664 Segmentation. Daniel Huttenlocher

CS 664 Segmentation. Daniel Huttenlocher CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Outline 7/2/201011/6/

Outline 7/2/201011/6/ Outline Pattern recognition in computer vision Background on the development of SIFT SIFT algorithm and some of its variations Computational considerations (SURF) Potential improvement Summary 01 2 Pattern

More information

Image processing. Reading. What is an image? Brian Curless CSE 457 Spring 2017

Image processing. Reading. What is an image? Brian Curless CSE 457 Spring 2017 Reading Jain, Kasturi, Schunck, Machine Vision. McGraw-Hill, 1995. Sections 4.2-4.4, 4.5(intro), 4.5.5, 4.5.6, 5.1-5.4. [online handout] Image processing Brian Curless CSE 457 Spring 2017 1 2 What is an

More information

EECS 556 Image Processing W 09. Image enhancement. Smoothing and noise removal Sharpening filters

EECS 556 Image Processing W 09. Image enhancement. Smoothing and noise removal Sharpening filters EECS 556 Image Processing W 09 Image enhancement Smoothing and noise removal Sharpening filters What is image processing? Image processing is the application of 2D signal processing methods to images Image

More information

Image Enhancement: To improve the quality of images

Image Enhancement: To improve the quality of images Image Enhancement: To improve the quality of images Examples: Noise reduction (to improve SNR or subjective quality) Change contrast, brightness, color etc. Image smoothing Image sharpening Modify image

More information

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute.

Dynamic Reconstruction for Coded Aperture Imaging Draft Unpublished work please do not cite or distribute. Dynamic Reconstruction for Coded Aperture Imaging Draft 1.0.1 Berthold K.P. Horn 2007 September 30. Unpublished work please do not cite or distribute. The dynamic reconstruction technique makes it possible

More information

ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies"

ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing Larry Matthies ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies" lhm@jpl.nasa.gov, 818-354-3722" Announcements" First homework grading is done! Second homework is due

More information

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus

Image Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus Image Processing BITS Pilani Dubai Campus Dr Jagadish Nayak Image Segmentation BITS Pilani Dubai Campus Fundamentals Let R be the entire spatial region occupied by an image Process that partitions R into

More information

Key properties of local features

Key properties of local features Key properties of local features Locality, robust against occlusions Must be highly distinctive, a good feature should allow for correct object identification with low probability of mismatch Easy to etract

More information

Sampling, Aliasing, & Mipmaps

Sampling, Aliasing, & Mipmaps Sampling, Aliasing, & Mipmaps Last Time? Monte-Carlo Integration Importance Sampling Ray Tracing vs. Path Tracing source hemisphere Sampling sensitive to choice of samples less sensitive to choice of samples

More information

Computer Vision I - Basics of Image Processing Part 1

Computer Vision I - Basics of Image Processing Part 1 Computer Vision I - Basics of Image Processing Part 1 Carsten Rother 28/10/2014 Computer Vision I: Basics of Image Processing Link to lectures Computer Vision I: Basics of Image Processing 28/10/2014 2

More information