
Distortion-Invariant FOPEN Detection Filter Improvements

David Casasent, Kim Ippolito (Carnegie Mellon University) and Jacques Verly (MIT Lincoln Laboratory)

ABSTRACT

Various new improvements to the MINACE distortion-invariant filter are considered. The detection (P_D) and false-alarm (P_FA) improvements obtained are noted; P_D improved by 25%. Initial ROC data and algorithm fusion results indicate that these new filters can improve the performance obtained by other methods.

Keywords: correlation filters, detection, foliage penetration (FOPEN) radar, synthetic aperture radar (SAR).

1. INTRODUCTION

Foliage penetration (FOPEN) wideband synthetic aperture radar (SAR) sensors can provide high-resolution (0.33 m x 0.66 m) imagery of objects behind foliage. The processing of such data to produce imagery is detailed elsewhere [1]. Prior work on object detection in FOPEN data has used contrast box methods [2], split-aperture techniques [3], polarization information [4,5], and other techniques. The SAR imagery used for this effort was processed by MIT Lincoln Laboratory and was collected during a four-day experiment at Grayling, Michigan. The experiment utilized 33 military vehicles and consisted of four flight paths over the terrain. One flight collected target signature data from targets deployed in the open, while the other flights were devoted to collecting data with the vehicles hidden in the trees and along tree lines. The SAR used was the ERIM/NAWC UWB (215 MHz-724 MHz) P-3 SAR, a fully polarimetric sensor flown on a Navy P-3 aircraft [1]. The data was collected in three polarizations (HH, HV, VV). We combined these to produce Polarimetric Whitening Filter (PWF) imagery using whitening filter concepts detailed earlier [6,7,8,9].
The whitening process combines the three complex-valued channels into a real-valued image by the formula

x = y^H Σ^-1 y,   (1)

where y = [HH HV VV]^T is the vector containing the elements from the three complex-valued channels and Σ is the 3x3 Hermitian covariance matrix of the clutter data. No zero cross-correlation assumptions were made. All tests used this PWF data and imagery at a 45° depression angle. All data were log scaled after PWF with no offset. We consider detection of class A1 objects (specifically object 1 in class A) in different aspect views in foliage using a distortion-invariant MINACE filter [10] and rejection of false alarms. The filters are formed from different aspect views of the object in the open (training set); detection performance (P_D) is given for different aspect views of the object in foliage (test set); initial false-alarm performance (P_FA) is given for 200 regions of one scene that gave worst-case false alarms in other tests. The training set (objects in the open, referred to as open objects) was cut out from mission 20 pass 2 through 9 (M20P2-M20P9) data; a 101x101-pixel region around the center of each aspect view of object A1 was extracted, and from these a 41x41-pixel filter was formed (this size covers most object pixels with some background present around the object). The test set, referred to as foliage objects, was similarly extracted from M18P9, M18P19, M18P21, and M18P43. The 200 worst-case clutter chips (101x101 pixels) were extracted from M19P43. The 41x41-pixel filter allows search of a 61x61-pixel region in the center of each 101x101-pixel test image. If a correlation peak above threshold occurred within 20 pixels of the center of a test object chip, a detection was declared (P_D); for clutter chips and regions, if a correlation peak above threshold occurred anywhere, a false alarm was declared (P_FA). All energy values calculated are in 41x41-pixel regions (the size of the filter).
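The whitening step of (1) is simple to implement. The following is an illustrative NumPy sketch (not the Lincoln Laboratory production code); the function name and the assumption that the clutter covariance has already been estimated from a clutter-only region are ours:

```python
import numpy as np

def pwf_image(hh, hv, vv, sigma_clutter):
    """Polarimetric Whitening Filter: combine three complex channels
    into one real image via x = y^H Sigma^-1 y (Eq. 1).

    hh, hv, vv    : 2-D complex arrays (same shape), one per polarization.
    sigma_clutter : 3x3 Hermitian clutter covariance (assumed estimated
                    beforehand from a clutter-only region).
    """
    rows, cols = hh.shape
    y = np.stack([hh, hv, vv], axis=-1).reshape(-1, 3)   # pixels x 3
    sigma_inv = np.linalg.inv(sigma_clutter)
    # Quadratic form per pixel: x = y^H Sigma^-1 y (real, non-negative)
    x = np.einsum('pi,ij,pj->p', y.conj(), sigma_inv, y).real
    return x.reshape(rows, cols)
```

Log scaling (as used for all data here) would then simply be applied to the returned image. With an identity covariance, the PWF output reduces to the sum of the squared channel magnitudes, which is a convenient sanity check.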
Section 2 discusses the distortion-invariant filter design. Section 3 describes the database. Section 4 presents P_D versus P_FA (200 clutter chips) results for various filter improvements. Section 5 presents initial full-scene ROC and algorithm fusion results.

2. MINACE FILTER SYNTHESIS

MINACE (minimum noise and average correlation plane energy) distortion-invariant filters have proven very attractive for detection and recognition of objects in infrared [10] and SAR [11,12] imagery. Our initial tests on FOPEN data use only the basic MINACE filter. We then address various filter improvements. We denote vectors (matrices) as lower- (upper-) case bold letters. All data are Fourier transform (FT) data, as synthesis is easier. The FT of the filter is h (all vectors are one-dimensional lexicographically ordered two-dimensional data). The columns of the data matrix X are the FTs of the training set images (different aspect views of the objects). We require the filter to give correlation plane values of 1 for each training set image; this peak constraint is described by

X^H h = u = [1, ..., 1]^T,   (2)

where (·)^H denotes conjugate transpose and the elements of the vector u are the specified correlation peak values for the distorted object views. To improve performance, we define objective functions that we wish to minimize: E_S (correlation plane energy due to the signal) and E_D (correlation plane energy due to distorted versions of the objects). We thus also require the filter h to minimize the objective function

E = (1 - c) E_S + c E_D,   (3)

which is a weighted combination of two objective functions; the control parameter c (0 ≤ c ≤ 1) is chosen to emphasize minimization of E_S or E_D. The correlation plane energy due to all training set images is E_S = h^H S h (this is the energy leaving the matched spatial filter in an FT correlator), where S is a diagonal matrix whose entries are the average spectrum of the full set of training set images. The correlation plane energy due to distorted versions of the training set images is similarly E_D = h^H D h, where the elements of the diagonal matrix D follow from some model for distortions; we use a zero-mean white Gaussian noise model, with the elements of D being the variance σ_N^2 of the noise model. Thus,

E = (1 - c) h^H S h + c h^H D h = h^H [(1 - c) S + c D] h = h^H T h.   (4)

The Lagrange multiplier solution that satisfies (2) and minimizes (4) is

h = T^-1 X (X^H T^-1 X)^-1 u.   (5)

Reducing E_S in (3) subject to the peak constraint in (2) produces delta-function-like correlation peaks; this reduces correlation plane sidelobes and hence localizes the object well in the correlation plane. To achieve this, the filter emphasizes high spatial frequencies in the object data. Hence, minimizing E_S tends to reject imagery not present in the training set; i.e., it minimizes false alarms, but can make detection of test set images not present in the training set difficult. Conversely, minimizing E_D emphasizes lower spatial frequencies and thus improves detection P_D, while making P_FA worse.
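The closed-form solution (5), together with the iterative selection of training views (adding the view with the lowest central correlation peak until all peaks reach a goal value), can be sketched in NumPy as follows. This is our own illustrative sketch, not the authors' code; the FFT-domain peak convention and default parameters are assumptions. Note that the preprocessing T is computed once from the full training set, so it does not change as views are added:

```python
import numpy as np

def minace(X, T):
    """Closed-form MINACE filter h = T^-1 X (X^H T^-1 X)^-1 u (Eq. 5).
    Columns of X are lexicographically ordered FTs of the chosen views;
    T is the diagonal of the preprocessing matrix (kept as a vector)."""
    u = np.ones(X.shape[1])                  # unit peak constraints (Eq. 2)
    Ti_X = X / T[:, None]                    # T^-1 X (diagonal solve)
    return Ti_X @ np.linalg.solve(X.conj().T @ Ti_X, u)

def greedy_minace(images, c, sigma_n2=1.0, peak_goal=0.8):
    """Iterative synthesis sketch: start from images[0] (assumed closest
    to head-on) and keep adding the aspect view with the lowest central
    correlation peak until all peaks reach peak_goal."""
    X = np.stack([np.fft.fft2(im).ravel() for im in images], axis=1)
    S = np.mean(np.abs(X) ** 2, axis=1)      # avg power spectrum of FULL set,
    T = (1.0 - c) * S + c * sigma_n2         # so T is fixed as views are added
    chosen = [0]
    while True:
        h = minace(X[:, chosen], T)
        peaks = np.real(X.conj().T @ h)      # central peak for every view
        worst = int(np.argmin(peaks))
        if peaks[worst] >= peak_goal:
            return h, chosen
        chosen.append(worst)
```

Views already in the filter satisfy the constraint (2) exactly (peak 1), so the loop always terminates, in the worst case after every view has been included.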
An intermediate value of c is thus preferable. MINACE filter synthesis allows such flexibility and thus offers improved performance over other distortion-invariant filters; it requires selection of only one filter parameter, c, and is thus easier to design than other filters with several parameters to be selected. In conclusion, large (small) values of c improve P_D (P_FA). In synthesizing h, we select a value of c and choose the object aspect view closest to head-on as the first image to include in the filter. We synthesize a filter with this single aspect view, correlate it against the rest of the training set object aspect views, locate the aspect view with the lowest correlation value (at the central peak), and form a new filter from the old one and this new object aspect view. We continue this process until the filter recognizes all training set images with central correlation peak values above some value (we used 0.8). The parameter c is defined as

c = σ_N^2 / max_i |x_i(0)|^2,   (6)

where the denominator is the maximum value at dc of the power spectrum of any training set image i and the numerator σ_N^2 is the variance of the noise. The energy term minimized in (3) is written in a new form. The definitions of S and c are also new. The filter preprocessing T performed is now independent of the number of training set images included in the filter; thus, it does not change as aspect views are added to the filter.

3. DATABASE

Figure 1 shows several different aspect views of the A1 object in the open (training set). This object was chosen as it has the most open (80) and foliage (44) aspect views in the database. As seen, the image varies radically with aspect. Even a 1° change results in a large difference (e.g., Figure 1, g vs. h). In addition, a large range of object energy is present in these 80 aspect views (e.g., Figure 1, b vs. g). In this paper we denote the average energy per pixel (in a 41x41-pixel region) by E.
For the open images, E varied by roughly a factor of two. The ability to detect objects with a factor-of-two difference in energy with one filter, with comparable filter outputs, is quite challenging. For the 44 foliage objects, E varied by a factor of four (from 80 to 360). The 0° view is the head-on view. In all images, cross-range is vertical (the sensor was located to the left of the image).

Figure 1. Training set: Several aspect views (41x41) of object A1 in the open.

Figure 2 shows several typical object aspect views in foliage (test set). Some have reasonable object-to-background ratios (Figure 2a,b), others are very poor (Figure 2e-h), and a 1° difference can significantly affect the imagery (Figure 2b,c). The energy of the objects in clutter is reduced by 30% to 60% with respect to objects in the open. Clearly, detection of objects in foliage in FOPEN imagery is quite difficult.

Figure 2. Test set: Several aspect views (41x41) of object A1 in foliage.

Figure 3 shows several examples with the same (or nearly the same) object aspect view in the open versus in foliage. There is significant loss of object information, contrast, and energy. Figure 4 shows several typical clutter chips. Some have high-energy regions and some have shapes that are very much like those of objects in foliage.

Figure 3. Similar object aspect view in the open (top) and in foliage (bottom).

Figure 4. Typical false alarm regions.

4. TEST RESULTS

We consider the detection (P_D) and false-alarm (P_FA) performance of our filters on the 44 test objects (A1) and the 200 worst-case clutter chips (at low P_D). Various filter improvements are considered. We denote the number of object aspect views in the filter by N_T and the total number of training images by N_TR = 80. The energy E is the average energy per pixel in a 41x41-pixel region (our filter size) of our filter. The purpose of these tests was to determine which filter modifications to include, to quantify which gave the most improvement, and to analyze this in future work. Several general expectations are advanced. If N_T increases, we expect the filter energy E to increase, and we expect both effects to be bad. We can increase c to reduce N_T and E, but then we expect P_FA to be worse. Thus, a filter with a lower c is expected to have better clutter rejection. In presenting data, we will frequently show P_D versus P_FA. But, to provide easier comparison as various filter improvements are considered, we will compare P_D at two P_FA values (25% and 35%). These P_FA values are only for the 200 worst-case clutter chips, and thus we do not refer to these as full-scene ROC (receiver operating characteristic) data. We address ROC data in Section 5.

4.1 Baseline results

The filter with the newly defined parameters gave the results in Table 1. As seen, P_D is very low. This is expected given the severe degradation in FOPEN versus open imagery and the similarity of clutter to objects.

Test | Enhancement | P_FA = 25% | P_FA = 35%
1 | Baseline | P_D = 54.6% | P_D = 59.1%

Table 1. Initial baseline filter performance.

4.2 Realigned Training Set Imagery

The training set open images of the different object aspect views are centered on the object center. However, the resulting image of the object in different aspect views has the object energy shifted noticeably (see Figure 1).
In filter synthesis, we add different training set object aspect view images to form the filter; if these images are shifted, the extent of the filter will increase, and it will not adequately convey the common information present in the different object aspect views. In addition, the filter energy and N_T will increase, as the common information in the various object aspect views is not being optimally captured and combined. To overcome this, we recenter the different training set images and use this realigned training set to synthesize our filter. We considered various methods for realignment and chose cross-correlation. In this technique, we start with the 0° image and correlate it with the next closest two aspect view images. We shift the new aspect view images so that the correlation peak occurs at the central pixel. These shifted adjacent aspect views are then used as the reference, and we correlate them with their neighboring aspect views. The new views are recentered, and the process continues until all aspect view images are realigned. Table 2 shows the results for this realigned-training-set filter and for the baseline filter at P_FA = 25%. As seen, P_D increased by over 11%, from 54.6% to 65.9%. Table 2 also notes the c value used. As seen, c decreased from 0.04 to 0.02 (a decreased c tends to result in a lower P_FA, or a better P_D for a fixed P_FA). The correlation threshold T used for P_FA = 25% is also noted. As seen, it is lower for the new filter; hence an increased P_D, but with improved clutter rejection (due to the lower c). The number of training set aspect view images included in the filter, N_T, increases (from 7 to 9), but the filter performs better. The filter energy remained the same at 1.2x10^-5, even though N_T increased. The training set image shifts needed were significant; 62 of the 80 training images required shifts, and 10 of these were over 5 pixels. The maximum shift needed was 8 pixels and the average shift was 2.2 pixels.
As expected, the training images included in the filter (which are selected automatically) were also now different.
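The cross-correlation realignment step can be sketched as follows. This is a hypothetical NumPy version using circular FFT correlation (the authors' exact implementation is not given); it shifts an image so that its correlation peak with a reference lands on the zero-shift position:

```python
import numpy as np

def align_to(reference, image):
    """Shift `image` (circularly) so its cross-correlation peak with
    `reference` occurs at zero offset; a sketch of the realignment step,
    not the authors' exact procedure."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(reference) *
                                np.conj(np.fft.fft2(image))))
    dr, dc = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap the circular peak location to signed shifts
    rows, cols = corr.shape
    dr = dr - rows if dr > rows // 2 else dr
    dc = dc - cols if dc > cols // 2 else dc
    return np.roll(image, (dr, dc), axis=(0, 1)), (dr, dc)
```

In the chained procedure of the text, each newly aligned view would then serve as the reference for its neighboring aspect views.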

Test | Enhancement | c | T | N_T | Aspects (deg) | E | P_D
1 | Baseline | 0.04 | … | 7 | 8, 69, 308, 229, 27, 38, 138 | 1.2x10^-5 | 54.6%
2 | Realigned Training Images | 0.02 | … | 9 | 8, 119, 229, 307, 199, 147, 38, 137, … | 1.2x10^-5 | 65.9%

Table 2. Detection results with realigned training images at P_FA = 25%.

Thus, a filter using realigned training set images performs better. This occurs because the realignment makes the training set images more similar. This allows c to decrease and P_D and P_FA to improve. With centered training set imagery, the filter is better able to look for specific object information, rather than also having to handle differences between aspect views due to image shifts. Figure 5 shows the spatial-domain image of the baseline filter and of the realigned-training-image filter. As seen, the spatial support is less (the filter is more compact) with this new filter improvement.

Figure 5. Spatial domain filter images: (a) baseline and (b) with realigned training images.

4.3 Search Window Increase (+/- 5 pixels)

Despite the alignment procedure used (Section 4.2), we will not be able to perfectly align all training set images with respect to a filter. Thus, in evaluating how many training set images to add to the filter, we consider the use of a search window; i.e., we originally evaluate the correlation of our filter with each training set image at only the central point, but when the training set images are not aligned, we must allow search over a larger number of pixels (a larger search window). We initially consider this improvement alone on the baseline filter. The results are shown in Table 3 at P_FA = 25% and 35% for the original and the new (larger search window) filter. These P_D improvements are not significant (a 2% P_D increase at P_FA = 25% and no improvement at P_FA = 35%, where P_D = 59.1% in both cases). This is expected, since the effect of this algorithm improvement will not be seen until it is combined with the realigned-training-set filter improvement of Section 4.2.
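The windowed peak evaluation itself is simple; a minimal sketch (our own helper, assuming a correlation plane whose center corresponds to the nominal alignment):

```python
import numpy as np

def window_peak(corr_plane, half_width=5):
    """Largest correlation value within +/- half_width pixels of the
    center of the correlation plane (the +/-5 pixel search window)."""
    r, c = corr_plane.shape[0] // 2, corr_plane.shape[1] // 2
    win = corr_plane[r - half_width:r + half_width + 1,
                     c - half_width:c + half_width + 1]
    return float(win.max())
```

During synthesis, this windowed peak (rather than the single central value) would be compared against the 0.8 goal when deciding which aspect views to add.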
Test | Enhancement | P_D (P_FA = 25%) | P_D (P_FA = 35%)
1 | Baseline | 54.6% | 59.1%
3 | +/-5 Pixel Search Window | 56.8% | 59.1%

Table 3. Improvement of the larger search window alone.

Table 4 shows the results of both the realignment and increased-search-window improvements versus those of the baseline filter and those of the realigned filter alone. As seen, P_D at P_FA = 25% improves by over 2% when the increased-window-size technique is added to the realigned-training-set method. This filter synthesis improvement is attractive theoretically and gives better results, and thus it is also used. From Table 4, we also note a reduction in the number of aspect view images included in the filter, from 7 in the original case or 9 in the realigned-only case to 4 in the new combined realigned and increased-search-window case. We also note a decreased filter energy, from 1.2x10^-5 for the original filter to 0.7x10^-5 in the combined case. This filter parameter thus seems to reflect the associated filter P_D and P_FA improvements. The combined realignment and search window improvements have increased P_D from 54.6% to 68.2% (a 13.6% increase). As before, the aspect views chosen change for the different cases.

Test | Enhancement | N_T | Aspects (deg) | E | P_FA = 25% | P_FA = 35%
1 | Baseline | 7 | 8, 69, 308, 229, 27, 38, 138 | 1.2x10^-5 | P_D = 54.6% | P_D = 59.1%
2 | Realigned Training Images | 9 | 8, 119, 229, 307, 199, 147, 38, 137, … | 1.2x10^-5 | P_D = 65.9% | P_D = 72.7%
3 | +/-5 Pixel Search Window | 6 | 8, 69, 229, 148, 340, … | 1.1x10^-5 | P_D = 56.8% | P_D = 59.1%
4 | (2) & (3) | 4 | 8, 229, 148, … | 0.7x10^-5 | P_D = 68.2% | P_D = 75.0%

Table 4. Comparison of (2) realigned training images, (3) larger search window, and (4) both improvements.

4.4 Normalized Correlations and Zero-Mean Filters

The next filter improvement we considered was the use of normalized correlations and a zero-mean filter. We note that the training set images have a large range of energies (a factor of two) and that we require a similar output correlation peak value for all training set images. The image aspect view closest to head-on is selected as the first image included in the filter. The next aspect view included is the one with the lowest correlation value with the filter. The second and subsequent aspect views included in the filter will thus be those that look most different from the prior images, but also those with lower energy. To remove the second effect, we use normalized correlations and synthesize the filter from normalized training set images. When we correlate a filter with a training set image to select training set images to include in the filter, we scale the correlation plane output magnitude by 1/σ of the associated 41x41 input image region. This makes σ = (energy)^(1/2) of all training set object aspect views equal. The filter now selects the aspect views to be included based on those with different structure, rather than those with differences in energy. During correlation with an input scene, the output correlation plane is scaled by 1/σ in each local 41x41 region of the input. The output we use is the magnitude of the correlation. This scaling factor is easily obtained from the correlation of the input scene with a unit 41x41 filter with all values equal to one. Since any input-scene region with a low σ is scaled up by 1/σ, we disregard any portion of the input scene with σ below σ_min = 262. We expect false-alarm problems with such filters, since each local clutter region will now have the same energy as that of a bright target, and uniform input regions will have a low σ and hence a large correlation plane value after normalization.
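The local-σ normalization described above can be sketched as follows. In this hypothetical NumPy version, σ is taken as the square root of the local energy, obtained by correlating the squared scene with the unit (all-ones) filter; circular FFT correlation, the boundary handling, and the small-σ guard are our own simplifying assumptions:

```python
import numpy as np

def normalized_correlation(scene, filt, sigma_min):
    """Correlation magnitude scaled by 1/sigma of each local region.
    sigma = sqrt(local energy), computed by correlating scene**2 with an
    all-ones filter of the same size as filt; regions with sigma below
    sigma_min are disregarded (output set to 0)."""
    def corr(a, kernel):
        k = np.zeros_like(a)
        k[:kernel.shape[0], :kernel.shape[1]] = kernel
        return np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(k))))
    ones = np.ones(filt.shape)
    sigma = np.sqrt(np.maximum(corr(scene ** 2, ones), 0.0))
    out = np.abs(corr(scene, filt))
    return np.where(sigma >= sigma_min, out / np.maximum(sigma, 1e-12), 0.0)
```

Raising sigma_min to a value such as the σ_min = 262 used in the text suppresses the uniform low-σ regions that would otherwise be inflated by the 1/σ scaling.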
Thus, we also make the normalized correlation filters zero-mean. We do this by automatically including a second training set image (a unit 41x41 input) with the first training set object aspect view. We require the filter to give an output of 1 for the object aspect view and 0 for the unit input. This forces the filter to be zero-mean. It remains zero-mean as subsequent aspect view images are added, and the filter now selects the aspect view images to add based on those with different shape and structure, not on their energy. Table 5 shows the results of this filter improvement added to the prior best filter case (Test 4). As seen, P_D increases further by a significant amount (about 10%). The four filter improvements are thus all retained; together they significantly improve P_D, by 25%. They allow detection of P_D = 86.4% of the objects, while rejecting 65% of the worst-case clutter chips.

Test | Enhancement | P_D (P_FA = 25%) | P_D (P_FA = 35%)
1 | Baseline | 54.6% | 59.1%
2 | Realigned Training Images | 65.9% | 72.7%
3 | +/-5 Pixel Search Window | 56.8% | 59.1%
4 | (2) & (3) | 68.2% | 75.0%
5 | (4) & Normalized/Zero-Mean | 77.3% | 86.4%

Table 5. Summarized P_D performance of the various filter improvements at P_FA = 25% and 35%.

4.5 Summary

Tables 6 and 7 list all filter parameters for each filter improvement for the filters used at P_FA = 25% and 35%. Figure 6 shows the P_D and P_FA (200 worst-case clutter chips) data for the different filter improvements. The improvements occur at all P_D and P_FA values (except at very low P_FA, where the Test 4 and Test 5 filters perform similarly). Thus, the present improved MINACE filter is chosen to include the improvements noted: realigned training set images, a larger search window, and normalized correlations with a zero-mean filter.
Test | Enhancement | c | T | N_T | Aspects (deg) | E | P_D
1 | Baseline | 0.04 | … | 7 | 8, 69, 308, 229, 27, 38, 138 | 1.15x10^-5 | 54.6%
2 | Realigned Training Images | 0.02 | … | 9 | 8, 119, 229, 307, 199, 147, 38, 137, … | 1.21x10^-5 | 65.9%
3 | +/-5 Pixel Search Window | … | … | 6 | 8, 69, 229, 148, 340, … | … | 56.8%
4 | (2) & (3) | … | … | 4 | 8, 229, 148, … | … | 68.2%
5 | (4) & Normalized/Zero-Mean | … | … | … | …, 69, 157, 307, 198, … | … | 77.3%

Table 6. Filter parameters for each filter improvement for the filter used at P_FA = 25%.

Test | Enhancement | c | T | N_T | Aspects (deg) | E | P_D
1 | Baseline | … | … | 7 | 8, 69, 308, 229, 27, 38, 138 | 13.3x… | 59.1%
2 | Realigned Training Images | … | … | … | …, 118, 229, … | … | 72.7%
3 | +/-5 Pixel Search Window | … | … | … | …, 69, 229, … | … | 59.1%
4 | (2) & (3) | … | … | … | …, 229, … | … | 75.0%
5 | (4) & Normalized/Zero-Mean | … | … | … | …, 69, 157, 307, 198, … | … | 86.4%

Table 7. Filter parameters for each filter improvement for the filter used at P_FA = 35%.

Figure 6. Performance comparison of the various filter improvements.

4.6 Single Filter Performance

In compiling the data in Section 4, the filter parameter c was varied for each P_FA choice and for each filter improvement. This was necessary to fairly compare the best possible performance of each filter improvement at any given P_FA. We do not expect, and do not find, the same c choice to be best as we include different filter improvements. Similarly, a different c value is expected to be best in different P_FA and P_D regions; a higher (lower) c is expected to be best at higher (lower) P_D. However, in practice a single filter with one c choice (or several filters with different c values) would be used. Figure 7 shows the performance of a single filter with c = 0.04 and with c = 0.05. These curves correspond to decreasing the correlation plane threshold for a single filter until all A1 objects are detected. The performance of the two filters is fairly comparable. Thus, the choice of c is not critical. A single filter performs worse than the "optimal" c choice at lower P_D and P_FA, since a lower c value is expected to be best there. Thus, we expect to use 2-3 filters with different c values in different regions of P_D and P_FA.

Figure 7. Comparison of filters with different c values.

We expect different filters to be better in different regions of a P_D versus P_FA curve; specifically, we expect filters with lower c values to perform better at low P_FA, and filters with larger c values to perform better at higher P_D and P_FA. Figure 8 shows this expected trend. If the desired P_D or P_FA range of operation is known, a single filter with one c value can be selected. If operation over a large P_D and P_FA range is needed, then several filters seem useful.

Figure 8. Comparison of improved MINACE filters with different c values.

5. INITIAL ROC AND ALGORITHM FUSION RESULTS

To provide ROC data, P_D versus P_FA (false alarms per km^2), we used two full frames of data and recorded all target detections (P_D) and false alarms in these frames (a 101x101 window was used for each false alarm). Only a single improved MINACE filter, with c = 0.035, was used. It was applied to these two frames, and the correlation plane threshold was decreased to generate the ROC curve in Figure 9 (only the upper portion of the ROC curve, with P_D > 0.65, is shown; this is the expected region of operation). There are only 24 A1 objects in these two frames (12 per frame). Thus, to increase the number of objects on which data is collected, we also used twelve 101x101-pixel chips of A1 objects in foliage from frame M18P9L4F1. This gave us

ROC data for 36 A1 objects and two full scenes. Only the portion of the scenes with valid data was included in all calculations. To obtain comparative data from which to assess how well the CMU MINACE filters perform, the performance of the MIT Lincoln Laboratory (MIT/LL) CFAR FOPEN detector on the same data is shown in Figure 9. As seen, the MINACE filter provides better P_D and P_FA results. At 250 false alarms/km^2, P_D is over 15% better; at P_D = 85%, the false alarms/km^2 are reduced by about one third (by about 100). Thus, initial tests of these filters are encouraging.

Figure 9. ROC data for a single c = 0.035 MINACE filter (CMU) and for the Lincoln Laboratory (MIT/LL) CFAR FOPEN detector on the same 36 A1 objects and 2 full frames of clutter.

We feel that the best use of these improved MINACE filters is in conjunction with the MIT/LL CFAR detectors using algorithm fusion. As an initial example of this, we applied the MIT/LL algorithm to the database with a fixed threshold, so that all 36 A1 objects were detected (along with 1628 false alarms). We then applied a single (c = 0.035) CMU improved MINACE filter to these 36 + 1628 = 1664 regions and reduced the correlation plane threshold. The results of this simple initial algorithm fusion test are shown in the MIT/LL+CMU curve in Figure 10, together with the two prior ROC curves from Figure 9 for comparison. As seen, the algorithm fusion ROC curve generally lies to the left of the ROC curve of either algorithm alone. Thus, this initial test indicates that algorithm fusion should be superior to the use of either algorithm alone.
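The threshold-sweep construction of such an ROC curve can be sketched as follows. The inputs here are hypothetical: the per-chip maximum (normalized) correlation values for the targets and the peak values of the false-alarm regions found in the frames:

```python
import numpy as np

def roc_points(target_peaks, clutter_peaks, area_km2):
    """Sweep the correlation-plane threshold downward over all observed
    peak values and report (P_D, false alarms per km^2) pairs, in the
    manner used to trace the ROC curves of Figure 9."""
    target_peaks = np.asarray(target_peaks, dtype=float)
    clutter_peaks = np.asarray(clutter_peaks, dtype=float)
    points = []
    for t in np.unique(np.concatenate([target_peaks, clutter_peaks]))[::-1]:
        pd = float(np.mean(target_peaks >= t))       # fraction detected
        fa = float(np.sum(clutter_peaks >= t)) / area_km2
        points.append((pd, fa))
    return points
```

For the fusion experiment, the same sweep would simply be applied to the peaks the MINACE filter produces on only the regions passed by the fixed-threshold MIT/LL detector.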

Figure 10. Initial algorithm fusion (MIT/LL+CMU) results are superior to those of either algorithm alone (CMU, MIT/LL).

6. SUMMARY AND CONCLUSION

New MINACE distortion-invariant filter improvements have been advanced and tested on FOPEN data. They show significant improvements in performance; P_D increased by 25%. Thus, these improvements were adopted into a new filter design for future tests. Additional improvements are possible, but there was no time to pursue them. Initial tests show that the new MINACE filter performs better than the present MIT/LL baseline CFAR-based detector for FOPEN SAR data. Since the MINACE-based CMU and CFAR-based MIT/LL processors are intended to operate as detectors (i.e., to locate potential target regions without too many false alarms) and since their on-line computational complexities are similar, the comparison made here is fair and justified. In addition, the better detection performance of the MINACE filter should be expected, since this filter uses more detailed spatial and amplitude information than a simple CFAR filter; in a sense, the MINACE detector is better tuned to the data. Whether this advantage will be maintained in the presence of multiple target types remains to be seen. One potential problem for the MINACE approach is capacity: it will become more and more difficult for a single MINACE filter to deal with an increasing number of target types. Initial tests also indicate that fusion of the MIT/LL and CMU detector results will give better FOPEN SAR detection performance than either detector used alone. These issues will be addressed in future work using larger databases and more target types.

7. ACKNOWLEDGEMENTS

The authors thank Serpil Ayasli, Steve Crooks, and Peter Henstock from MIT Lincoln Laboratory, Paul Maloney from SM&A, and Mark Davis from DARPA for their support, interest, and suggestions during this initial research.
This work was sponsored by the Defense Advanced Research Projects Agency under Air Force Contract F...C....

8. REFERENCES

1. M. Toups et al., "Foliage penetration data collections and investigations utilizing the P-3 UWB SAR," Algorithms for Synthetic Aperture Radar Imagery III, E. G. Zelnio and R. J. Douglass, eds., Proc. SPIE 2757, 1996.
2. L. Novak et al., "Performance of a high-resolution polarimetric SAR automatic target recognition system," The Lincoln Laboratory Journal, 6(1):11-23.
3. J. Nanis et al., "Adaptive filters for detection of targets in foliage," IEEE Transactions on Aerospace and Electronic Systems, 10(8):34-36, August.
4. D. MacDonald et al., "Automatic detection and cueing for foliage concealed targets," Algorithms for Synthetic Aperture Radar Imagery III, E. G. Zelnio and R. J. Douglass, eds., Proc. SPIE 2757, 1996.

5. R. Kapoor and N. Nandhakumar, "Multi-aperture ultra-wideband SAR processing with polarimetric diversity," Proc. SPIE 2487:26-37.
6. J. G. Fleischman et al., "Part I: Foliage attenuation and backscatter analysis of SAR imagery," IEEE Transactions on Aerospace and Electronic Systems, vol. 32.
7. M. E. Toups et al., "Part II: Analysis of foliage-induced synthetic pattern distortions," IEEE Transactions on Aerospace and Electronic Systems, vol. 32.
8. J. G. Fleischman et al., "Part III: Multichannel whitening of SAR imagery," IEEE Transactions on Aerospace and Electronic Systems, vol. 32.
9. L. M. Novak and C. M. Netishen, "Polarimetric synthetic aperture radar imaging," International Journal of Imaging Systems and Technology, vol. 4.
10. G. Ravichandran and D. Casasent, "Minimum noise and correlation energy filter," Applied Optics, vol. 31, April.
11. D. Casasent and R. Shenoy, "Synthetic aperture radar detection and clutter rejection MINACE filters," Pattern Recognition, 30(1).
12. D. Casasent and S. Ashizawa, "Synthetic aperture radar detection, recognition, and clutter rejection with new minimum noise and correlation energy filters," Optical Engineering, vol. 36, October 1997.


More information

International Research Journal of Engineering and Technology (IRJET) e-issn: Volume: 04 Issue: 09 Sep p-issn:

International Research Journal of Engineering and Technology (IRJET) e-issn: Volume: 04 Issue: 09 Sep p-issn: Automatic Target Detection Using Maximum Average Correlation Height Filter and Distance Classifier Correlation Filter in Synthetic Aperture Radar Data and Imagery Puttaswamy M R 1, Dr. P. Balamurugan 2

More information

USING A CLUSTERING TECHNIQUE FOR DETECTION OF MOVING TARGETS IN CLUTTER-CANCELLED QUICKSAR IMAGES

USING A CLUSTERING TECHNIQUE FOR DETECTION OF MOVING TARGETS IN CLUTTER-CANCELLED QUICKSAR IMAGES USING A CLUSTERING TECHNIQUE FOR DETECTION OF MOVING TARGETS IN CLUTTER-CANCELLED QUICKSAR IMAGES Mr. D. P. McGarry, Dr. D. M. Zasada, Dr. P. K. Sanyal, Mr. R. P. Perry The MITRE Corporation, 26 Electronic

More information

PROOF COPY JOE. Recognition of live-scan fingerprints with elastic distortions using correlation filters

PROOF COPY JOE. Recognition of live-scan fingerprints with elastic distortions using correlation filters Recognition of live-scan fingerprints with elastic distortions using correlation filters Craig Watson National Institute of Standards and Technology MS8940 Gaithersburg, Maryland 20899 David Casasent,

More information

Development and assessment of a complete ATR algorithm based on ISAR Euler imagery

Development and assessment of a complete ATR algorithm based on ISAR Euler imagery Development and assessment of a complete ATR algorithm based on ISAR Euler imagery Baird* a, R. Giles a, W. E. Nixon b a University of Massachusetts Lowell, Submillimeter-Wave Technology Laboratory (STL)

More information

AUTOMATIC TARGET RECOGNITION IN HIGH RESOLUTION SAR IMAGE BASED ON BACKSCATTERING MODEL

AUTOMATIC TARGET RECOGNITION IN HIGH RESOLUTION SAR IMAGE BASED ON BACKSCATTERING MODEL AUTOMATIC TARGET RECOGNITION IN HIGH RESOLUTION SAR IMAGE BASED ON BACKSCATTERING MODEL Wang Chao (1), Zhang Hong (2), Zhang Bo (1), Wen Xiaoyang (1), Wu Fan (1), Zhang Changyao (3) (1) National Key Laboratory

More information

Detection, Classification, & Identification of Objects in Cluttered Images

Detection, Classification, & Identification of Objects in Cluttered Images Detection, Classification, & Identification of Objects in Cluttered Images Steve Elgar Washington State University Electrical Engineering 2752 Pullman, Washington 99164-2752 elgar@eecs.wsu.edu Voice: (509)

More information

Разработки и технологии в области защитных голограмм

Разработки и технологии в области защитных голограмм Разработки и технологии в области защитных голограмм SECURITY HOLOGRAM MASTER-MATRIX AUTOMATIC QUALITY INSPECTION BASED ON SURFACE RELIEF MICRO-PHOTOGRAPHS DIGITAL PROCESSING Zlokazov E., Shaulskiy D.,

More information

Computer Experiments: Space Filling Design and Gaussian Process Modeling

Computer Experiments: Space Filling Design and Gaussian Process Modeling Computer Experiments: Space Filling Design and Gaussian Process Modeling Best Practice Authored by: Cory Natoli Sarah Burke, Ph.D. 30 March 2018 The goal of the STAT COE is to assist in developing rigorous,

More information

Coherence Based Polarimetric SAR Tomography

Coherence Based Polarimetric SAR Tomography I J C T A, 9(3), 2016, pp. 133-141 International Science Press Coherence Based Polarimetric SAR Tomography P. Saranya*, and K. Vani** Abstract: Synthetic Aperture Radar (SAR) three dimensional image provides

More information

Feature Enhancement and ATR Performance Using Nonquadratic Optimization-Based SAR Imaging

Feature Enhancement and ATR Performance Using Nonquadratic Optimization-Based SAR Imaging I. INTRODUCTION Feature Enhancement and ATR Performance Using Nonquadratic Optimization-Based SAR Imaging MÜJDAT ÇETIN, Member, IEEE M.I.T. WILLIAM C. KARL, Senior Member, IEEE DAVID A. CASTAÑON, Senior

More information

The HPEC Challenge Benchmark Suite

The HPEC Challenge Benchmark Suite The HPEC Challenge Benchmark Suite Ryan Haney, Theresa Meuse, Jeremy Kepner and James Lebak Massachusetts Institute of Technology Lincoln Laboratory HPEC 2005 This work is sponsored by the Defense Advanced

More information

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS

SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract

More information

A comparison of fully polarimetric X-band ISAR imagery of scaled model tactical targets

A comparison of fully polarimetric X-band ISAR imagery of scaled model tactical targets A comparison of fully polarimetric X-band ISAR imagery of scaled model tactical targets Thomas M. Goyette * a, Jason C. Dickinson a, Robert Giles a, Jerry Waldman a, William E. Nixon b a Submillimeter-Wave

More information

Spatial Enhancement Definition

Spatial Enhancement Definition Spatial Enhancement Nickolas Faust The Electro- Optics, Environment, and Materials Laboratory Georgia Tech Research Institute Georgia Institute of Technology Definition Spectral enhancement relies on changing

More information

Figure 1. T72 tank #a64.

Figure 1. T72 tank #a64. Quasi-Invariants for Recognition of Articulated and Non-standard Objects in SAR Images Grinnell Jones III and Bir Bhanu Center for Research in Intelligent Systems University of California, Riverside, CA

More information

New Results on the Omega-K Algorithm for Processing Synthetic Aperture Radar Data

New Results on the Omega-K Algorithm for Processing Synthetic Aperture Radar Data New Results on the Omega-K Algorithm for Processing Synthetic Aperture Radar Data Matthew A. Tolman and David G. Long Electrical and Computer Engineering Dept. Brigham Young University, 459 CB, Provo,

More information

Analysis of the Impact of Non-Quadratic Optimization-based SAR Imaging on Feature Enhancement and ATR Performance

Analysis of the Impact of Non-Quadratic Optimization-based SAR Imaging on Feature Enhancement and ATR Performance 1 Analysis of the Impact of Non-Quadratic Optimization-based SAR Imaging on Feature Enhancement and ATR Performance Müjdat Çetin, William C. Karl, and David A. Castañon This work was supported in part

More information

Detection of Buried Objects using GPR Change Detection in Polarimetric Huynen Spaces

Detection of Buried Objects using GPR Change Detection in Polarimetric Huynen Spaces Detection of Buried Objects using GPR Change Detection in Polarimetric Huynen Spaces Firooz Sadjadi Lockheed Martin Corporation Saint Anthony, Minnesota USA firooz.sadjadi@ieee.org Anders Sullivan Army

More information

Fusion of Radar and EO-sensors for Surveillance

Fusion of Radar and EO-sensors for Surveillance of Radar and EO-sensors for Surveillance L.J.H.M. Kester, A. Theil TNO Physics and Electronics Laboratory P.O. Box 96864, 2509 JG The Hague, The Netherlands kester@fel.tno.nl, theil@fel.tno.nl Abstract

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Classification of Targets in SAR Images Using ISAR Data

Classification of Targets in SAR Images Using ISAR Data See discussions, stats, and author profiles for this publication at: http://www.researchgate.net/publication/235107119 Classification of Targets in SAR Images Using ISAR Data ARTICLE APRIL 2005 READS 16

More information

Motivation. Gray Levels

Motivation. Gray Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

Ultrasonic Multi-Skip Tomography for Pipe Inspection

Ultrasonic Multi-Skip Tomography for Pipe Inspection 18 th World Conference on Non destructive Testing, 16-2 April 212, Durban, South Africa Ultrasonic Multi-Skip Tomography for Pipe Inspection Arno VOLKER 1, Rik VOS 1 Alan HUNTER 1 1 TNO, Stieltjesweg 1,

More information

NAVAL POSTGRADUATE SCHOOL THESIS

NAVAL POSTGRADUATE SCHOOL THESIS NAVAL POSTGRADUATE SCHOOL MONTEREY, CALIFORNIA THESIS INVESTIGATION OF COHERENT AND INCOHERENT CHANGE DETECTION ALGORITHMS by Nicholas S. Underwood December 2017 Thesis Advisor: Co-Advisor: Second Reader:

More information

EE640 FINAL PROJECT HEADS OR TAILS

EE640 FINAL PROJECT HEADS OR TAILS EE640 FINAL PROJECT HEADS OR TAILS By Laurence Hassebrook Initiated: April 2015, updated April 27 Contents 1. SUMMARY... 1 2. EXPECTATIONS... 2 3. INPUT DATA BASE... 2 4. PREPROCESSING... 4 4.1 Surface

More information

A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality

A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality A Comparison of Still-Image Compression Standards Using Different Image Quality Metrics and Proposed Methods for Improving Lossy Image Quality Multidimensional DSP Literature Survey Eric Heinen 3/21/08

More information

Textural Features for Image Database Retrieval

Textural Features for Image Database Retrieval Textural Features for Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington Seattle, WA 98195-2500 {aksoy,haralick}@@isl.ee.washington.edu

More information

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm

EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm EE368 Project Report CD Cover Recognition Using Modified SIFT Algorithm Group 1: Mina A. Makar Stanford University mamakar@stanford.edu Abstract In this report, we investigate the application of the Scale-Invariant

More information

Combining Gabor Features: Summing vs.voting in Human Face Recognition *

Combining Gabor Features: Summing vs.voting in Human Face Recognition * Combining Gabor Features: Summing vs.voting in Human Face Recognition * Xiaoyan Mu and Mohamad H. Hassoun Department of Electrical and Computer Engineering Wayne State University Detroit, MI 4822 muxiaoyan@wayne.edu

More information

A Challenge Problem for 2D/3D Imaging of Targets from a Volumetric Data Set in an Urban Environment

A Challenge Problem for 2D/3D Imaging of Targets from a Volumetric Data Set in an Urban Environment A Challenge Problem for 2D/3D Imaging of Targets from a Volumetric Data Set in an Urban Environment Curtis H. Casteel, Jr,*, LeRoy A. Gorham, Michael J. Minardi, Steven M. Scarborough, Kiranmai D. Naidu,

More information

by Using a Phase-Error Correction Algorithm

by Using a Phase-Error Correction Algorithm Detecting Moving Targets in SAR Imagery by Using a Phase-Error Correction Algorithm J.R. Fienup and A.M. Kowalczyk Environmental Research Institute of Michigan P.O. Box 134001, Ann Arbor, MI 48113-4001

More information

Automated Hyperspectral Target Detection and Change Detection from an Airborne Platform: Progress and Challenges

Automated Hyperspectral Target Detection and Change Detection from an Airborne Platform: Progress and Challenges Automated Hyperspectral Target Detection and Change Detection from an Airborne Platform: Progress and Challenges July 2010 Michael Eismann, AFRL Joseph Meola, AFRL Alan Stocker, SCC Presentation Outline

More information

Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature

Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature ITM Web of Conferences, 0500 (07) DOI: 0.05/ itmconf/070500 IST07 Research on Multi-sensor Image Matching Algorithm Based on Improved Line Segments Feature Hui YUAN,a, Ying-Guang HAO and Jun-Min LIU Dalian

More information

The SIFT (Scale Invariant Feature

The SIFT (Scale Invariant Feature The SIFT (Scale Invariant Feature Transform) Detector and Descriptor developed by David Lowe University of British Columbia Initial paper ICCV 1999 Newer journal paper IJCV 2004 Review: Matt Brown s Canonical

More information

Do It Yourself 2. Representations of polarimetric information

Do It Yourself 2. Representations of polarimetric information Do It Yourself 2 Representations of polarimetric information The objectives of this second Do It Yourself concern the representation of the polarimetric properties of scatterers or media. 1. COLOR CODED

More information

Image Compression. -The idea is to remove redundant data from the image (i.e., data which do not affect image quality significantly)

Image Compression. -The idea is to remove redundant data from the image (i.e., data which do not affect image quality significantly) Introduction Image Compression -The goal of image compression is the reduction of the amount of data required to represent a digital image. -The idea is to remove redundant data from the image (i.e., data

More information

Artifacts and Textured Region Detection

Artifacts and Textured Region Detection Artifacts and Textured Region Detection 1 Vishal Bangard ECE 738 - Spring 2003 I. INTRODUCTION A lot of transformations, when applied to images, lead to the development of various artifacts in them. In

More information

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking

Feature descriptors. Alain Pagani Prof. Didier Stricker. Computer Vision: Object and People Tracking Feature descriptors Alain Pagani Prof. Didier Stricker Computer Vision: Object and People Tracking 1 Overview Previous lectures: Feature extraction Today: Gradiant/edge Points (Kanade-Tomasi + Harris)

More information

Cycle Criteria for Detection of Camouflaged Targets

Cycle Criteria for Detection of Camouflaged Targets Barbara L. O Kane, Ph.D. US Army RDECOM CERDEC NVESD Ft. Belvoir, VA 22060-5806 UNITED STATES OF AMERICA Email: okane@nvl.army.mil Gary L. Page Booz Allen Hamilton Arlington, VA 22203 David L. Wilson,

More information

Motivation. Intensity Levels

Motivation. Intensity Levels Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding

More information

Fourier analysis of low-resolution satellite images of cloud

Fourier analysis of low-resolution satellite images of cloud New Zealand Journal of Geology and Geophysics, 1991, Vol. 34: 549-553 0028-8306/91/3404-0549 $2.50/0 Crown copyright 1991 549 Note Fourier analysis of low-resolution satellite images of cloud S. G. BRADLEY

More information

Fast Anomaly Detection Algorithms For Hyperspectral Images

Fast Anomaly Detection Algorithms For Hyperspectral Images Vol. Issue 9, September - 05 Fast Anomaly Detection Algorithms For Hyperspectral Images J. Zhou Google, Inc. ountain View, California, USA C. Kwan Signal Processing, Inc. Rockville, aryland, USA chiman.kwan@signalpro.net

More information

FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD

FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD FRESNEL EQUATION RECIPROCAL POLARIZATION METHOD BY DAVID MAKER, PH.D. PHOTON RESEARCH ASSOCIATES, INC. SEPTEMBER 006 Abstract The Hyperspectral H V Polarization Inverse Correlation technique incorporates

More information

Wide Angle, Staring Synthetic Aperture Radar

Wide Angle, Staring Synthetic Aperture Radar 88 ABW-12-0578 Wide Angle, Staring Synthetic Aperture Radar Feb 2012 Ed Zelnio Sensors Directorate Air Force Research Laboratory Outline Review SAR Focus on Wide Angle, Staring SAR (90%) Technology Challenges

More information

Clutter model for VHF SAR imagery

Clutter model for VHF SAR imagery Clutter model for VHF SAR imagery Julie Ann Jackson and Randolph L. Moses The Ohio State University, Department of Electrical and Computer Engineering 2015 Neil Avenue, Columbus, OH 43210, USA ABSTRACT

More information

ERROR RECOGNITION and IMAGE ANALYSIS

ERROR RECOGNITION and IMAGE ANALYSIS PREAMBLE TO ERROR RECOGNITION and IMAGE ANALYSIS 2 Why are these two topics in the same lecture? ERROR RECOGNITION and IMAGE ANALYSIS Ed Fomalont Error recognition is used to determine defects in the data

More information

ROC Analysis of ATR from SAR images using a Model-Based. Recognizer Incorporating Pose Information

ROC Analysis of ATR from SAR images using a Model-Based. Recognizer Incorporating Pose Information ROC Analysis of ATR from SAR images using a Model-Based Recognizer Incorporating Pose Information David Cyganski, Brian King, Richard F. Vaz, and John A. Orr Machine Vision Laboratory Electrical and Computer

More information

Conspicuous Character Patterns

Conspicuous Character Patterns Conspicuous Character Patterns Seiichi Uchida Kyushu Univ., Japan Ryoji Hattori Masakazu Iwamura Kyushu Univ., Japan Osaka Pref. Univ., Japan Koichi Kise Osaka Pref. Univ., Japan Shinichiro Omachi Tohoku

More information

Daniel A. Lavigne Defence Research and Development Canada Valcartier. Mélanie Breton Aerex Avionics Inc. July 27, 2010

Daniel A. Lavigne Defence Research and Development Canada Valcartier. Mélanie Breton Aerex Avionics Inc. July 27, 2010 A new fusion algorithm for shadow penetration using visible and midwave infrared polarimetric images Daniel A. Lavigne Defence Research and Development Canada Valcartier Mélanie Breton Aerex Avionics Inc.

More information

Occluded Facial Expression Tracking

Occluded Facial Expression Tracking Occluded Facial Expression Tracking Hugo Mercier 1, Julien Peyras 2, and Patrice Dalle 1 1 Institut de Recherche en Informatique de Toulouse 118, route de Narbonne, F-31062 Toulouse Cedex 9 2 Dipartimento

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments

Image Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features

More information

Dietrich Paulus Joachim Hornegger. Pattern Recognition of Images and Speech in C++

Dietrich Paulus Joachim Hornegger. Pattern Recognition of Images and Speech in C++ Dietrich Paulus Joachim Hornegger Pattern Recognition of Images and Speech in C++ To Dorothea, Belinda, and Dominik In the text we use the following names which are protected, trademarks owned by a company

More information

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT.

Vivekananda. Collegee of Engineering & Technology. Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT. Vivekananda Collegee of Engineering & Technology Question and Answers on 10CS762 /10IS762 UNIT- 5 : IMAGE ENHANCEMENT Dept. Prepared by Harivinod N Assistant Professor, of Computer Science and Engineering,

More information

FEATURE LEVEL SENSOR FUSION

FEATURE LEVEL SENSOR FUSION Approved for public release; distribution is unlimited. FEATURE LEVEL SENSOR FUSION Tamar Peli, Mon Young, Robert Knox, Ken Ellis, Fredrick Bennett Atlantic Aerospace Electronics Corporation 470 Totten

More information

Sobel Edge Detection Algorithm

Sobel Edge Detection Algorithm Sobel Edge Detection Algorithm Samta Gupta 1, Susmita Ghosh Mazumdar 2 1 M. Tech Student, Department of Electronics & Telecom, RCET, CSVTU Bhilai, India 2 Reader, Department of Electronics & Telecom, RCET,

More information

Correlation filters for facial recognition login access control

Correlation filters for facial recognition login access control Correlation filters for facial recognition login access control Daniel E. Riedel, Wanquan Liu and Ronny Tjahyadi Department of Computing, Curtin University of Technology, GPO Box U1987 Perth, Western Australia

More information

ENHANCED RADAR IMAGING VIA SPARSITY REGULARIZED 2D LINEAR PREDICTION

ENHANCED RADAR IMAGING VIA SPARSITY REGULARIZED 2D LINEAR PREDICTION ENHANCED RADAR IMAGING VIA SPARSITY REGULARIZED 2D LINEAR PREDICTION I.Erer 1, K. Sarikaya 1,2, H.Bozkurt 1 1 Department of Electronics and Telecommunications Engineering Electrics and Electronics Faculty,

More information

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution

Detecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.

More information

OPTIMIZING A VIDEO PREPROCESSOR FOR OCR. MR IBM Systems Dev Rochester, elopment Division Minnesota

OPTIMIZING A VIDEO PREPROCESSOR FOR OCR. MR IBM Systems Dev Rochester, elopment Division Minnesota OPTIMIZING A VIDEO PREPROCESSOR FOR OCR MR IBM Systems Dev Rochester, elopment Division Minnesota Summary This paper describes how optimal video preprocessor performance can be achieved using a software

More information

The Detection of Faces in Color Images: EE368 Project Report

The Detection of Faces in Color Images: EE368 Project Report The Detection of Faces in Color Images: EE368 Project Report Angela Chau, Ezinne Oji, Jeff Walters Dept. of Electrical Engineering Stanford University Stanford, CA 9435 angichau,ezinne,jwalt@stanford.edu

More information

A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON DWT WITH SVD

A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON DWT WITH SVD A NEW ROBUST IMAGE WATERMARKING SCHEME BASED ON WITH S.Shanmugaprabha PG Scholar, Dept of Computer Science & Engineering VMKV Engineering College, Salem India N.Malmurugan Director Sri Ranganathar Institute

More information

Modeling and Estimation of FPN Components in CMOS Image Sensors

Modeling and Estimation of FPN Components in CMOS Image Sensors Modeling and Estimation of FPN Components in CMOS Image Sensors Abbas El Gamal a, Boyd Fowler a,haomin b,xinqiaoliu a a Information Systems Laboratory, Stanford University Stanford, CA 945 USA b Fudan

More information

Image Matching Using Run-Length Feature

Image Matching Using Run-Length Feature Image Matching Using Run-Length Feature Yung-Kuan Chan and Chin-Chen Chang Department of Computer Science and Information Engineering National Chung Cheng University, Chiayi, Taiwan, 621, R.O.C. E-mail:{chan,

More information

Training-Free, Generic Object Detection Using Locally Adaptive Regression Kernels

Training-Free, Generic Object Detection Using Locally Adaptive Regression Kernels Training-Free, Generic Object Detection Using Locally Adaptive Regression Kernels IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIENCE, VOL.32, NO.9, SEPTEMBER 2010 Hae Jong Seo, Student Member,

More information

MULTI-TEMPORAL SAR DATA FILTERING FOR LAND APPLICATIONS. I i is the estimate of the local mean backscattering

MULTI-TEMPORAL SAR DATA FILTERING FOR LAND APPLICATIONS. I i is the estimate of the local mean backscattering MULTI-TEMPORAL SAR DATA FILTERING FOR LAND APPLICATIONS Urs Wegmüller (1), Maurizio Santoro (1), and Charles Werner (1) (1) Gamma Remote Sensing AG, Worbstrasse 225, CH-3073 Gümligen, Switzerland http://www.gamma-rs.ch,

More information

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction

Compression of RADARSAT Data with Block Adaptive Wavelets Abstract: 1. Introduction Compression of RADARSAT Data with Block Adaptive Wavelets Ian Cumming and Jing Wang Department of Electrical and Computer Engineering The University of British Columbia 2356 Main Mall, Vancouver, BC, Canada

More information

IMAGING WITH SYNTHETIC APERTURE RADAR

IMAGING WITH SYNTHETIC APERTURE RADAR ENGINEERING SCIENCES ; t rical Bngi.net IMAGING WITH SYNTHETIC APERTURE RADAR Didier Massonnet & Jean-Claude Souyris EPFL Press A Swiss academic publisher distributed by CRC Press Table of Contents Acknowledgements

More information

Matched filters for multispectral point target detection

Matched filters for multispectral point target detection Matched filters for multispectral point target detection S. Buganim and S.R. Rotman * Ben-Gurion University of the Negev, Dept. of Electro-optical Engineering, Beer-Sheva, ISRAEL ABSRAC Spectral signatures

More information

( ) =cov X Y = W PRINCIPAL COMPONENT ANALYSIS. Eigenvectors of the covariance matrix are the principal components

( ) =cov X Y = W PRINCIPAL COMPONENT ANALYSIS. Eigenvectors of the covariance matrix are the principal components Review Lecture 14 ! PRINCIPAL COMPONENT ANALYSIS Eigenvectors of the covariance matrix are the principal components 1. =cov X Top K principal components are the eigenvectors with K largest eigenvalues

More information

Local Features: Detection, Description & Matching

Local Features: Detection, Description & Matching Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British

More information

An Angle Estimation to Landmarks for Autonomous Satellite Navigation

An Angle Estimation to Landmarks for Autonomous Satellite Navigation 5th International Conference on Environment, Materials, Chemistry and Power Electronics (EMCPE 2016) An Angle Estimation to Landmarks for Autonomous Satellite Navigation Qing XUE a, Hongwen YANG, Jian

More information

Photo-realistic Renderings for Machines Seong-heum Kim

Photo-realistic Renderings for Machines Seong-heum Kim Photo-realistic Renderings for Machines 20105034 Seong-heum Kim CS580 Student Presentations 2016.04.28 Photo-realistic Renderings for Machines Scene radiances Model descriptions (Light, Shape, Material,

More information

CS 534: Computer Vision Segmentation and Perceptual Grouping

CS 534: Computer Vision Segmentation and Perceptual Grouping CS 534: Computer Vision Segmentation and Perceptual Grouping Ahmed Elgammal Dept of Computer Science CS 534 Segmentation - 1 Outlines Mid-level vision What is segmentation Perceptual Grouping Segmentation

More information

Context based optimal shape coding

Context based optimal shape coding IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,

More information

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1

Feature Detection. Raul Queiroz Feitosa. 3/30/2017 Feature Detection 1 Feature Detection Raul Queiroz Feitosa 3/30/2017 Feature Detection 1 Objetive This chapter discusses the correspondence problem and presents approaches to solve it. 3/30/2017 Feature Detection 2 Outline

More information

Report: Reducing the error rate of a Cat classifier

Report: Reducing the error rate of a Cat classifier Report: Reducing the error rate of a Cat classifier Raphael Sznitman 6 August, 2007 Abstract The following report discusses my work at the IDIAP from 06.2007 to 08.2007. This work had for objective to

More information

Kohei Arai 1 Graduate School of Science and Engineering Saga University Saga City, Japan
