Active Tracking of Surface Targets in Fused Video
Allen Waxman, David Fay, Paul Ilardi, Pablo Arambel and Jeffrey Silver
Fusion Technology & Systems Division, BAE SYSTEMS Advanced Information Technologies, Burlington, MA, U.S.A.
allen.waxman@baesystems.com

Abstract - We seek to establish a quality metric for sensor-fused video based on the performance of a multisensor system that actively tracks multiple surface targets over an extended field of regard. Such a system must, in real-time, fuse multisensor/spectral imagery, detect targets reliably, and track the targets for extended time periods as they move, stop, approach, cross, and hide/emerge over an extended field of regard. The system must automatically control the pointing, zoom and modes of cameras so as to maintain and re-establish track as targets move outside the current field of view. All this must be done for an increasing number and density of targets in clutter. Individual performance metrics relating to these various stages of processing include percent detection vs. false alarm rate, track identity lifetime, track purity, track accuracy, field of regard, target revisit rate, and number of targets tracked. All of these metrics reflect the quality of the multisensor imagery and the algorithms (image fusion, detection, tracking, sensor resource management) for the multitarget tracking task. This same task applies to human observers tracking multiple surface targets in displays of multisensor/spectral video. One intuitively expects that the "better" the image quality, the more "effectively" one can track multiple targets. We are exploring this relationship between tracking and fused video as a quality metric. This paper introduces the computational fused video tracking system that forms the basis of this new effort. Future papers will provide results and the performance metrics noted above.
Keywords: Image fusion, image quality, target detection, multi-target tracking, sensor resource management

1 Introduction

With growing interest in the use of fused imagery for enhanced vision and target detection, and the emergence of real-time systems that can fuse multisensor video using a variety of alternative approaches, the question of fused image & video quality has arisen. A variety of traditional metrics exist for visible and infrared still imagery, and were summarized in [1] in the context of visible, thermal and fused image quality metrics. For dynamic multisensor imagery, we propose to develop fused video quality metrics based on performance of an explicitly dynamic task, that is, multi-target tracking over an extended field of regard. This is a task that can be quantified for human performance in a psychophysical context, and for a completely automatic target tracking system that utilizes fused multisensor video (e.g., VNIR & LWIR, SWIR & LWIR, RGB & LWIR, or VNIR, SWIR & LWIR), detects targets, fingerprints targets, tracks multiple targets, and controls its sensors to point among targets while maintaining wide-area search over the field of regard. In this paper we introduce our system for active tracking of surface (water & ground) targets in sensor-fused video. Each stage of processing has its own set of relevant metrics; however, it is the system as a whole that must perform the task of detecting and tracking multiple targets while maintaining wide-area situational awareness. Overall performance will depend on many real-world parameters that ultimately impact the quality of the fused video derived from the input imagery streams. Comparison of automated multi-target tracking to human performance using the same fused video will enable us to establish a relationship between quantifiable metrics derived from the automated system and the expected performance of human observers endowed with sensor-fused displays (e.g., goggles, cockpit displays, or UAV turret displays).
Establishing this relationship to human performance is the subject of future collaborative research. Here we focus on the fused video tracking system that supports this research. (This research was sponsored, in part, by the U.S. Air Force Office of Scientific Research.)

2 Image Fusion and Quality Factors

We have reviewed our opponent-color neural architecture for color image fusion in many previous publications [1-4], and noted alternative approaches to fusing imagery into panchromatic and color products [5-8]. These alternative fusion methods can start with identical input data and produce dramatically different results, and various measures of image quality would, no doubt, yield different results for these alternative fusion products. Thus, the method of image fusion itself can affect fused image quality (as clearly shown in Fig. 12 of ref. [1]). Our multi-resolution opponent-color image fusion architecture is shown in Figure 1. It combines gray-scale
brightness fusion on multiple scales [5] with opponent-sensor contrast mapped to human opponent-colors [2-4], and can be implemented in real-time on inexpensive commercial DSP boards [4, 9].

Figure 1: Multi-resolution opponent-color fusion network uses center-surround shunting dynamics to support local adaptive contrast enhancement and gain (LACEG), and two stages of opponent-color that are interpreted as the elements of the Y, I, Q human opponent-color space.

Typically, one uses this approach to fuse images containing complementary information, e.g., reflected light in day or night (VNIR or SWIR) with emitted light (MWIR or LWIR) from the scene. This same architecture can be used to fuse two, three or four imaging sensors, as shown in Figure 2. Clearly, as the spectral diversity of the input imagery increases, the resulting fused image quality is improved, inasmuch as it supports spontaneous visual segmentation into scenic objects.

Figure 2: The architecture of Figure 1 is used to fuse 2, 3, or 4 sensors, including VNIR/SWIR/MWIR/LWIR shown here, as collected under quarter-moon (27 mlux). Image quality improves, with increased color contrast, as more complementary spectral bands are fused.

Even the choice of spectral bands and sensing technology can affect image quality, due to a combination of the natural illumination in that band (e.g., VNIR vs SWIR at night) and the noise factor of the sensor, which impacts the noise-limiting resolution of the imagery. Figure 3 illustrates this with a comparison of fused VNIR/LWIR and SWIR/LWIR. Though the sampling resolution of the SWIR sensor (InGaAs focal plane) is much less than that of the VNIR sensor (intensified CCD), the resulting apparent resolution and quality of the fused SWIR/LWIR imagery appears higher than that of the fused VNIR/LWIR imagery.

Figure 3: Dual-sensor fusion of VNIR/LWIR using an intensified CCD for the VNIR channel, and SWIR/LWIR using an InGaAs imager for the SWIR channel, both using an uncooled microbolometer for the LWIR channel. Taken in overcast full-moon (40 mlux), the SWIR/LWIR image has better quality than the VNIR/LWIR image, though the intensified CCD has higher resolution than the InGaAs focal plane.

3 Target Detection in Fused Video

In order to track multiple targets, it is first necessary to detect the targets reliably in the fused imagery. It is often the case that false alarms may also be detected. However, in a tracking context, the temporal stability of the real targets should sustain them, whereas the likely temporal instability of the false alarms should suppress them at the tracker output. Thus, the temporal component of fused video serves to extend image quality beyond a static metric such as percent detection of targets vs. false alarms (Pd vs Pfa). In order to detect targets in fused imagery, we utilize the opponent-color contrasts created in the fusion process itself, together with the multisensor inputs, to create a feature vector as shown in Figure 4 [9].

Figure 4: Multi-modality (fused color, visible, SWIR, MWIR) and cross-band contrast image layers (opponent-band contrasts, contours, textures, 3D context) contribute to a feature vector at each pixel.
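The center-surround shunting dynamics behind LACEG and the opponent-color stages can be sketched compactly. The following is a minimal illustration, not the authors' implementation: the two bands are assumed registered, noise-cleaned, and scaled to [0, 1], and the function names, decay constant `A`, and Gaussian scales are all illustrative assumptions.

```python
# Sketch of steady-state center-surround shunting fusion (illustrative only).
import numpy as np
from scipy.ndimage import gaussian_filter

def shunting_contrast(center_img, surround_img, A=0.1, sc=1.0, ss=5.0):
    """Steady-state center-surround shunting: bounded opponent contrast."""
    C = gaussian_filter(center_img, sc)    # small-sigma excitatory center
    S = gaussian_filter(surround_img, ss)  # large-sigma inhibitory surround
    return (C - S) / (A + C + S)           # shunting normalization bounds output

def fuse_two_band(vnir, lwir):
    """Map within- and cross-band shunting contrasts to Y, I, Q, then to RGB."""
    y = shunting_contrast(vnir, vnir)  # brightness channel (contrast-enhanced VNIR)
    i = shunting_contrast(vnir, lwir)  # VNIR-center / LWIR-surround opponent contrast
    q = shunting_contrast(lwir, vnir)  # LWIR-center / VNIR-surround opponent contrast
    yiq = np.stack([y, i, q], axis=-1)
    m = np.array([[1.0,  0.956,  0.621],   # standard YIQ-to-RGB matrix
                  [1.0, -0.272, -0.647],
                  [1.0, -1.106,  1.703]])
    return np.clip(yiq @ m.T, 0.0, 1.0)
```

The shunting denominator makes the contrast self-normalizing, which is what gives the fusion its local adaptive gain; a multi-resolution version would simply repeat `shunting_contrast` over a pyramid of scales.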
This feature vector is then input to an adaptive classifier (shown in Figure 5) based on Adaptive Resonance Theory (ART) [10, 11], which we've modified for salient feature subspace discovery and real-time implementation [9].

Figure 5: Modified Fuzzy ARTMAP neural classifier is used to learn category & salient feature representations of targets and normalcy models of backgrounds.

In this form, the classifier learns fused representations of targets by example and counter-example as designated by an operator. Or, it can learn backgrounds through random sampling or operator designation, and then detect fused spectral anomalies as potential targets, which can themselves be learned as categories. Typically, with only two sensors (e.g., VNIR/LWIR or SWIR/LWIR), the classifier learns a more generic target detector, such as a man-detector or a boat-detector, as opposed to a specific target detector that serves to fingerprint a particular target among many similar targets. Clearly, this kind of target detector is not based on target shape or resolution. It is, essentially, a spectral-based target detector that exploits cross-sensor spectral contrast. This is illustrated in Figures 6 & 7.

Figure 6: The same neural architecture shown in Figure 1 is used to fuse three sensors, VNIR/MWIR/LWIR here. The various outputs of the opponent-colors are used to train a target detector for the spatially unresolved kayaks.

Figure 7: Dual-sensor video is fused in real-time. One man-target is designated by an operator as a target of interest, and the classifier rapidly learns a detector that then detects all men moving around in the scene, running in real-time on compact commercial hardware.
Key advantages of fused multisensor/spectral target detection are independence from both target shape and target motion. Independence from target shape enables detection of small, unresolved targets, hence supporting a wider field-of-view. Independence from target motion means that, when tracking targets from a moving platform, there is no need to stabilize the background in order to detect movers, nor to cope with false alarms due to imperfect background stabilization; stationary targets can be detected easily as well. However, purely spectral-based target detection also has its shortcomings, when other scenic elements have similar spectral content in the limited sensing bands available. As with the human visual system, the most robust target detectors will likely utilize a combination of the color, shape and motion pathways.
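The temporal-stability argument of Section 3 — real targets persist across frames while false alarms flicker — can be made concrete with a simple M-of-N confirmation rule at the tracker output. This is a generic sketch under assumed parameters, not the paper's tracker logic:

```python
# Illustrative M-of-N confirmation: a candidate is reported only if detected
# in at least m of the last n frames, suppressing temporally unstable false alarms.
from collections import deque

class TrackConfirmer:
    def __init__(self, m=3, n=5):
        self.m = m
        self.history = deque(maxlen=n)  # sliding window of recent detect/miss flags

    def update(self, detected_this_frame: bool) -> bool:
        self.history.append(detected_this_frame)
        return sum(self.history) >= self.m  # confirmed only when persistent
```

A real target detected on most frames is confirmed after a few updates, while a clutter-induced false alarm that fires sporadically never crosses the threshold, which is why video detection quality cannot be judged by single-frame Pd vs. Pfa alone.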
Nonetheless, we have been quite successful in detecting a large variety of target types in different backgrounds. The nature of our target detector, being based on continuous learning in an ART network (Fig. 5), supports adaptation of the representation while tracking the target. This helps tune the detector on-the-fly as the target moves through extended operating conditions and among confusers.

4 Multi-Hypothesis Tracking and Sensor Resource Management

Modern target trackers utilize a combination of Kalman filtering, interacting motion models, and multi-hypothesis association strategies [12]. This enables the tracker to deal with multiple targets undergoing maneuvers and coasting through temporary occlusions. Though commonly associated with radar tracking, similar methods have been applied to air-to-ground video tracking [13]. This same association strategy can be extended to track-to-track fusion as well as detection-to-track fusion [14], providing a means to fuse tracks generated by multiple sensor platforms. Typical performance metrics for trackers include track identity lifetime, track purity, and track accuracy.

Figure 8 illustrates fusion, detection and tracking of multiple targets on the sea surface. Some of these targets are stationary and some are moving. The sensors, consisting of a daylight CCD and a MWIR imager, were carried in a turret aboard an aircraft in flight. This example was computed in real-time from recorded flight imagery. We have conducted related experiments in real-time while flying over water-borne and ground-based targets, both moving and stationary. The fused color imagery has sufficient quality to enable detection of all targets in the field-of-view without any false alarms.

In order to track targets over an extended field-of-regard that exceeds the field-of-view of the sensors, it is necessary to have a dynamic sensor control strategy that supports both wide-area search as well as multi-target tracking.
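One building block of such trackers can be sketched directly: a constant-velocity Kalman filter with a chi-square validation gate for detection-to-track association. The noise values and gate threshold below are illustrative assumptions; the paper's MHT/ATIF tracker [12, 14] maintains multiple association hypotheses rather than the single gated update shown here.

```python
# Constant-velocity Kalman filter with chi-square gating (illustrative sketch).
import numpy as np

DT = 1.0  # frame interval
F = np.array([[1, 0, DT, 0],   # state: [x, y, vx, vy]
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # detections observe position only
Q = 0.01 * np.eye(4)  # process noise: allows mild maneuvers
R = 0.25 * np.eye(2)  # measurement noise: detector localization error
GATE = 9.21           # chi-square threshold, 2 dof, ~99% gate

def predict(x, P):
    """Propagate the state and covariance one frame forward."""
    return F @ x, F @ P @ F.T + Q

def gate_and_update(x, P, z):
    """Accept measurement z only if it falls inside the validation gate."""
    nu = z - H @ x                    # innovation
    S = H @ P @ H.T + R               # innovation covariance
    d2 = nu @ np.linalg.solve(S, nu)  # squared Mahalanobis distance
    if d2 > GATE:
        return x, P, False            # outside gate: leave track coasting
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    return x + K @ nu, (np.eye(4) - K @ H) @ P, True
```

In an MHT, a detection falling inside several tracks' gates spawns competing association hypotheses that are scored and pruned over time, rather than being resolved greedily at each frame.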
The combined control of sensor pointing (e.g., a turret or pan-tilt unit), field-of-view or zoom, and operating mode (e.g., frame-rate, resolution, or possibly integration time) is known as sensor resource management (SRM).

Figure 8: Real-time VNIR/MWIR image fusion, detection and tracking of multiple water-borne targets from an airborne platform. Originally hazy daylight TV imagery is enhanced and fused with the MWIR imagery to produce high-quality color fused imagery that supports detection of targets with no false alarms [9]. Tracking is performed using MHT multi-target tracking based on ATIF [14].

Figure 9: (Top) Sensor resource manager (SRM) uses the current set of tracker outputs (positions & uncertainties) to plan the next view (pointing, zoom, mode) in order to optimize a cost function (Bottom). The arrow (in the upper panel) shows the planned shift in view that minimizes the cost function (the bright red spot in the bottom panel).

In support of the DARPA VIVID program, we have developed SRM methods based on approximate stochastic dynamic programming [13] that have been adapted for the multisensor fused imaging approach. The method utilizes a cost function that aims to optimize the number of targets
imaged in a field-of-view, the size of the field-of-view, and the time between revisiting targets, and accounts for a planning horizon that includes not only the next view but following views as well. This is illustrated in Figure 9, where the SRM uses a prediction of multiple target locations from the tracker to plan its next optimal view.

5 Tower-Based Field Experiment

In order to study real-time fusion, detection and tracking of targets in a controlled environment, we have constructed a multisensor pan-tilt-zoom unit and conduct experiments on a tower overlooking an open range. The sensors and tower are shown in Figures 10 & 11. A human observer can also be tasked with detecting and tracking the ground targets being imaged by these sensors, either with direct viewing, enhanced viewing using intensifier tubes or thermal imagers at night, or by observing the monitor displaying the fused imagery from the sensor array.

Figure 10: Multisensor pan-tilt unit carries a daylight RGB zoom camera, an intensified CCD (VNIR) sensor, a SWIR camera, a thermal imager, and a laser range finder (LRF). The pointing of the pan-tilt and zoom, as well as the various sensors, are under control of the SRM.

Figure 11: Test tower, 75 ft high, used for field testing of the fused imaging and active multi-target tracking system.

Figure 12 illustrates the view from the tower, showing the RGB visible, thermal, and fused imagery with target detections overlaid, as well as the target tracks bounded by the instantaneous field-of-view of the sensors. The scene shown is quite simple, with the targets prominently visible. But as the targets become smaller and denser, embedded in background clutter, and of lower contrast in the individual modalities, the task of detecting and tracking multiple targets gets progressively harder. The system will be faced with missed detections, false alarms, occluded targets, and proximity to confusers. This can easily be the case with man-targets moving among the trees at night, similar to those shown in Figure 7. We can expect that as overall image quality degrades in one or more sensor modalities, the task of multi-target detection and tracking becomes harder for both human observers — whether direct viewers, humans with fused goggles, or humans controlling fused turrets — and automated fused sensing & tracking systems. Correlating results between human observers and automated systems remains a challenge for the future.

6 Conclusions

We have previously considered various aspects of fused image quality [1], and in this paper turned to fused video quality in its ability to support detection and tracking of multiple targets. We have developed the capability to do this in real-time, combining methods of opponent-color image fusion, target learning & detection, multi-hypothesis multi-target tracking, and sensor resource management. Examples are shown for controlled tower tests, field imaging at night, and airborne sensing of vessels on the sea surface. It will be necessary to develop metrics of human performance in this task domain, providing observers with joystick control over the pan-tilt unit (or possibly fused goggles), and requiring them to perform wide-area search, detection and tracking of multiple targets. Then, by correlating human performance with automated target detection & tracking performance using the same imaging modalities and image fusion methods, we expect to be able to predict human performance in such tasks under extended operating conditions. This is the subject of our future research on performance metrics using fused video.
Figure 12: Fused RGB/thermal imaging, target detection & tracking of ground vehicles from the test tower shown in Figure 11. Active tracking involves dynamic control of the multisensor pan-tilt unit to support both wide-area search and tracking of multiple targets in the field-of-regard.

References

[1] A. Waxman, D. Fay, P. Ilardi, E. Savoye, R. Biehl and D. Grau, Sensor fused night vision: Assessing image quality in the lab and in the field, Proc. of the 9th International Conference on Information Fusion, Fusion 2007, Florence.
[2] A. Waxman, A. Gove, D. Fay, J. Racamato, J. Carrick, M. Seibert, and E. Savoye, Color night vision: Opponent processing in the fusion of visible and IR imagery, Neural Networks, 10, 1-6.
[3] A. Waxman, D. Fay, E. Savoye, et al., Solid-state color night vision: Fusion of low-light visible and thermal infrared imagery, Lincoln Laboratory Journal, 11(1): 41-60.
[4] M. Aguilar, D. Fay, D. Ireland, J. Racamato, W. Ross, and A. Waxman, Field evaluations of dual-band fusion for color night vision, SPIE-3691, Enhanced and Synthetic Vision.
[5] A. Toet, L. van Ruyven, and J. Valeton, Merging thermal and visual images by a contrast pyramid, Optical Engineering, 28.
[6] A. Toet and J. Walraven, New false color mapping for image fusion, Optical Engineering, 35.
[7] A. Toet, J. IJspeert, A. Waxman, and M. Aguilar, Fusion of visible and thermal imagery improves situational awareness, SPIE-3088, Enhanced and Synthetic Vision.
[8] P. Burt and R. Kolczynski, Enhanced image capture through fusion, Fourth International Conference on Computer Vision, Los Alamitos: IEEE Computer Society Press.
[9] D. Fay, P. Ilardi, N. Sheldon, D. Grau, R. Biehl, and A. Waxman, Real-time image fusion and target learning & detection on a laptop attached processor, Proc. of the 7th International Conference on Information Fusion, Fusion 2005, Stockholm.
[10] G.A. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, and D.B.
Rosen, Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps, IEEE Transactions on Neural Networks, 3.
[11] T. Kasuba, Simplified Fuzzy ARTMAP, AI Expert, Nov. 1993.
[12] Y. Bar-Shalom and X.-R. Li, Multi-target multisensor tracking: Principles and techniques, YBS Inc.
[13] P. Arambel, M. Antone, M. Bosse, J. Silver, J. Krant, and T. Strat, Performance assessment of a video-based air-to-ground multiple target tracker with dynamic sensor control, SPIE Defense and Security Symposium, Signal Processing, Sensor Fusion, and Target Recognition XIV, Orlando.
[14] S. Coraluppi, C. Carthel, M. Luettgen and S. Lynch, All-source track and identity fusion (ATIF), National Symposium on Sensor and Data Fusion, 2000.
More informationV-Sentinel: A Novel Framework for Situational Awareness and Surveillance
V-Sentinel: A Novel Framework for Situational Awareness and Surveillance Suya You Integrated Media Systems Center Computer Science Department University of Southern California March 2005 1 Objective Developing
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationAUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S
AUTONOMOUS IMAGE EXTRACTION AND SEGMENTATION OF IMAGE USING UAV S Radha Krishna Rambola, Associate Professor, NMIMS University, India Akash Agrawal, Student at NMIMS University, India ABSTRACT Due to the
More informationSHIP RECOGNITION USING OPTICAL IMAGERY FOR HARBOR SURVEILLANCE
SHIP RECOGNITION USING OPTICAL IMAGERY FOR HARBOR SURVEILLANCE Dr. Patricia A. Feineigle, Dr. Daniel D. Morris, and Dr. Franklin D. Snyder General Dynamics Robotic Systems, 412-473-2159 (phone), 412-473-2190
More informationMAPPS 2013 Winter Conference 2013 Cornerstone Mapping, Inc. 1
MAPPS 2013 Winter Conference 2013 Cornerstone Mapping, Inc. 1 What is Thermal Imaging? Infrared radiation is perceived as heat Heat is a qualitative measure of temperature Heat is the transfer of energy
More informationCS231N Section. Video Understanding 6/1/2018
CS231N Section Video Understanding 6/1/2018 Outline Background / Motivation / History Video Datasets Models Pre-deep learning CNN + RNN 3D convolution Two-stream What we ve seen in class so far... Image
More informationARTICLE IN PRESS. ARTMAP neural networks for information fusion and data mining: map production and target recognition methodologies
Neural Networks xx (2003) 1 15 www.elsevier.com/locate/neunet ARTMAP neural networks for information fusion and data mining: map production and target recognition methodologies Olga Parsons, Gail A. Carpenter*
More informationOPERATIONAL SHIP DETECTION & RAPID URBAN MAPPING : EXPLORING DIVERSE METHODOLOGICAL APPROACHES IN OBJECT RECOGNTION AND SATELLITE IMAGE CLASSIFICATION
Journées ORFEO Méthodo 10-11 Janvier 2008 OPERATIONAL SHIP DETECTION & RAPID URBAN MAPPING : EXPLORING DIVERSE METHODOLOGICAL APPROACHES IN OBJECT RECOGNTION AND SATELLITE IMAGE CLASSIFICATION Michel Petit,
More informationAtom New 17 Micron Pixel Design! ATOM 1024: Uncooled oled Infrared Camera with XGA Resolution UNCOOLED CORES
UNCOOLED CORES Atom 1024 ATOM 1024: Uncooled oled Infrared Camera with XGA Resolution New 17 Micron Pixel Design! Frame Rate: 30Hz XGA, 60Hz VGA Very Low Power Consumption < 50mK Detector Thermal Sensitivity
More information3.2 Level 1 Processing
SENSOR AND DATA FUSION ARCHITECTURES AND ALGORITHMS 57 3.2 Level 1 Processing Level 1 processing is the low-level processing that results in target state estimation and target discrimination. 9 The term
More informationOptical flow and tracking
EECS 442 Computer vision Optical flow and tracking Intro Optical flow and feature tracking Lucas-Kanade algorithm Motion segmentation Segments of this lectures are courtesy of Profs S. Lazebnik S. Seitz,
More informationPilot Assistive Safe Landing Site Detection System, an Experimentation Using Fuzzy C Mean Clustering
International OPEN ACCESS Journal Of Modern Engineering Research (IJMER) Pilot Assistive Safe Landing Site Detection System, an Experimentation Using Fuzzy C Mean Clustering Jeena Wilson 1 1 Federal Institute
More informationIR Laser Illuminators
Eagle Vision PAN/TILT THERMAL & COLOR CAMERAS - All Weather Rugged Housing resist high humidity and salt water. - Image overlay combines thermal and video image - The EV3000 CCD colour night vision camera
More informationThermal and Optical Cameras. By Philip Smerkovitz TeleEye South Africa
Thermal and Optical Cameras By Philip Smerkovitz TeleEye South Africa phil@teleeye.co.za OPTICAL CAMERAS OVERVIEW Traditional CCTV Camera s (IP and Analog, many form factors). Colour and Black and White
More informationNonlinear Multiresolution Image Blending
Nonlinear Multiresolution Image Blending Mark Grundland, Rahul Vohra, Gareth P. Williams and Neil A. Dodgson Computer Laboratory, University of Cambridge, United Kingdom October, 26 Abstract. We study
More informationPresented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey
Presented at the FIG Congress 2018, May 6-11, 2018 in Istanbul, Turkey Evangelos MALTEZOS, Charalabos IOANNIDIS, Anastasios DOULAMIS and Nikolaos DOULAMIS Laboratory of Photogrammetry, School of Rural
More informationFigure 1: Workflow of object-based classification
Technical Specifications Object Analyst Object Analyst is an add-on package for Geomatica that provides tools for segmentation, classification, and feature extraction. Object Analyst includes an all-in-one
More information10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.
Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic
More informationDesigning a Site with Avigilon Self-Learning Video Analytics 1
Designing a Site with Avigilon Self-Learning Video Analytics Avigilon HD cameras and appliances with self-learning video analytics are easy to install and can achieve positive analytics results without
More informationFast Image Registration via Joint Gradient Maximization: Application to Multi-Modal Data
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Fast Image Registration via Joint Gradient Maximization: Application to Multi-Modal Data Xue Mei, Fatih Porikli TR-19 September Abstract We
More informationENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning
1 ENY-C2005 Geoinformation in Environmental Modeling Lecture 4b: Laser scanning Petri Rönnholm Aalto University 2 Learning objectives To recognize applications of laser scanning To understand principles
More informationAircraft Tracking Based on KLT Feature Tracker and Image Modeling
Aircraft Tracking Based on KLT Feature Tracker and Image Modeling Khawar Ali, Shoab A. Khan, and Usman Akram Computer Engineering Department, College of Electrical & Mechanical Engineering, National University
More informationULISSE COMPACT THERMAL
2015/03/17 OUTDOOR PTZ CAMERA DUAL VISION, DAY/NIGHT AND THERMAL, FOR TOTAL DARKNESS MONITORING IP66 PROTECTION TYPE 4X TYPE 4X THERMAL WIPER INTEGRATED CAM FEATURES Variable speed: 0.1-200 /s Pan/Tilt
More informationLecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013
Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth
More informationCS 4758: Automated Semantic Mapping of Environment
CS 4758: Automated Semantic Mapping of Environment Dongsu Lee, ECE, M.Eng., dl624@cornell.edu Aperahama Parangi, CS, 2013, alp75@cornell.edu Abstract The purpose of this project is to program an Erratic
More informationPortable Long Range Camera System
United Vision Solutions Portable Long Range Camera System Our camera system designed for long range surveillance 24/7 utilized the most advance optical sensors and lenses EV3000-TriPod-EMCCD Hi-Resolution
More informationTarget Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering
Target Tracking Based on Mean Shift and KALMAN Filter with Kernel Histogram Filtering Sara Qazvini Abhari (Corresponding author) Faculty of Electrical, Computer and IT Engineering Islamic Azad University
More informationAppendix III: Ten (10) Specialty Areas - Remote Sensing/Imagry Science Curriculum Mapping to Knowledge Units-RS/Imagry Science Specialty Area
III. Remote Sensing/Imagery Science Specialty Area 1. Knowledge Unit title: Remote Sensing Collection Platforms A. Knowledge Unit description and objective: Understand and be familiar with remote sensing
More informationPortable Long Range Camera System
United Vision Solutions Portable Long Range Camera System Our camera system designed for long range surveillance 24/7 utilized the most advance optical sensors and lenses EV3000-TriPod-EMCCD Hi-Resolution
More informationInternational Journal of Modern Engineering and Research Technology
Volume 4, Issue 3, July 2017 ISSN: 2348-8565 (Online) International Journal of Modern Engineering and Research Technology Website: http://www.ijmert.org Email: editor.ijmert@gmail.com A Novel Approach
More informationThermal Imaging Systems.
www.aselsan.com.tr Thermal Imaging Systems ASELSAN offers superior capabilities to its customers with its Airborne and Naval Thermal Imaging Systems, commonly referred to as Forward Looking Infrared (FLIR).
More informationRange Sensors (time of flight) (1)
Range Sensors (time of flight) (1) Large range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic sensors, infra-red sensors
More informationColor Local Texture Features Based Face Recognition
Color Local Texture Features Based Face Recognition Priyanka V. Bankar Department of Electronics and Communication Engineering SKN Sinhgad College of Engineering, Korti, Pandharpur, Maharashtra, India
More informationCo-ocurrence Matrix for Binary Data. Inputs Outputs. Training Data. Co-ocurrence Matrix for Depth Data
Ecient Indexing for Object Recognition Using Large Networks Mark R. Stevens Charles W. Anderson J. Ross Beveridge Department of Computer Science Colorado State University Fort Collins, CO 80523 fstevensm,anderson,rossg@cs.colostate.edu
More informationIMAGE SEGMENTATION. Václav Hlaváč
IMAGE SEGMENTATION Václav Hlaváč Czech Technical University in Prague Faculty of Electrical Engineering, Department of Cybernetics Center for Machine Perception http://cmp.felk.cvut.cz/ hlavac, hlavac@fel.cvut.cz
More informationSegmentation and Tracking of Partial Planar Templates
Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract
More informationMotion Tracking and Event Understanding in Video Sequences
Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!
More information(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)
Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application
More informationFluke Ti40FT and Ti45FT IR FlexCam Thermal Imagers with IR-Fusion Technology
Thermal Imaging Fluke Ti40FT and Ti45FT IR FlexCam Thermal Imagers with IR-Fusion Technology The versatile choice for maintenance and production engineers and technicians The Fluke Ti4x models feature
More informationCS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves
CS443: Digital Imaging and Multimedia Perceptual Grouping Detecting Lines and Simple Curves Spring 2008 Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines Perceptual Grouping and Segmentation
More information2 Proposed Methodology
3rd International Conference on Multimedia Technology(ICMT 2013) Object Detection in Image with Complex Background Dong Li, Yali Li, Fei He, Shengjin Wang 1 State Key Laboratory of Intelligent Technology
More informationORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION
ORGANIZATION AND REPRESENTATION OF OBJECTS IN MULTI-SOURCE REMOTE SENSING IMAGE CLASSIFICATION Guifeng Zhang, Zhaocong Wu, lina Yi School of remote sensing and information engineering, Wuhan University,
More informationPROBLEM FORMULATION AND RESEARCH METHODOLOGY
PROBLEM FORMULATION AND RESEARCH METHODOLOGY ON THE SOFT COMPUTING BASED APPROACHES FOR OBJECT DETECTION AND TRACKING IN VIDEOS CHAPTER 3 PROBLEM FORMULATION AND RESEARCH METHODOLOGY The foregoing chapter
More information3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots
3D Fusion of Infrared Images with Dense RGB Reconstruction from Multiple Views - with Application to Fire-fighting Robots Yuncong Chen 1 and Will Warren 2 1 Department of Computer Science and Engineering,
More informationLukas Paluchowski HySpex by Norsk Elektro Optikk AS
HySpex Mjolnir the first scientific grade hyperspectral camera for UAV remote sensing Lukas Paluchowski HySpex by Norsk Elektro Optikk AS hyspex@neo.no lukas@neo.no 1 Geology: Rock scanning Courtesy of
More informationCOMBINING HIGH SPATIAL RESOLUTION OPTICAL AND LIDAR DATA FOR OBJECT-BASED IMAGE CLASSIFICATION
COMBINING HIGH SPATIAL RESOLUTION OPTICAL AND LIDAR DATA FOR OBJECT-BASED IMAGE CLASSIFICATION Ruonan Li 1, Tianyi Zhang 1, Ruozheng Geng 1, Leiguang Wang 2, * 1 School of Forestry, Southwest Forestry
More informationCar tracking in tunnels
Czech Pattern Recognition Workshop 2000, Tomáš Svoboda (Ed.) Peršlák, Czech Republic, February 2 4, 2000 Czech Pattern Recognition Society Car tracking in tunnels Roman Pflugfelder and Horst Bischof Pattern
More informationStochastic Road Shape Estimation, B. Southall & C. Taylor. Review by: Christopher Rasmussen
Stochastic Road Shape Estimation, B. Southall & C. Taylor Review by: Christopher Rasmussen September 26, 2002 Announcements Readings for next Tuesday: Chapter 14-14.4, 22-22.5 in Forsyth & Ponce Main Contributions
More informationColor and Shading. Color. Shapiro and Stockman, Chapter 6. Color and Machine Vision. Color and Perception
Color and Shading Color Shapiro and Stockman, Chapter 6 Color is an important factor for for human perception for object and material identification, even time of day. Color perception depends upon both
More informationAccurate Motion Estimation and High-Precision 3D Reconstruction by Sensor Fusion
007 IEEE International Conference on Robotics and Automation Roma, Italy, 0-4 April 007 FrE5. Accurate Motion Estimation and High-Precision D Reconstruction by Sensor Fusion Yunsu Bok, Youngbae Hwang,
More informationPrecise temperature distribution measurement and visualization with our IR-TCM HD Infrared Cameras. SHARING EXCELLENCE
Industry and Mechanical Engineering Precise temperature distribution measurement and visualization with our IR-TCM HD Infrared Cameras. SHARING EXCELLENCE The Jenoptik IR-TCM HD thermal camera series for
More informationBlood Microscopic Image Analysis for Acute Leukemia Detection
I J C T A, 9(9), 2016, pp. 3731-3735 International Science Press Blood Microscopic Image Analysis for Acute Leukemia Detection V. Renuga, J. Sivaraman, S. Vinuraj Kumar, S. Sathish, P. Padmapriya and R.
More informationAutomatic visual recognition for metro surveillance
Automatic visual recognition for metro surveillance F. Cupillard, M. Thonnat, F. Brémond Orion Research Group, INRIA, Sophia Antipolis, France Abstract We propose in this paper an approach for recognizing
More informationPOTENTIAL ACTIVE-VISION CONTROL SYSTEMS FOR UNMANNED AIRCRAFT
26 TH INTERNATIONAL CONGRESS OF THE AERONAUTICAL SCIENCES POTENTIAL ACTIVE-VISION CONTROL SYSTEMS FOR UNMANNED AIRCRAFT Eric N. Johnson* *Lockheed Martin Associate Professor of Avionics Integration, Georgia
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationGoldeye CL-032. Description. Goldeye CL all purpose SWIR camera
Goldeye CL-032 Camera Link SWIR camera Compact industrial design, no fan Simple camera configuration via GenCP Description Goldeye CL-032 - all purpose SWIR camera The Goldeye CL-032 is a very versatile
More informationImage segmentation. Václav Hlaváč. Czech Technical University in Prague
Image segmentation Václav Hlaváč Czech Technical University in Prague Center for Machine Perception (bridging groups of the) Czech Institute of Informatics, Robotics and Cybernetics and Faculty of Electrical
More informationThe ultimate in thermographic evolution
Optical Systems I Lasers & Material Processing I Industrial Metrology I Traffic Solutions I Defense & Civil Systems The ultimate in thermographic evolution Sensor Systems Improved visibility. Improved
More information