Spatio-temporal Segmentation using Laserscanner and Video Sequences
Nico Kaempchen, Markus Zocholl and Klaus C.J. Dietmayer
Department of Measurement, Control and Microtechnology, University of Ulm, Ulm, Germany
nico.kaempchen@e-technik.uni-ulm.de

Abstract. Reliable object detection and segmentation is crucial for active-safety driver assistance applications. In urban areas, where the object density is high, a segmentation based on a purely spatial criterion often fails because of small object distances. Therefore, optical flow estimates are combined with the distance measurements of a Laserscanner in order to separate objects with different motions even if the distance between them vanishes. Results are presented on real measurements taken in potentially harmful traffic scenarios.

1 Introduction

The ARGOS project at the University of Ulm aims at a consistent dynamic description of the vehicle's environment for future advanced safety applications such as automatic emergency braking, PreCrash and pedestrian safety. A Laserscanner and a video camera mounted on the test vehicle retrieve the necessary measurements of the vehicle's environment [7].

The Laserscanner acquires a distance profile of the vehicle's environment; each measurement represents an object detection in 3D space. Because of the high reliability of object detection and the accurate distance measurements at a high angular resolution, the Laserscanner is well suited for object detection, tracking and classification [4]. However, there are scenarios, especially in dense urban traffic, where these algorithms fail. The Laserscanner tracking and classification algorithms are based on a segmentation of the measurements, which are clustered with respect to their distance. Objects which are close together are therefore wrongly recognised as a single segment, and object tracking and classification are bound to be incorrect. A similar problem arises in stereo vision.
In [5] stereo vision is combined with optical flow estimates in order to detect moving objects even if they are close to other stationary objects. However, that approach cannot differentiate between two dissimilarly moving objects. Dang et al. developed an elegant Kalman filter implementation for object tracking using stereo vision and optical flow [3]. Their algorithm uses a feature tracking approach and can be used for image segmentation based on the object dynamics. Our approach aims at a correct Laserscanner-based segmentation of objects even if they are close together, by analysing their motion patterns in the video image
domain. The segmentation criterion is therefore based on the distance between Laserscanner measurements and, additionally, on the difference of the associated optical flow estimates in the video images.

2 Sensors

A Laserscanner and a monocular camera are combined in order to enable a reliable environment recognition at distances of up to 80 m. The multi-layer Laserscanner ALASCA (Automotive LAserSCAnner) of the company IBEO Automobile Sensor GmbH (Fig. 1) acquires distance profiles of the vehicle's environment over a horizontal field of view of up to 270° at a variable scan frequency. At 10 Hz the angular resolution is 0.25°, with a single-shot measurement standard deviation of σ = 3 cm, thus enabling a precise distance profile of the vehicle's environment. It uses four scan planes in order to compensate for the pitch angle of the ego vehicle. The Laserscanner ALASCA has been optimised for automotive applications and performs robustly even in adverse weather conditions. The multi-layer Laserscanner is mounted at the front bumper of the test vehicle, which reduces the horizontal field of view to 180°.

Fig. 1. The multi-layer Laserscanner ALASCA (Automotive LAserSCAnner) of the company IBEO Automobile Sensor GmbH.

The monocular camera is mounted behind the windscreen beside the inner rear mirror. The camera is equipped with a 1/2" CCD chip with a standard VGA resolution of 640×480 pixels. With an 8 mm lens a horizontal field of view of 44° is realised at an average angular resolution of 0.07° per pixel. In order to synchronise the sensors, the camera is triggered when the rotating Laserscanner head is aligned with the direction of the optical axis of the camera. The sensors are calibrated in order to enable not only the temporal alignment given by the synchronisation but also a spatial alignment. By means of an accurate synchronisation and calibration, image regions can be associated directly with Laserscanner measurements.
It is therefore possible to assign a distance to certain image regions, which is a major advantage of this fusion approach.
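This association of Laserscanner measurements with image regions can be sketched as a standard pinhole projection. The sketch below uses hypothetical calibration values (identity rotation, an assumed translation between bumper and windscreen); only the 44° field of view and VGA resolution come from the text, and the real system would use the extrinsic parameters obtained from its calibration procedure.

```python
import numpy as np

# Hypothetical extrinsics from the Laserscanner frame to the camera frame
# (sketch values, not the calibration of the actual test vehicle). The laser
# frame is assumed axis-aligned with the camera frame (x right, y down,
# z along the optical axis).
R = np.eye(3)
t = np.array([0.0, -0.5, -1.2])  # scanner at the bumper, camera at the windscreen

# Intrinsics derived from the stated 44° horizontal field of view at 640 px.
f_px = 640 / (2 * np.tan(np.radians(44) / 2))  # focal length in pixels
cx, cy = 320.0, 240.0                          # principal point (image centre)

def project_to_image(p_laser):
    """Project a 3D Laserscanner measurement into pixel coordinates."""
    p_cam = R @ p_laser + t
    if p_cam[2] <= 0:          # point behind the camera: no valid projection
        return None
    u = f_px * p_cam[0] / p_cam[2] + cx
    v = f_px * p_cam[1] / p_cam[2] + cy
    return u, v

uv = project_to_image(np.array([2.0, 0.0, 20.0]))  # a measurement 20 m ahead
```

With an accurate temporal synchronisation, this projection is what allows a distance to be attached to each image region.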
3 Laserscanner-based Segmentation

In order to reduce the amount of data which has to be processed, the Laserscanner measurements are combined into segments. The aim of the segmentation is to generate clusters which each represent one object in reality: optimally there is exactly one segment per object and only one object per segment. This is, however, not always possible to realise.

The segments are created based on a distance criterion. Measurements with a small distance to neighbouring measurements are included in the same segment: both the x and y components of the distance, d_x and d_y, have to be below a certain threshold θ_0. For urban scenarios a sensible choice is θ_0 = 0.7 m.

Especially in urban areas, where the object density is high, two objects might be so close together that all measurements on these objects are combined into a single segment. This is critical for object tracking and classification algorithms which are based on the segmentation. If the measurements of two objects are combined into one single segment, the object tracking cannot estimate the true velocity of the two objects, which is especially severe if the objects exhibit different velocities (Fig. 2). Additionally, a classification of the object type (car, truck, pedestrian, small stationary object and large stationary object) based on the segment dimensions is bound to be incorrect. However, reducing the threshold θ_0 results in an increase of objects which are represented by several segments. This object disintegration is difficult to handle using only Laserscanner measurements. To the authors' knowledge, no real-time Laserscanner object tracking algorithm has yet been suggested which is robust against a strong object disintegration in urban scenarios.

Fig. 2. Laserscanner-based segmentation of a parking scenario at two time instances.
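The distance criterion above can be sketched in a few lines. This is a simplification that only compares consecutive points of an angle-ordered scan (the function name and data are illustrative, not from the paper):

```python
def segment_scan(points, theta0=0.7):
    """Cluster Laserscanner points (x, y): consecutive points whose x and y
    distance components both stay below theta0 join the same segment.
    Points are assumed ordered by scan angle -- a simplification of the
    neighbourhood test described in the text."""
    segments = []
    current = [points[0]]
    for prev, pt in zip(points, points[1:]):
        dx = abs(pt[0] - prev[0])
        dy = abs(pt[1] - prev[1])
        if dx < theta0 and dy < theta0:
            current.append(pt)      # same segment
        else:
            segments.append(current)  # gap exceeds threshold: new segment
            current = [pt]
    segments.append(current)
    return segments

# Two cars about 1.6 m apart yield two segments with theta0 = 0.7 m.
scan = [(10.0, 0.0), (10.1, 0.3), (10.1, 0.6), (10.2, 2.2), (10.2, 2.5)]
print(len(segment_scan(scan)))  # → 2
```

Lowering `theta0` separates closer objects but, as noted above, increases object disintegration.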
4 Spatio-temporal Segmentation using Optical Flow

In order to improve the Laserscanner-based segmentation, which uses a distance criterion, an additional criterion is introduced. Considering two consecutive images, the optical flow can be calculated for image regions which are associated with Laserscanner measurements. Using the optical flow as an additional segmentation criterion enables the differentiation between objects of diverging lateral motions even if they are close together. The optical flow f = (f_u, f_v)^T is calculated with a gradient-based method [1, 8, 6]. In automotive applications, the ego-motion component of the optical flow can be high even when using short measurement intervals. Therefore, a pyramidal optical flow estimation is applied in order to account for large displacements [2]. Two spatio-temporal segmentation algorithms have been developed: the constructive and the destructive segmentation.

4.1 Constructive Segmentation

The constructive approach changes the segmentation distance threshold θ_0 depending on the similarity of the assigned optical flow. Extending the optical flow vector, without loss of generality, with the time dimension,

  \hat{f} = \frac{1}{\sqrt{f_u^2 + f_v^2 + 1}} \begin{pmatrix} f_u \\ f_v \\ 1 \end{pmatrix},   (1)

the similarity of two optical flow vectors \hat{f}_1 and \hat{f}_2 is given by the angle ψ between the vectors [1],

  \psi = \arccos\left( \hat{f}_1 \cdot \hat{f}_2 \right), \quad \psi \in [0, \pi].   (2)

This similarity measure ψ is, however, biased towards large optical flow vectors f. Therefore, the optical flow vectors are normalised with

  \tilde{f} = \frac{2 f}{\|f_1\| + \|f_2\|}   (3)

before applying equations (1) and (2).

The segmentation process is performed as in the Laserscanner-based approach: two Laserscanner measurements are assigned to the same segment if their distance components d_x and d_y are below the threshold θ_0. However, the threshold is now a function of the similarity measure ψ,

  \theta(\psi) = \theta_0 (a \psi + b),   (4)

where a and b are the parameters of a linear transformation of ψ.
The parameters a and b are chosen so that θ(ψ) is increased for similar optical flow vectors
and decreased for dissimilar vectors. If there is no optical flow assigned to the Laserscanner measurement, a threshold of θ(ψ) = θ_0 is chosen.

This segmentation approach performs well if the optical flow vectors can be determined precisely even at the object boundaries, where occlusions occur. As this could not be achieved with the chosen optical flow approach, a second segmentation was developed which is more robust against inaccurate optical flow estimates.

4.2 Destructive Segmentation

The destructive approach is based on the segmentation of Laserscanner measurements described in Section 3. The threshold θ_0 is chosen so that the object disintegration is low. Therefore, measurements on objects which are close together are often assigned to the same segment. In this approach the video images are used to perform a segmentation based on optical flow estimates; the Laserscanner-based and the video-based segmentation are performed individually.

Fig. 3. Optical flow profile assigned to a Laserscanner segment. (a) Laserscanner measurements which are associated with the same Laserscanner segment, (b) the respective image region, (c) the horizontal optical flow component f_u for the four scan layers as a function of the viewing angle α. The dotted horizontal lines indicate the α axis for the individual scan layers.

If the optical flow segmentation indicates the existence of several objects within the image region
of an associated Laserscanner segment, the Laserscanner segment is separated according to the optical flow segments.

Fig. 4. Approximation of the optical flow profile by a set of linear functions. The detected object boundary is indicated with the vertical dashed lines.

Fig. 3 shows a Laserscanner segment and the associated image region of a parking situation; the distant car backs out of a parking space. The optical flow estimation is attention driven and only calculated at image regions which are assigned to a Laserscanner measurement. The horizontal optical flow component f_u for the four scan layers is shown in Fig. 3 (c). This optical flow profile is used for the optical flow based segmentation.

The raw optical flow profile is corrupted by outliers caused by reflections or other effects which violate the assumptions of the brightness change constraint equation [6]. Therefore, a median filter is applied to the optical flow estimates in order to reduce the number of outliers.

The object boundaries appear in the optical flow profile as discontinuities. In order to detect these discontinuities, the profile is approximated by a set of linear functions (Fig. 4). Initially, the optical flow profile is represented by a single line segment L_i. Recursively, a line segment is split into two if the maximal distance d(α, L_i) of the optical flow profile to the line segment exceeds a threshold κ,

  d(\alpha, L_i) > \kappa(\|f\|).   (5)

The threshold κ is motivated by the noise in the optical flow estimates, which is a function of the magnitude of the optical flow vector f, and by the expected errors caused by violations of the brightness change constraint equation.

The gradients m(L_i, n) of the line segments L_i of the individual scan layers n are combined into an averaged estimate m(L_i), after deletion of potential outliers,

  m(L_i) = \frac{1}{N} \sum_n m(L_i, n),   (6)

where N is the number of scan layers.
Object boundaries are classified based on the averaged gradient of the line segments m(L_i), with

  |m(L_i)| > m_{max},   (7)

where m_max is the maximal allowable steepness for a line segment of a single rigid object.
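The recursive split behind eqs. (5)–(7) can be sketched as follows (a fixed threshold κ and illustrative data; in the paper κ depends on the flow magnitude, and the split runs per scan layer before the gradients are averaged):

```python
def split_profile(alpha, fu, kappa=0.5):
    """Recursively approximate an optical flow profile fu(alpha) by line
    segments, as in eq. (5): a segment is split at the sample with the
    largest vertical distance to the chord once that distance exceeds
    kappa. Returns (start, end) index pairs of the line segments."""
    def rec(i, j):
        if j - i < 2:
            return [(i, j)]            # too short to split further
        slope = (fu[j] - fu[i]) / (alpha[j] - alpha[i])
        dists = [abs(fu[k] - (fu[i] + slope * (alpha[k] - alpha[i])))
                 for k in range(i, j + 1)]
        k = i + max(range(len(dists)), key=dists.__getitem__)
        if dists[k - i] <= kappa:      # chord fits the profile well enough
            return [(i, j)]
        return rec(i, k) + rec(k, j)   # split at the worst-fitting sample
    return rec(0, len(fu) - 1)

# A step in the profile (e.g. a car backing out next to a parked car):
# two near-constant segments plus one steep segment at the boundary.
alpha = [0, 1, 2, 3, 4, 5, 6, 7]
fu    = [0, 0, 0, 0, 5, 5, 5, 5]
print(split_profile(alpha, fu))  # → [(0, 3), (3, 4), (4, 7)]
```

A boundary would then be declared wherever the averaged segment gradient exceeds m_max, as in eq. (7); in this toy profile that is the middle segment with slope 5.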
The destructive segmentation assumes objects to be rigid and object boundaries to be mainly vertical in the image domain. In the parking scenarios chosen for evaluation purposes these assumptions are perfectly met.

5 Results

The presented segmentation algorithms were evaluated on parking scenarios. In all scenarios a car backs out of a parking lot. The speed of the ego vehicle varies across the scenarios, which introduces an additional optical flow component with increasing magnitude towards the image borders. Twelve scenarios were investigated with both segmentation approaches and compared to the Laserscanner segmentation. The focus was on the car which backs out and its neighbouring car. The time stamp and the position of the moving car were recorded when the two objects were first continuously separated by the Laserscanner approach. Then, the time stamps and positions for the two other approaches were noted. The averages of the differences in time and position of the optical flow based approaches with respect to the Laserscanner segmentation are summarised in Table 1.

Table 1. Gained time and the respective covered distance of the car backing out of the parking lot.

              Constructive                 Destructive
              Time [sec]   Distance [m]    Time [sec]   Distance [m]

On average the optical flow based segmentations detect the moving car as an individual object 2.2 sec (2.3 sec) earlier than the Laserscanner segmentation. This gained time corresponds to a covered distance of the car backing out of the parking lot of 1.5 m (1.6 m).

The two spatio-temporal segmentations perform similarly in terms of an early separation of the two objects of different lateral speeds. However, the more general constructive approach exhibits a higher degree of object disintegration: the two objects are often represented by more than two segments. This is due to inaccuracies in the optical flow estimation, especially at object borders.
The destructive approach is less general, as it takes only the horizontal optical flow component into account. This is, however, the main motion component of cars moving laterally to the sensor's viewing direction and is therefore sufficient with respect to the application. The filtering and the region-based linear approximation of the optical flow estimates make the algorithm more robust against inaccuracies in the optical flow estimation. The result is a very low degree of object disintegration.
Further examination of the results showed that the performance depends on two main factors, independently of the chosen algorithm. First, the performance decreases with increasing velocity of the ego vehicle, as the optical flow artefacts and the noise rise and therefore the SNR decreases. Second, the performance depends on the velocity difference tangential to the viewing direction between the close objects: the higher the velocity difference, the better the performance. In the scenario of a car backing out of a parking lot, the performance depends directly on its velocity, which varies in the experiments between 1 and 9 km/h.

6 Conclusion

Two spatio-temporal segmentation approaches have been presented. Based on Laserscanner measurements and optical flow estimates of associated image regions, a robust segmentation of objects is enabled even if the objects are close together. In potentially harmful situations the correct segmentation allows a precise tracking of moving objects. The accurate segmentation, and therefore tracking, is an essential prerequisite for a reliable prediction of objects in dynamic scenarios for active safety systems in future cars, such as automatic emergency braking.

References

1. J. Barron, D. Fleet, S. Beauchemin, and T. Burkitt, "Performance of optical flow techniques," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, 1992.
2. J.-Y. Bouguet, "Pyramidal implementation of the Lucas-Kanade feature tracker," Intel Corporation, Microprocessor Research Labs, Tech. Rep.
3. T. Dang, C. Hoffmann, and C. Stiller, "Fusing optical flow and stereo disparity for object tracking," in IEEE 5th International Conference on Intelligent Transportation Systems, Singapore, September 2002.
4. K. C. Fuerstenberg and K. C. J. Dietmayer, "Object tracking and classification for multiple active safety and comfort applications using multilayer laserscanners," in Proceedings of IV 2004, IEEE Intelligent Vehicles Symposium, Parma, Italy, June 2004.
5. S. Heinrich, "Real time fusion of motion and stereo using flow/depth constraint for fast obstacle detection," in Pattern Recognition, 24th DAGM Symposium, ser. Lecture Notes in Computer Science, L. J. V. Gool, Ed. Zurich, Switzerland: Springer, September 2002.
6. B. Jähne, H. Haußecker, and P. Geißler, Eds., Handbook of Computer Vision and Applications. Academic Press.
7. N. Kaempchen, K. Fuerstenberg, A. Skibicki, and K. Dietmayer, "Sensor fusion for multiple automotive active safety and comfort applications," in 8th International Forum on Advanced Microsystems for Automotive Applications, Berlin, Germany, March 2004.
8. J. Shi and C. Tomasi, "Good features to track," in IEEE Conference on Computer Vision and Pattern Recognition, 1994.
Tightly-Coupled LIDAR and Computer Vision Integration for Vehicle Detection Lili Huang, Student Member, IEEE, and Matthew Barth, Senior Member, IEEE Abstract In many driver assistance systems and autonomous
More informationA MOTION MODEL BASED VIDEO STABILISATION ALGORITHM
A MOTION MODEL BASED VIDEO STABILISATION ALGORITHM N. A. Tsoligkas, D. Xu, I. French and Y. Luo School of Science and Technology, University of Teesside, Middlesbrough, TS1 3BA, UK E-mails: tsoligas@teihal.gr,
More information3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera
3D Environment Measurement Using Binocular Stereo and Motion Stereo by Mobile Robot with Omnidirectional Stereo Camera Shinichi GOTO Department of Mechanical Engineering Shizuoka University 3-5-1 Johoku,
More informationChapter 3 Image Registration. Chapter 3 Image Registration
Chapter 3 Image Registration Distributed Algorithms for Introduction (1) Definition: Image Registration Input: 2 images of the same scene but taken from different perspectives Goal: Identify transformation
More informationCOMPUTER VISION > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE
COMPUTER VISION 2017-2018 > OPTICAL FLOW UTRECHT UNIVERSITY RONALD POPPE OUTLINE Optical flow Lucas-Kanade Horn-Schunck Applications of optical flow Optical flow tracking Histograms of oriented flow Assignment
More informationError Analysis of Feature Based Disparity Estimation
Error Analysis of Feature Based Disparity Estimation Patrick A. Mikulastik, Hellward Broszio, Thorsten Thormählen, and Onay Urfalioglu Information Technology Laboratory, University of Hannover, Germany
More informationStereo Scene Flow for 3D Motion Analysis
Stereo Scene Flow for 3D Motion Analysis Andreas Wedel Daniel Cremers Stereo Scene Flow for 3D Motion Analysis Dr. Andreas Wedel Group Research Daimler AG HPC 050 G023 Sindelfingen 71059 Germany andreas.wedel@daimler.com
More informationDense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera
Dense 3-D Reconstruction of an Outdoor Scene by Hundreds-baseline Stereo Using a Hand-held Video Camera Tomokazu Satoy, Masayuki Kanbaray, Naokazu Yokoyay and Haruo Takemuraz ygraduate School of Information
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationCS 565 Computer Vision. Nazar Khan PUCIT Lectures 15 and 16: Optic Flow
CS 565 Computer Vision Nazar Khan PUCIT Lectures 15 and 16: Optic Flow Introduction Basic Problem given: image sequence f(x, y, z), where (x, y) specifies the location and z denotes time wanted: displacement
More informationTransactions on Information and Communications Technologies vol 16, 1996 WIT Press, ISSN
ransactions on Information and Communications echnologies vol 6, 996 WI Press, www.witpress.com, ISSN 743-357 Obstacle detection using stereo without correspondence L. X. Zhou & W. K. Gu Institute of Information
More informationFinally: Motion and tracking. Motion 4/20/2011. CS 376 Lecture 24 Motion 1. Video. Uses of motion. Motion parallax. Motion field
Finally: Motion and tracking Tracking objects, video analysis, low level motion Motion Wed, April 20 Kristen Grauman UT-Austin Many slides adapted from S. Seitz, R. Szeliski, M. Pollefeys, and S. Lazebnik
More informationFeature Tracking and Optical Flow
Feature Tracking and Optical Flow Prof. D. Stricker Doz. G. Bleser Many slides adapted from James Hays, Derek Hoeim, Lana Lazebnik, Silvio Saverse, who 1 in turn adapted slides from Steve Seitz, Rick Szeliski,
More informationStructure Analysis Based Parking Slot Marking Recognition for Semi-automatic Parking System
Structure Analysis Based Parking Slot Marking Recognition for Semi-automatic Parking System Ho Gi Jung 1, 2, Dong Suk Kim 1, Pal Joo Yoon 1, and Jaihie Kim 2 1 MANDO Corporation Central R&D Center, Advanced
More informationFree Space Detection on Highways using Time Correlation between Stabilized Sub-pixel precision IPM Images
Free Space Detection on Highways using Time Correlation between Stabilized Sub-pixel precision IPM Images Pietro Cerri and Paolo Grisleri Artificial Vision and Intelligent System Laboratory Dipartimento
More informationCalibration of a rotating multi-beam Lidar
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 FDH 204 Lecture 14 130307 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Stereo Dense Motion Estimation Translational
More informationAdvanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module
Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module www.lnttechservices.com Table of Contents Abstract 03 Introduction 03 Solution Overview 03 Output
More informationSelf-calibration of an On-Board Stereo-vision System for Driver Assistance Systems
4-1 Self-calibration of an On-Board Stereo-vision System for Driver Assistance Systems Juan M. Collado, Cristina Hilario, Arturo de la Escalera and Jose M. Armingol Intelligent Systems Lab Department of
More informationIN computer vision develop mathematical techniques in
International Journal of Scientific & Engineering Research Volume 4, Issue3, March-2013 1 Object Tracking Based On Tracking-Learning-Detection Rupali S. Chavan, Mr. S.M.Patil Abstract -In this paper; we
More information차세대지능형자동차를위한신호처리기술 정호기
차세대지능형자동차를위한신호처리기술 008.08. 정호기 E-mail: hgjung@mando.com hgjung@yonsei.ac.kr 0 . 지능형자동차의미래 ) 단위 system functions 운전자상황인식 얼굴방향인식 시선방향인식 졸음운전인식 운전능력상실인식 차선인식, 전방장애물검출및분류 Lane Keeping System + Adaptive Cruise
More informationCHAPTER 5 MOTION DETECTION AND ANALYSIS
CHAPTER 5 MOTION DETECTION AND ANALYSIS 5.1. Introduction: Motion processing is gaining an intense attention from the researchers with the progress in motion studies and processing competence. A series
More informationSpatial Outlier Detection
Spatial Outlier Detection Chang-Tien Lu Department of Computer Science Northern Virginia Center Virginia Tech Joint work with Dechang Chen, Yufeng Kou, Jiang Zhao 1 Spatial Outlier A spatial data point
More informationComparison Between The Optical Flow Computational Techniques
Comparison Between The Optical Flow Computational Techniques Sri Devi Thota #1, Kanaka Sunanda Vemulapalli* 2, Kartheek Chintalapati* 3, Phanindra Sai Srinivas Gudipudi* 4 # Associate Professor, Dept.
More informationAccurate and Dense Wide-Baseline Stereo Matching Using SW-POC
Accurate and Dense Wide-Baseline Stereo Matching Using SW-POC Shuji Sakai, Koichi Ito, Takafumi Aoki Graduate School of Information Sciences, Tohoku University, Sendai, 980 8579, Japan Email: sakai@aoki.ecei.tohoku.ac.jp
More informationAccurate 3D Face and Body Modeling from a Single Fixed Kinect
Accurate 3D Face and Body Modeling from a Single Fixed Kinect Ruizhe Wang*, Matthias Hernandez*, Jongmoo Choi, Gérard Medioni Computer Vision Lab, IRIS University of Southern California Abstract In this
More informationMachine learning based automatic extrinsic calibration of an onboard monocular camera for driving assistance applications on smart mobile devices
Technical University of Cluj-Napoca Image Processing and Pattern Recognition Research Center www.cv.utcluj.ro Machine learning based automatic extrinsic calibration of an onboard monocular camera for driving
More informationAcoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios
Sensor Fusion for Car Tracking Acoustic/Lidar Sensor Fusion for Car Tracking in City Traffic Scenarios, Daniel Goehring 1 Motivation Direction to Object-Detection: What is possible with costefficient microphone
More informationRealtime On-Road Vehicle Detection with Optical Flows and Haar-Like Feature Detectors
Realtime On-Road Vehicle Detection with Optical Flows and Haar-Like Feature Detectors Jaesik Choi Department of Computer Science University of Illinois at Urbana-Champaign Urbana, IL 61801 Abstract An
More informationOptical Flow-Based Motion Estimation. Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides.
Optical Flow-Based Motion Estimation Thanks to Steve Seitz, Simon Baker, Takeo Kanade, and anyone else who helped develop these slides. 1 Why estimate motion? We live in a 4-D world Wide applications Object
More informationDEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION
2012 IEEE International Conference on Multimedia and Expo Workshops DEPTH AND GEOMETRY FROM A SINGLE 2D IMAGE USING TRIANGULATION Yasir Salih and Aamir S. Malik, Senior Member IEEE Centre for Intelligent
More informationAdaptive Multi-Stage 2D Image Motion Field Estimation
Adaptive Multi-Stage 2D Image Motion Field Estimation Ulrich Neumann and Suya You Computer Science Department Integrated Media Systems Center University of Southern California, CA 90089-0781 ABSRAC his
More informationFACULTATEA DE AUTOMATICĂ ŞI CALCULATOARE. Ing. Silviu BOTA PHD THESIS MOTION DETECTION AND TRACKING IN 3D IMAGES
FACULTATEA DE AUTOMATICĂ ŞI CALCULATOARE Ing. Silviu BOTA PHD THESIS MOTION DETECTION AND TRACKING IN 3D IMAGES Comisia de evaluare a tezei de doctorat: Conducător ştiinţific, Prof.dr.ing. Sergiu NEDEVSCHI
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationMotion. 1 Introduction. 2 Optical Flow. Sohaib A Khan. 2.1 Brightness Constancy Equation
Motion Sohaib A Khan 1 Introduction So far, we have dealing with single images of a static scene taken by a fixed camera. Here we will deal with sequence of images taken at different time intervals. Motion
More informationComputer Vision and Pattern Recognition in Homeland Security Applications 1
Computer Vision and Pattern Recognition in Homeland Security Applications 1 Giovanni B. Garibotto Elsag spa Genova, Italy giovanni.garibotto@elsagdatamat.com Abstract. The tutorial will summarize the status
More informationON 3D-BEAMFORMING IN THE WIND TUNNEL
BeBeC-2016-S10 ON 3D-BEAMFORMING IN THE WIND TUNNEL Dirk Döbler 1, Jörg Ocker 2, Dr. Christof Puhle 1 1 GFaI e.v., Volmerstraße 3, 12489 Berlin, Germany 2 Dr. Ing. h.c.f. Porsche AG, 71287 Weissach, Germany
More informationEfficient L-Shape Fitting for Vehicle Detection Using Laser Scanners
Efficient L-Shape Fitting for Vehicle Detection Using Laser Scanners Xiao Zhang, Wenda Xu, Chiyu Dong, John M. Dolan, Electrical and Computer Engineering, Carnegie Mellon University Robotics Institute,
More informationAdaption of Robotic Approaches for Vehicle Localization
Adaption of Robotic Approaches for Vehicle Localization Kristin Schönherr, Björn Giesler Audi Electronics Venture GmbH 85080 Gaimersheim, Germany kristin.schoenherr@audi.de, bjoern.giesler@audi.de Alois
More informationW4. Perception & Situation Awareness & Decision making
W4. Perception & Situation Awareness & Decision making Robot Perception for Dynamic environments: Outline & DP-Grids concept Dynamic Probabilistic Grids Bayesian Occupancy Filter concept Dynamic Probabilistic
More informationLecture 19: Motion. Effect of window size 11/20/2007. Sources of error in correspondences. Review Problem set 3. Tuesday, Nov 20
Lecture 19: Motion Review Problem set 3 Dense stereo matching Sparse stereo matching Indexing scenes Tuesda, Nov 0 Effect of window size W = 3 W = 0 Want window large enough to have sufficient intensit
More informationAN ADAPTIVE MESH METHOD FOR OBJECT TRACKING
AN ADAPTIVE MESH METHOD FOR OBJECT TRACKING Mahdi Koohi 1 and Abbas Shakery 2 1 Department of Computer Engineering, Islamic Azad University, Shahr-e-Qods Branch,Tehran,Iran m.kohy@yahoo.com 2 Department
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation I. EE 264: Image Processing and Reconstruction. Outline
1 Image Motion Estimation I 2 Outline 1. Introduction to Motion 2. Why Estimate Motion? 3. Global vs. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical Flow Estimation
More information