OBSTACLE LOCALIZATION IN 3D SCENES FROM STEREOSCOPIC SEQUENCES
Piotr Skulimowski and Paweł Strumiłło Institute of Electronics, Technical University of Łódź, Wólczańska 211/215, Łódź, Poland phone: + (48), fax: + (48), piotr.skulimowski@p.lodz.pl, pawel.strumillo@p.lodz.pl ABSTRACT A method for 3D scene segmentation from stereoscopic image sequences registered by a moving camera system is proposed. The method identifies 6DoF ego-motion parameters first. Then, a scene model is built by projecting a triangular mesh onto the dense disparity maps. The triangles are clustered to form surfaces of separate scene objects, which are assigned to an obstacle category or to context scene objects, such as walls, depending on their location relative to the observer. The method was tested on real stereoscopic scenes and validated on OpenGL-generated 3D virtual scenes. Its advantages are low computing demand and the capability of identifying location and shape parameters of the selected obstacles. The developed system is intended to serve as the vision module of a prototype travel aid system for the blind. 1. INTRODUCTION Vision-based analysis of 3D scenes is an important research topic in robotics, automatic navigation and surveillance systems. It belongs to the class of inverse problems, aimed at inferring 3D scene geometry from 2D multiple-view projections (e.g. stereoscopic images). Recently, a number of novel approaches devoted to the solution of simpler 3D scene reconstruction tasks, such as obstacle detection, tracking of moving objects, or route planning, have been proposed. A combination of binocular stereoscopy and scanning along planes parallel to the ground surface was adopted in [1] for real-time tracking of human silhouettes in 3D scenes. In [2] a similar objective was achieved by combining two vision modules: a monocular one that retrieves feature elements, and a binocular one for depth estimation.
An in-vehicle system for detection of urban-traffic objects within a short range was proposed in [3]. It assumes a planar road surface and proposes an efficient edge-indexed stereo matching algorithm. In [4] an original image segmentation method is proposed that combines depth information and object surface features for region-growing type segmentation. A segmentation technique for detection of scene objects was also proposed in [5]. An assumption was adopted, however, that all objects are located on a common base plane and are of simple shapes. In [6] a scene segmentation algorithm operating in the disparity image (the disparity value is inversely proportional to depth) is proposed that uses a trinocular stereovision system mounted on a vehicle. This kind of system features multiple baselines, which allows more precise results to be achieved than with a two-camera system. In [7], a system for aiding blind people in navigation (the NAVI system) was constructed. Object segmentation is carried out in the image domain. Distances of the detected obstacles, which are verbally communicated to the blind user, are estimated by using disparity techniques. In our work we assume that the stereovision system (further termed the camera system) can move freely in a 3D environment. Segmentation of selected scene objects is performed in the disparity map domain. The proposed method allows for scene partitioning into obstacles and scene context objects such as walls and the ground surface. An advantage of this object-based approach is that stationary scene elements can be continuously tracked even if they are occluded or move out of the field of view of the binocular camera system. 2. OVERVIEW OF THE METHOD The developed algorithm is aimed at scene segmentation for auditory scene display purposes. The camera system is assumed to move freely through a static environment.
Thus, the 6DoF (6 Degrees of Freedom) camera motion parameters, known as ego-motion parameters, can be defined by two vectors: the translational motion vector T = [U V W]^T and the rotational motion vector ω = [α β γ]^T. Figure 1 Camera model in a right-handed coordinate system Our previous studies have shown that it is possible to estimate the ego-motion parameters by means of so-called visual odometry, i.e. by using visual information only [8][9]. These parameters are used in the novel algorithm proposed here for 3D scene segmentation. Its key steps are illustrated in Figure 2.
Figure 2 Overview of the proposed 3D scene analysis method The concept of the algorithm is as follows. Firstly, the disparity map is computed by means of block matching methods. The Sum of Absolute Differences (SAD) between the corresponding pixels in the compared blocks is used as the matching criterion. The identified disparity value is verified by using a two-step computation procedure in which the matchings are first computed by comparing blocks from the left image to blocks in the right image and vice versa. Another approach that has been found effective is analysis of the SAD function shape in the vicinity of the detected minimum [10]. Next, camera system ego-motion estimation is carried out [9]. Finally, stationary scene objects can be tracked in subsequent frames of the moving camera system. The ground and walls are mapped as non-obstacle scene objects if they fall within a predefined 3D orientation angle relative to the observer. Scene objects that do not obey the plane equations are considered obstacles. The obstacles are labelled, and their location and shape features are identified. 3. PLANE DETECTION In the first step of the algorithm the depth image is divided into a regular mesh of isosceles right triangles (see Fig. 3). Within the area of each triangle the criterion for correct depth estimation is verified as indicated in the previous section. This is an important step, because it eliminates triangles containing non-textured surfaces and ones that span different scene objects. Then, 3D coordinates of elementary scene points, indicated by the triangle vertices, are calculated. These points serve for writing the surface equations defined by each of the isosceles triangles, which in turn allows a normal vector to be associated with each triangular surface, as shown in Fig. 4. Figure 4 Normal vectors associated with the constructed triangles Initially the triangles form a regular mesh, as shown in Fig. 3.
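As a rough illustration of the block-matching step described above, the following sketch computes a coarse disparity map with the SAD criterion. It is a deliberately unoptimized brute-force version; `block` and `max_disp` are illustrative parameters, and the two-step left/right verification mentioned in the text would simply run this a second time with the images swapped and compare the results.

```python
import numpy as np

def sad_disparity(left, right, block=8, max_disp=64):
    """Brute-force SAD block matching: for each block of the left image,
    find the horizontal shift into the right image with the lowest
    Sum of Absolute Differences."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.int64)
            best_cost, best_d = None, 0
            # candidate blocks lie to the left of the reference block
            for d in range(0, min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(np.int64)
                cost = np.abs(ref - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```

For a right image that is simply the left image shifted by a few pixels, the recovered block disparities equal the shift (away from the image border), which is a convenient sanity check before moving to real stereo pairs.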
Further, every triangle is compared with each other triangle, and if a pair of triangles fulfils a given similarity measure, they are merged into larger geometric figures, thus forming seeds of local plane regions for the scene object detection procedure. The inner product of the normal vectors associated with the triangles is used as the similarity measure: s = a ∘ b = a_x b_x + a_y b_y + a_z b_z (1) If a given triangle does not match the plane equation (i.e., s < s_T, where s_T is a predetermined threshold), the triangle is removed from the list of plane triangle candidates, but stored for further processing. After several iteration steps (3-5 iterations for the tested scene images) the triangles are labelled as representing the detected planes. In each such step the number of labelled triangles is updated and, concurrently, the plane equation is updated so that the best fit (according to the least mean squares criterion) to the preselected set of triangles is achieved. Finally, the most numerous sets of labelled triangles are considered to form fragments of scene planes. An algorithm for determining the equations of the wall planes and for counting their number was implemented. Due to the poor depth resolution of the disparity image (8 cm stereovision base; note the envisioned application as a travel aid for the blind), the size of the triangle grid should be chosen carefully. Too small a size results in many identified triangles that are falsely positioned perpendicularly to the Z (depth) axis. On the other hand, too large a size complicates plane detection due to the large number of triangle vertices covering different objects and plane regions. Experiments show that a good estimate for the grid size is 16 pixels for the tested depth map resolution. 4. TRACKING OF PLANES IN A 3D SCENE Let a plane existing in a 3D scene be given by equation (2), with the assumption that the origin of the coordinate system associated with the camera system does not belong to this plane.
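The two geometric operations in this step, computing a triangle's unit normal and the inner-product similarity of Eq. (1), plus the least-squares fit of the plane equation Ax + By + Cz + 1 = 0 to a set of vertices, can be sketched as follows. This is a minimal illustration, not the authors' implementation; note that the sign of a cross-product normal depends on vertex winding, so a practical similarity test may use the absolute value of the inner product.

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of the plane spanned by a triangle's three 3D vertices."""
    n = np.cross(np.asarray(p1, float) - p0, np.asarray(p2, float) - p0)
    return n / np.linalg.norm(n)

def similarity(n_a, n_b):
    """Inner product of two unit normals, as in Eq. (1);
    close to 1 for triangles lying on the same plane."""
    return float(np.dot(n_a, n_b))

def fit_plane(points):
    """Least-squares fit of Ax + By + Cz + 1 = 0 (Eq. 2) to a point set,
    i.e. solve P @ [A, B, C] = -1 in the least-squares sense."""
    P = np.asarray(points, float)
    coeffs, *_ = np.linalg.lstsq(P, -np.ones(len(P)), rcond=None)
    return coeffs  # [A, B, C]
```

Merging then amounts to comparing `similarity(...)` against the threshold s_T and refitting the plane with `fit_plane` over the growing set of triangle vertices.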
Ax + By + Cz + 1 = 0 (2) Figure 3 Depth map generated for the scene shown in Fig. 8 and divided into a regular mesh of triangles
Our objective is to track the detected plane, i.e. to update its coefficients. The plane is uniquely defined if its normal vector and the coordinates of a single point belonging to the plane are given. A normal vector of the plane given in (2) is defined by the three coefficients [A, B, C], and a point satisfying the plane equation can simply be found by intersecting the plane with any straight line parallel to the normal vector. Having calculated the camera motion parameters [8], it is possible to transform the scene coordinates (x, y, z) and vector coordinates u = [u_x u_y u_z]^T from the old camera location (for frame t) to the new camera location (for frame t+1), giving (x', y', z') and u' = [u'_x u'_y u'_z]^T:

u'_x = u_x − β u_z + γ u_y
u'_y = u_y − γ u_x + α u_z (3)
u'_z = u_z − α u_y + β u_x

x' = x + Ẋ dt, Ẋ = −U − βz + γy
y' = y + Ẏ dt, Ẏ = −V − γx + αz (4)
z' = z + Ż dt, Ż = −W − αy + βx

where V = [Ẋ Ẏ Ż]^T is the velocity of the scene points in the camera coordinate system [11]. The camera ego-motion parameters obtained for each subsequent image frame allow the plane detection procedure to be skipped, hence reducing the computing demand required for plane tracking. An additional advantage of the proposed plane detection procedure is that the detected planes can still be robustly tracked while large parts of them become occluded by other voluminous obstacles or by objects in movement. Note, however, that for long-term plane tracking the plane detection algorithm needs to be reinitialized periodically. 5. OBSTACLE DETECTION AND LABELLING Once the scene planes are detected, the disparity (depth) image is segmented into two sub-regions, i.e. planes and objects (obstacles). The segmentation relies on testing the following condition for every pixel d(x, y) in the dense disparity map:

|d(x, y) − d_plane(x, y)| > Th (5)

where d_plane(x, y) is the disparity value for points identified as belonging to any of the detected scene planes. The threshold value Th is a constant set experimentally.
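The per-frame update of Eqs. (3) and (4) can be sketched as below. This follows the static-scene model of [11], where the point velocity is V = −T − ω × P; the exact signs depend on the chosen axis conventions, so treat this as an illustrative form rather than a definitive one. Direction vectors such as plane normals are affected only by the rotation.

```python
import numpy as np

def update_point(P, T, omega, dt=1.0):
    """Predict a scene point's coordinates in the next frame's camera
    system from the 6DoF ego-motion parameters (Eq. 4): for a static
    scene the point velocity is V = -T - omega x P."""
    P, T, omega = (np.asarray(v, float) for v in (P, T, omega))
    V = -T - np.cross(omega, P)
    return P + V * dt

def update_normal(n, omega, dt=1.0):
    """Rotate a plane normal by the (small) camera rotation (Eq. 3);
    translation does not change the direction of a vector."""
    n = np.asarray(n, float)
    return n - np.cross(np.asarray(omega, float), n) * dt
```

For a camera translating forward along Z with no rotation, a point 5 units ahead ends up 4 units ahead after one unit of time, which matches the intuition behind Eq. (4).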
If this threshold is too low, then plane regions are falsely segmented as objects; conversely, for too high a threshold, only fragments of the larger scene objects (obstacles) are detected. Images in Figs. 5 and 6 show the detected objects for different threshold values Th for the scene shown in Figure 7. Figure 5 Segmented scene image for Th = 0.7 Figure 6 Segmented scene image for Th = 1.2 Once the image regions corresponding to scene objects (obstacles) are detected, their location in the 3D scene and their approximate size are identified. Object labelling is carried out by applying a region growing procedure to each object region, with a predicate defining the threshold T_ob for disparity variations. If the disparity variations for the analysed region are lower than the threshold, the region pixels are assigned the same label; otherwise, a new object label is created for the region. The regions are built from image blocks of a fixed size in pixels. Results of this segmentation procedure for different threshold values T_ob are shown in Figures 7 and 8. The lower the threshold, the larger the number of false objects detected. For higher threshold values, some objects are glued to the pre-detected planes. To solve this problem we propose to find object seed points first by using higher T_ob values and then to lower this threshold once the object is labelled. This method proved to generate stable labellings of scene objects. Note the superior object labelling results shown in Figure 9 compared to those shown in Figs. 7 and 8. Due to the decreasing computational accuracy for distant feature points, detection of far-away objects is of poor quality. This explains why object no. 5 in Fig. 7 is no longer detected after the parameter update. For the purpose of the considered application, the assumed maximum object distance is intentionally limited to 4-5 m.
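The two stages described above, thresholding against the plane disparities (Eq. 5) and region growing under the T_ob predicate, can be sketched as follows. This is a simplified pixel-level illustration (4-connected growth, one seed per unvisited obstacle pixel), not the block-based implementation of the paper.

```python
import numpy as np
from collections import deque

def segment_obstacles(disp, disp_plane, th):
    """Eq. (5): pixels whose disparity deviates from the detected scene
    planes by more than Th are obstacle candidates."""
    return np.abs(disp - disp_plane) > th

def label_objects(disp, mask, t_ob):
    """Region growing over the obstacle mask: 4-connected neighbours whose
    disparity differs by less than T_ob receive the same object label."""
    h, w = disp.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if not mask[sy, sx] or labels[sy, sx]:
                continue
            next_label += 1                      # new seed -> new object
            labels[sy, sx] = next_label
            q = deque([(sy, sx)])
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and mask[ny, nx]
                            and not labels[ny, nx]
                            and abs(disp[ny, nx] - disp[y, x]) < t_ob):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
    return labels
```

The two-phase trick from the text maps onto this directly: run `label_objects` once with a high T_ob to find stable seeds, then regrow each labelled region with a lower T_ob.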
Figure 7 Object detection for threshold T_ob = 0.7 Figure 10 OpenGL 3D rendering of the scene from Fig. 8 Figures 11 and 12 show analysis results for the outdoor scenes. Figure 8 Object detection for threshold T_ob = 1.4 Figure 11 Outdoor scene segmentation result Figure 9 Object labelling result obtained by using T_ob = 1.4 for initial object detection and T_ob = 0.7 for the region growing procedure 6. RESULTS The proposed algorithm was tested on real-scene stereovision sequences recorded inside and outside the university building. The sequences were recorded using a stereovision camera system (two PointGrey colour digital cameras). Plane and obstacle detection results are visualised using the OpenGL graphics libraries. Detected planes and obstacles are rendered using triangles identified by their corresponding normal vectors. The OpenGL rendering for the image scene illustrated in Fig. 9 is shown in Figure 10. Figure 12 OpenGL 3D rendering of an outdoor scene The described scene segmentation procedure was verified using OpenGL-simulated sequences. The OpenGL package allows not only for generating stereovision sequences for a moving camera system, but also for reproducing disparity maps with subpixel accuracy. We have used this tool for testing our scene obstacle localization method.
8. ACKNOWLEDGEMENTS This work was supported in part by the Ministry of Science and Higher Education of Poland research grant no. 3T11B and in part by the Mechanism for Support of Innovative Ph.D. Student Research (WIDDOK) grant no. Z/2.10/II/2.6/04/05/U/2/06, financed by the European Social Fund and the Polish Ministry of Economy. Figure 13 Test scene rendered using the OpenGL package We have compared the areas of the objects' visible surfaces, computed precisely by the OpenGL package, with the object surface areas estimated from the stereoscopic images by employing the proposed algorithm, in which an object's area is calculated as the sum of the areas of the triangles forming the detected object (see Figure 13). Results of the true and estimated object areas and distances are listed in Table 1. Table 1 Estimation results of the objects' true surface areas and their distances from the camera system; for each object (sphere on the left, sphere on the right, cone) the true and estimated distance and area are given in arbitrary units. 7. CONCLUSIONS A procedure for object detection from depth map sequences has been implemented under the assumption of a freely moving stereovision system within a stationary scene (no objects in motion). The main result is the localization of objects (obstacles) and the estimation of their areas. Also, if there are planar elements within the scene, they are appropriately detected and their orientation relative to the observer is identified. This has an important practical consequence for navigating the system in the environment, since the detected ground plane region indicates scene terrain free of obstacles. The aim of this research is to develop a passive navigation system, capable of obstacle detection, that will serve as a model for an electronic travel aid for the blind. The concept is to convert the detected obstacles into spatialized auditory icons reflecting the size and location of each obstacle within the scene.
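The surface-area estimate used in the Table 1 comparison, summing the areas of the triangles labelled as one object, reduces to a standard cross-product computation. The sketch below is illustrative; the triangle vertex lists would come from the mesh of Section 3.

```python
import numpy as np

def triangle_area(p0, p1, p2):
    """Area of a triangle in 3D: half the magnitude of the cross product
    of two of its edge vectors."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    return 0.5 * np.linalg.norm(np.cross(p1 - p0, p2 - p0))

def object_area(triangles):
    """Visible surface area of a detected object, estimated as the sum of
    the areas of the triangles labelled as belonging to it."""
    return sum(triangle_area(*t) for t in triangles)
```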
Currently, work is under way in which the key elements of the presented scene segmentation procedures are being implemented on a DSP platform for real-time system operation. REFERENCES [1] G. Garibotto and C. Cibei, "3D scene analysis by real-time stereovision," in Proc. IEEE International Conference on Image Processing (ICIP), Poznań, Poland, September, pp. II. [2] D. Lefee, S. Mousset, M. Bertozzi, and A. Bensrhair, "Cooperation of passive vision systems in detection and tracking of pedestrians," in Proc. IEEE Intelligent Vehicles Symposium, Parma, Italy, June 2004. [3] Y. Huang, "Obstacle detection in urban traffic using stereovision," in Proc. Intelligent Transportation Systems, Vienna, Austria, September 2005. [4] J. Fernandez and J. Aranda, "Image segmentation combining region depth and object features," in Proc. 15th International Conference on Pattern Recognition, September 2000, vol. 1. [5] N. F. Kim and Jai Song Park, "Segmentation of object regions using depth information," in Proc. International Conference on Image Processing (ICIP '04), October 2004, vol. 1. [6] B. K. Quek, J. Ibanez-Guzman, and K. W. Lim, "Feature-based perception for autonomous unmanned navigation," in Proc. Annual Conference of the IEEE Industrial Electronics Society (IECON), November. [7] F. Wong, R. Nagarajan, and S. Yaacob, "Application of stereovision in a navigation aid for blind people," in Proc. Joint Conference of the Fourth International Conference on Information, Communications and Signal Processing, 2003 and the Fourth Pacific Rim Conference on Multimedia, December, vol. 2. [8] P. Skulimowski and P. Strumiłło, "Refinement of disparity map sequences from stereo camera ego-motion parameters," International Conference on Signals and Electronic Systems, Łódź, 2006, vol. 1. [9] P. Skulimowski and P. Strumiłło, "Detekcja płaszczyzn w sekwencji obrazów stereowizyjnych" [Plane detection in stereovision image sequences], V Sympozjum Naukowe Techniki Przetwarzania Obrazu, Serock, 2006 (in Polish). [10] H. Mühlmann, D. Maier, R. Hesser, and R. Männer, "Calculating dense disparity maps from colour stereo images, an efficient implementation," in Proc. Workshop on Stereo and Multi-Baseline Vision (SMBV 2001), Kauai, HI, USA, 2001. [11] A. R. Bruss and B. K. P. Horn, "Passive Navigation," Computer Vision, Graphics, and Image Processing, vol. 21, no. 1, pp. 3-20, Jan.
3D-OBJECT DETECTION METHOD BASED ON THE STEREO IMAGE TRANSFORMATION TO THE COMMON OBSERVATION POINT V. M. Lisitsyn *, S. V. Tikhonova ** State Research Institute of Aviation Systems, Moscow, Russia * lvm@gosniias.msk.ru
More informationRobot localization method based on visual features and their geometric relationship
, pp.46-50 http://dx.doi.org/10.14257/astl.2015.85.11 Robot localization method based on visual features and their geometric relationship Sangyun Lee 1, Changkyung Eem 2, and Hyunki Hong 3 1 Department
More informationCalibration of a rotating multi-beam Lidar
The 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems October 18-22, 2010, Taipei, Taiwan Calibration of a rotating multi-beam Lidar Naveed Muhammad 1,2 and Simon Lacroix 1,2 Abstract
More informationImage Warping, mesh, and triangulation CSE399b, Spring 07 Computer Vision
http://grail.cs.washington.edu/projects/rotoscoping/ Image Warping, mesh, and triangulation CSE399b, Spring 7 Computer Vision Man of the slides from A. Efros. Parametric (global) warping Eamples of parametric
More informationWhat and Why Transformations?
2D transformations What and Wh Transformations? What? : The geometrical changes of an object from a current state to modified state. Changing an object s position (translation), orientation (rotation)
More informationLUMS Mine Detector Project
LUMS Mine Detector Project Using visual information to control a robot (Hutchinson et al. 1996). Vision may or may not be used in the feedback loop. Visual (image based) features such as points, lines
More informationAn Optimized Sub-texture Mapping Technique for an Arbitrary Texture Considering Topology Relations
An Optimied Sub-teture Mapping Technique for an Arbitrar Teture Considering Topolog Relations Sangong Lee 1, Cheonshik Kim 2, and Seongah Chin 3,* 1 Department of Computer Science and Engineering, Korea
More informationA New Calibration Method and its Application for the Cooperation of Wide-Angle and Pan-Tilt-Zoom Cameras
Inform. Technol. J., 7 (8): 96-5, 8 Information Technolog Journal 7 (8): 96-5, 8 ISSN 8-568 8 Asian Network for Scientific Information A New Calibration Method and its Application for the Cooperation of
More informationHuman Upper Body Pose Estimation in Static Images
1. Research Team Human Upper Body Pose Estimation in Static Images Project Leader: Graduate Students: Prof. Isaac Cohen, Computer Science Mun Wai Lee 2. Statement of Project Goals This goal of this project
More information3D Photography: Epipolar geometry
3D Photograph: Epipolar geometr Kalin Kolev, Marc Pollefes Spring 203 http://cvg.ethz.ch/teaching/203spring/3dphoto/ Schedule (tentative) Feb 8 Feb 25 Mar 4 Mar Mar 8 Mar 25 Apr Apr 8 Apr 5 Apr 22 Apr
More informationPrecision Peg-in-Hole Assembly Strategy Using Force-Guided Robot
3rd International Conference on Machiner, Materials and Information Technolog Applications (ICMMITA 2015) Precision Peg-in-Hole Assembl Strateg Using Force-Guided Robot Yin u a, Yue Hu b, Lei Hu c BeiHang
More informationA Statistical Consistency Check for the Space Carving Algorithm.
A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper
More information3D-2D Laser Range Finder calibration using a conic based geometry shape
3D-2D Laser Range Finder calibration using a conic based geometry shape Miguel Almeida 1, Paulo Dias 1, Miguel Oliveira 2, Vítor Santos 2 1 Dept. of Electronics, Telecom. and Informatics, IEETA, University
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationVIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD. Ertem Tuncel and Levent Onural
VIDEO OBJECT SEGMENTATION BY EXTENDED RECURSIVE-SHORTEST-SPANNING-TREE METHOD Ertem Tuncel and Levent Onural Electrical and Electronics Engineering Department, Bilkent University, TR-06533, Ankara, Turkey
More informationComputing F class 13. Multiple View Geometry. Comp Marc Pollefeys
Computing F class 3 Multiple View Geometr Comp 90-089 Marc Pollefes Multiple View Geometr course schedule (subject to change) Jan. 7, 9 Intro & motivation Projective D Geometr Jan. 4, 6 (no class) Projective
More informationModeling Transformations
Modeling Transformations Michael Kazhdan (601.457/657) HB Ch. 5 FvDFH Ch. 5 Announcement Assignment 2 has been posted: Due: 10/24 ASAP: Download the code and make sure it compiles» On windows: just build
More informationReal-time Stereo Vision for Urban Traffic Scene Understanding
Proceedings of the IEEE Intelligent Vehicles Symposium 2000 Dearborn (MI), USA October 3-5, 2000 Real-time Stereo Vision for Urban Traffic Scene Understanding U. Franke, A. Joos DaimlerChrylser AG D-70546
More informationA rigid body free to move in a reference frame will, in the general case, have complex motion, which is simultaneously a combination of rotation and
050389 - Analtical Elements of Mechanisms Introduction. Degrees of Freedom he number of degrees of freedom (DOF) of a sstem is equal to the number of independent parameters (measurements) that are needed
More information= (z cos( ) + y sin( )) cos( ) + x sin( )
Plane Segment Finder : Algorithm, Implementation and Applications Kei Okada Satoshi Kagami Masauki Inaba Hirochika Inoue Dept. of Mechano-Informatics, Univ. of Toko. 7{3{1, Hongo, Bunko-ku, Toko, 113{8656,
More information1st frame Figure 1: Ball Trajectory, shadow trajectory and a reference player 48th frame the points S and E is a straight line and the plane formed by
Physics-based 3D Position Analysis of a Soccer Ball from Monocular Image Sequences Taeone Kim, Yongduek Seo, Ki-Sang Hong Dept. of EE, POSTECH San 31 Hyoja Dong, Pohang, 790-784, Republic of Korea Abstract
More informationOBJECTS RECOGNITION BY MEANS OF PROJECTIVE INVARIANTS CONSIDERING CORNER-POINTS.
OBJECTS RECOGNTON BY EANS OF PROJECTVE NVARANTS CONSDERNG CORNER-PONTS. Vicente,.A., Gil,P. *, Reinoso,O., Torres,F. * Department of Engineering, Division of Sstems Engineering and Automatic. iguel Hernandez
More informationQuasi-Euclidean Uncalibrated Epipolar Rectification
Dipartimento di Informatica Università degli Studi di Verona Rapporto di ricerca Research report September 2006 RR 43/2006 Quasi-Euclidean Uncalibrated Epipolar Rectification L. Irsara A. Fusiello Questo
More information#65 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR AT TRAFFIC INTERSECTIONS
#65 MONITORING AND PREDICTING PEDESTRIAN BEHAVIOR AT TRAFFIC INTERSECTIONS Final Research Report Luis E. Navarro-Serment, Ph.D. The Robotics Institute Carnegie Mellon University Disclaimer The contents
More informationEECS 556 Image Processing W 09
EECS 556 Image Processing W 09 Motion estimation Global vs. Local Motion Block Motion Estimation Optical Flow Estimation (normal equation) Man slides of this lecture are courtes of prof Milanfar (UCSC)
More informationAnnouncements. Stereo
Announcements Stereo Homework 2 is due today, 11:59 PM Homework 3 will be assigned today Reading: Chapter 7: Stereopsis CSE 152 Lecture 8 Binocular Stereopsis: Mars Given two images of a scene where relative
More informationTraffic Signs Recognition Experiments with Transform based Traffic Sign Recognition System
Sept. 8-10, 010, Kosice, Slovakia Traffic Signs Recognition Experiments with Transform based Traffic Sign Recognition System Martin FIFIK 1, Ján TURÁN 1, Ľuboš OVSENÍK 1 1 Department of Electronics and
More informationA Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion
A Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion Paper ID:086 Abstract Multi-view approach has been proposed to solve occlusion and lack of visibility
More informationACTIVITY: Representing Data by a Linear Equation
9.2 Lines of Fit How can ou use data to predict an event? ACTIVITY: Representing Data b a Linear Equation Work with a partner. You have been working on a science project for 8 months. Each month, ou measured
More informationFundamental Technologies Driving the Evolution of Autonomous Driving
426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo
More informationHow is project #1 going?
How is project # going? Last Lecture Edge Detection Filtering Pramid Toda Motion Deblur Image Transformation Removing Camera Shake from a Single Photograph Rob Fergus, Barun Singh, Aaron Hertzmann, Sam
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationLIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION
F2008-08-099 LIGHT STRIPE PROJECTION-BASED PEDESTRIAN DETECTION DURING AUTOMATIC PARKING OPERATION 1 Jung, Ho Gi*, 1 Kim, Dong Suk, 1 Kang, Hyoung Jin, 2 Kim, Jaihie 1 MANDO Corporation, Republic of Korea,
More informationLocal Image Registration: An Adaptive Filtering Framework
Local Image Registration: An Adaptive Filtering Framework Gulcin Caner a,a.murattekalp a,b, Gaurav Sharma a and Wendi Heinzelman a a Electrical and Computer Engineering Dept.,University of Rochester, Rochester,
More informationSuguden functions that enable a user. These original functions were first installed in the 2016 summer models of. NTT DOCOMO smartphones.
Tapless Phone Operations b Natural Actions! Suguden Functions Suguden Sensor Motion NTT DOCOMO has developed Suguden functions that enable a user to operate a phone using onl natural actions without having
More informationMarcel Worring Intelligent Sensory Information Systems
Marcel Worring worring@science.uva.nl Intelligent Sensory Information Systems University of Amsterdam Information and Communication Technology archives of documentaries, film, or training material, video
More informationNONCONGRUENT EQUIDISSECTIONS OF THE PLANE
NONCONGRUENT EQUIDISSECTIONS OF THE PLANE D. FRETTLÖH Abstract. Nandakumar asked whether there is a tiling of the plane b pairwise non-congruent triangles of equal area and equal perimeter. Here a weaker
More informationRobot Localization based on Geo-referenced Images and G raphic Methods
Robot Localization based on Geo-referenced Images and G raphic Methods Sid Ahmed Berrabah Mechanical Department, Royal Military School, Belgium, sidahmed.berrabah@rma.ac.be Janusz Bedkowski, Łukasz Lubasiński,
More informationNighttime Pedestrian Ranging Algorithm Based on Monocular Vision
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 16 No 5 Special Issue on Application of Advanced Computing and Simulation in Information Sstems Sofia 016 Print ISSN: 1311-970;
More informationTracking Multiple Objects in 3D. Coimbra 3030 Coimbra target projection in the same image position (usually
Tracking Multiple Objects in 3D Jo~ao P. Barreto Paulo Peioto Coimbra 33 Coimbra 33 Jorge Batista Helder Araujo Coimbra 33 Coimbra 33 Abstract In this paper a system for tracking multiple targets in 3D
More informationMAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration
MAN-522: COMPUTER VISION SET-2 Projections and Camera Calibration Image formation How are objects in the world captured in an image? Phsical parameters of image formation Geometric Tpe of projection Camera
More informationThree-Dimensional Image Security System Combines the Use of Smart Mapping Algorithm and Fibonacci Transformation Technique
Three-Dimensional Image Securit Sstem Combines the Use of Smart Mapping Algorithm and Fibonacci Transformation Technique Xiao-Wei Li 1, Sung-Jin Cho 2, In-Kwon Lee 3 and Seok-Tae Kim * 4 1,4 Department
More informationOmni Stereo Vision of Cooperative Mobile Robots
Omni Stereo Vision of Cooperative Mobile Robots Zhigang Zhu*, Jizhong Xiao** *Department of Computer Science **Department of Electrical Engineering The City College of the City University of New York (CUNY)
More informationMulti-View Stereo for Static and Dynamic Scenes
Multi-View Stereo for Static and Dynamic Scenes Wolfgang Burgard Jan 6, 2010 Main references Yasutaka Furukawa and Jean Ponce, Accurate, Dense and Robust Multi-View Stereopsis, 2007 C.L. Zitnick, S.B.
More information3.1 Functions. The relation {(2, 7), (3, 8), (3, 9), (4, 10)} is not a function because, when x is 3, y can equal 8 or 9.
3. Functions Cubic packages with edge lengths of cm, 7 cm, and 8 cm have volumes of 3 or cm 3, 7 3 or 33 cm 3, and 8 3 or 5 cm 3. These values can be written as a relation, which is a set of ordered pairs,
More information