A dioptric stereo system for robust real-time people tracking
Proceedings of the IEEE ICRA 2009 Workshop on People Detection and Tracking, Kobe, Japan, May 2009

Ester Martínez and Angel P. del Pobil
Robotic Intelligence Lab, Engineering and Computer Science Department, Jaume-I University, Castellón, Spain
{emartine, pobil}

Abstract: We address and solve a number of problems in the context of a robot surveillance system based on a pair of dioptric (fisheye) cameras. These cameras provide a hemispherical field of view that covers the whole robot workspace, with some advantages over catadioptric systems, but there is little previous work about them. Hence, we had to devise and implement a number of novel techniques to achieve robust tracking of moving objects in dynamic, unknown environments from color image sequences in real time. In particular, we present a new two-phase adaptive background model that exhibits robust performance under unexpected changes in the scene, such as sudden illumination changes, blinking of computer screens, shadows, or changes induced by camera motion or sensor noise. The system is also capable of tracking the detected objects when they are not in movement. We also deal with fisheye camera calibration to estimate both intrinsic and extrinsic parameters, as well as with the estimation of the distance between the system and the detected objects with our dioptric stereo system. Experimental results are reported.

Robotics research, since its beginning, has always focused on building robots which help human beings in their daily tasks while both coexist in the same environment. This means that new robot generations have to deal with dynamic, unknown environments, unlike industrial robots, which act in restricted, controlled, well-known environments. For that reason, one of the key issues in this context is to be aware of what is happening around the robot.
In fact, robot performance in any real environment requires detecting people and/or other objects in the robot's workspace, particularly if they are moving. On the one hand, interaction tasks require detection and identification of the objects with which to interact. On the other hand, the safety of all elements present in the robot workspace should be guaranteed at all times, especially when they are human beings. Thus, it is important that the robot quickly detects the presence of any moving element, so that it can properly react to its movements. Among the available robot sensors, cameras are well suited to this goal, since they are an important source of information. Nevertheless, it is not straightforward to successfully deal with a non-constrained environment using traditional cameras, due to their limited field of view. This constraint cannot be removed by combining several images captured by rotating a camera, or by strategically positioning a set of cameras, because feature correspondences must then be established across many images at every instant. This processing entails a high computational cost, which makes such approaches fail for real-time tasks. An effective alternative is to combine mirrors with conventional imaging systems [1] [2] [3]. The resulting devices are called catadioptric systems. Moreover, if there is a single viewpoint, they are referred to as central catadioptric systems [4]. A single viewpoint is a desirable feature in such imaging systems, since it yields a well-defined world-to-image mapping: all rays between a 3D point and its projection on the image plane pass through a single point in 3D space. Conventional perspective cameras are single-viewpoint devices, for example. Although central catadioptric imaging can be highly advantageous, these systems unfortunately exhibit a dead area in the centre of the image, which can be an important drawback in some applications.
With the aim of overcoming the above drawbacks, a dioptric system was used. Dioptric systems, also called fisheye cameras, combine a fisheye lens with a conventional camera [4] [5]. The conventional lens is replaced by a fisheye lens whose short focal length allows the camera to see objects in a whole hemisphere. Although fisheye devices present several advantages over catadioptric sensors, such as the absence of dead areas in the captured images, no unified model exists for this kind of camera, unlike for central catadioptric ones [6]. In this work, we have focused on dioptric systems to implement a robot surveillance application for fast and robust tracking of moving objects in dynamic, unknown environments. Although our final goal is to design an autonomous, mobile manipulation robot system, here we present the first stage: novel techniques for robust tracking of moving objects in dynamic, unknown environments from color image sequences, such that manipulation tasks can be safely performed in real time when the robot system is not moving. For that, three related problems have been tackled:
- moving object detection
- object tracking
- estimation of the distance from the system to the detected objects
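Although no unified fisheye model exists, a commonly used approximation is the equidistant projection, in which the image radius grows linearly with the angle from the optical axis (r = f * theta). The following is a minimal, purely illustrative sketch of that model, not necessarily the model of the cameras used here:

```python
import numpy as np

def equidistant_project(point_3d, f, cx, cy):
    """Project a 3D point (camera coordinates, Z along the optical axis)
    onto a fisheye image using the equidistant model r = f * theta."""
    x, y, z = point_3d
    theta = np.arctan2(np.hypot(x, y), z)   # angle from the optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta                           # equidistant mapping
    return (cx + r * np.cos(phi), cy + r * np.sin(phi))

# A point 45 degrees off-axis lands at radius f * pi / 4 from the centre:
u, v = equidistant_project((1.0, 0.0, 1.0), f=200.0, cx=320.0, cy=240.0)
```

Note that a ray 90 degrees off-axis still maps to a finite radius (f * pi / 2), which is why a whole hemisphere fits inside the image circle, in contrast with the perspective model, where the radius diverges.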
First of all, a new robust adaptive background model has been designed. It allows the system to adapt to unexpected changes in the scene, such as sudden illumination changes, blinking of computer screens, shadows, or changes induced by camera motion or sensor noise. Then, the tracking process from the two omnidirectional images takes place. Finally, the distance between the system and the detected objects must be estimated by an additional method; in this case, information about the 3D localization of the detected objects with respect to the system is obtained from a dioptric stereo system. Thus, the structure of this paper is as follows: the new robust adaptive background model is described in Section I, while the tracking process is introduced in Section II. An epipolar geometry study of a dioptric stereo system is presented in Section III. Experimental results are presented in Section IV, and conclusions are drawn in Section V.

I. MOVING OBJECT DETECTION
Research in human and object detection has taken a number of forms. Well-known segmentation techniques for a captured image are thresholding and frame subtraction. However, on the one hand, threshold selection is difficult when working in an unknown, dynamic environment where the targets to track can have very different features. The uncertainty inherent in such working conditions also makes automatic threshold-selection methods, mainly based on histogram properties, fail [7]. Other experiments towards a robotic assistant, in which a person is detected and then followed by a mobile robot, have also been carried out [8] [9] [10] [11]. Nevertheless, in spite of the fact that the existing algorithms are fast and easy to use, the image processing for object identification is very limited, since it is color- and/or face-based.
This restricts their utility, because it is not viable to track only objects of a particular color, which moreover has to be significantly different from the background, or to constrain people to always face the vision system. On the other hand, although the image difference method provides a good detection of changing regions in an image, it is important to pay attention to several uncontrolled changes in the system environment which can produce multiple false detections and make the system fail. These dynamic, uncontrolled changes can be divided into:
- minor dynamic factors, such as blinking of computer screens, shadows, mirror images on glass windows, curtain movement or waving trees, as well as changes induced by camera motion, sensor noise, non-uniform attenuation or atmospheric absorption, among other factors
- sudden changes in illumination, such as switching a light on/off or opening/closing a window

Different research has been developed to adapt to these changes. One of the most common approaches is background subtraction, which has been proposed by several researchers [12] [13] [14] [15] [16]. Basically, a background model, built after observing the scene for several seconds, is used to identify moving objects by thresholding each new frame with respect to it. However, this approach presents two important drawbacks:
- everything observed while the background model is being built is considered background
- it is assumed that no sudden change in illumination occurs during the whole experiment

It should also be noted that, unlike most approaches, in which each background pixel is represented by a single Gaussian distribution, Stauffer and Grimson [17] presented adaptive background mixture models. However, as pointed out in [18], some issues still have to be solved. Therefore, a novel algorithm is proposed here. It is divided into two different phases, as can be seen in Figs.
1-2:
1) In the first phase, an initial background model is obtained by observing the scene for several seconds. However, unlike most background estimation algorithms, an additional technique for monitoring the activity within the robot workspace runs at the same time. To reduce the computational and time cost, this monitoring is performed by means of a simple difference technique; in that way, there is no danger of harming people who approach the robot while the initial model is being built. Thus, in this phase, a simple frame-difference approach detects moving objects within the robot workspace. Then, two consecutive morphological operations are applied to erase isolated points or lines caused by the dynamic factors mentioned above. At this point, two different tasks are carried out:
- On the one hand, the adaptive background model is updated with the values of the pixels classified as background, in order to adapt it to small changes which do not represent targets.
- On the other hand, a tracking process, explained in the next sections, is performed.
2) In the second phase, the moving object detection and identification process starts. When a human or another moving object enters the room where the robot is, it is detected by means of a two-level processing:
- At the pixel level, the adaptive background model, initially built in the previous phase, is used to classify pixels as foreground or background. This is possible because each pixel belonging to a moving object has an intensity value which does not fit the background model. That is, the background model associates a statistical distribution (defined by its mean color value and its variance) with each pixel of the image. Then, when an object of interest enters and/or moves around the robot workspace, there will be a difference between the background model values and the object's pixel values.
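This per-pixel statistical test, together with the update of the model for pixels classified as background, can be sketched as follows. This is a minimal NumPy sketch for grayscale frames; the array names, the learning rate alpha and the exact update rule are our assumptions, not taken from the paper:

```python
import numpy as np

def classify_and_update(frame, mean, var, k=2.5, alpha=0.05):
    """Per-pixel statistical background test: a pixel is foreground when its
    brightness deviates from the model mean by more than k standard deviations.
    Background pixels are blended into the running mean/variance so the model
    adapts to slow changes that do not represent targets."""
    std = np.sqrt(var)
    foreground = np.abs(frame - mean) > k * std
    bg = ~foreground
    diff = frame - mean
    # Update the model only where the pixel was classified as background.
    mean = np.where(bg, mean + alpha * diff, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    return foreground, mean, np.maximum(var, 1e-6)

mean = np.full((4, 4), 100.0)        # toy 4x4 model: mean brightness 100
var = np.full((4, 4), 4.0)           # standard deviation 2
frame = mean.copy()
frame[0, 0] = 150.0                  # one pixel deviates strongly
fg, mean, var = classify_and_update(frame, mean, var)
```

In this toy run only the deviating pixel is marked foreground, and its statistics are left untouched so a passing object does not corrupt the model.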
Formally, a criterion based on the stored statistical information is defined to perform this classification; it can be expressed as follows:
b(r, c) = { 1 if |i(r, c) - μ_{r,c}| > k σ_{r,c}; 0 otherwise }   (1)

where b(r, c) is the binary value of pixel (r, c), i(r, c) represents the pixel brightness in the current frame, μ_{r,c} and σ_{r,c} are the mean and standard deviation maintained by the background model, and k is a constant which depends on the point distribution.
- At the frame level, the raw classification based on the background model is improved, and the model is adapted when a global change in illumination occurs. A proper combination of subtraction techniques has been implemented: a different segmentation process is applied at the frame level and used to refine the segmentation carried out at the pixel level. Furthermore, this processing allows the system to identify global illumination changes. That is, a significant illumination change is assumed to have taken place when more than half of the image pixels change. When an event of this type occurs, a new adaptive background model is built; otherwise, the application would classify background pixels as moving objects, since the model is based on intensity values and a change in illumination alters them.

As in the previous phase, after properly segmenting an image, two consecutive morphological operations are applied to erase isolated points or lines caused by small dynamic factors. Later, pixels classified as background are incorporated into the adaptive background model, while foreground pixels are processed by the tracking method.

II. TRACKING MOVING TARGETS
Once the targets to be tracked have been identified, the next step is to track them. For that, first of all, a connected-component labeling algorithm is performed. However, due to segmentation errors, more than one labeled component may be obtained for the same target. Thus, a merge algorithm, based on neighbourhood and feature similarity, is applied.
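The labeling and merging steps just described can be sketched as follows. This is a compact illustrative implementation (4-connectivity, and bounding-box proximity as the merge criterion), whereas the paper merges by neighbourhood and feature similarity:

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary foreground mask via BFS."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for r, c in zip(*np.nonzero(mask)):
        if labels[r, c]:
            continue
        current += 1
        labels[r, c] = current
        queue = deque([(r, c)])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

def merge_close_blobs(labels, n, gap=2):
    """Group components whose bounding boxes lie within `gap` pixels of each
    other, compensating for over-segmentation of a single target."""
    boxes = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        boxes.append((ys.min(), xs.min(), ys.max(), xs.max()))
    parent = list(range(n))          # tiny union-find over the components
    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            a, b = boxes[i], boxes[j]
            if (a[0] - gap <= b[2] and b[0] - gap <= a[2]
                    and a[1] - gap <= b[3] and b[1] - gap <= a[3]):
                parent[find(j)] = find(i)
    return [find(i) for i in range(n)]

mask = np.zeros((6, 10), dtype=bool)
mask[0:2, 0:2] = True      # blob A
mask[0:2, 3:5] = True      # blob B, one column away from A
mask[5, 9] = True          # blob C, far from both
labels, n = label_components(mask)
groups = merge_close_blobs(labels, n, gap=2)
```

In the toy example the two nearby blobs end up in the same group, while the distant one keeps its own label.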
Fig. 1. Phase-1 flowchart of the implemented two-phase algorithm for moving object detection in unknown, dynamic environments.
Fig. 2. Phase-2 flowchart of the implemented two-phase algorithm for moving object detection in unknown, dynamic environments.

Then, minimum bounding rectangles are generated. After that, with the aim of performing the corresponding tracking, a pattern is built from each of them. Here, a pattern is the data structure that allows the system to track the moving objects by matching an object in two consecutive frames, even when it suffers a partial or total occlusion. Thus, in our case, a pattern is composed of two elements:
- a representative image of the target. It is not possible to directly compare two images of the same object when they are provided by an omnidirectional camera, since every object has a different
orientation depending on its position inside the scene, so several rotations would be necessary to correctly match the images of the same object in two different frames. Thus, it is necessary to apply a transformation from the circular omnidirectional image to a perspective one (see Fig. 3). This is done only for each region detected as an object of interest, since transforming the whole image would be very time-consuming.
- a feature array whose elements contain information about brightness and blob width and height, among other things, used to properly match two images of the same object in two consecutive frames, as well as in two stereo images.

Fig. 3. Representative perspective image from the labeled omnidirectional image.

Therefore, on the one hand, representative images are compared with those extracted from the previous frame or from the other stereo image. In this way, a pixel-similarity likelihood between representative images is obtained. On the other hand, a feature-similarity likelihood is generated from the feature array comparison. Both likelihoods are properly combined to match two images of the same object in consecutive frames or in frames taken from the dioptric stereo system. We have implemented a process to estimate the distance from the dioptric stereo system to the detected objects. The guidelines of the distance estimation method are as follows:
- a matching process between the images taken by the different cameras. First, the adaptive background model at two levels is independently applied to the images captured by each camera. Then, each detected blob is described by means of a feature array whose elements contain information about brightness and blob width and height, among other things. Next, each feature array is compared with those of all the blobs detected in the frame taken by the other camera, while the matching process included in the implemented moving object detection method is simultaneously performed.
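The combination of both likelihoods can be sketched as below. The paper does not specify the combination rule, so the weighted sum, the feature scaling and all function names are our assumptions:

```python
import numpy as np

def pixel_similarity(img_a, img_b):
    """Pixel-similarity likelihood between two same-size representative
    images: 1 minus the mean absolute intensity difference (values in [0, 1])."""
    return 1.0 - float(np.mean(np.abs(img_a - img_b)))

def feature_similarity(feat_a, feat_b, scale):
    """Feature-similarity likelihood from feature arrays (e.g. brightness,
    blob width and height); `scale` normalises each feature's range."""
    d = np.abs(np.asarray(feat_a) - np.asarray(feat_b)) / np.asarray(scale)
    return float(np.exp(-np.mean(d)))

def match_likelihood(img_a, img_b, feat_a, feat_b, scale, w=0.5):
    """Weighted combination of the two likelihoods; the weight w is an
    assumption, since the paper only states they are 'properly combined'."""
    return (w * pixel_similarity(img_a, img_b)
            + (1 - w) * feature_similarity(feat_a, feat_b, scale))

img = np.full((8, 8), 0.5)
score = match_likelihood(img, img, [0.5, 20.0, 40.0], [0.5, 20.0, 40.0],
                         scale=[1.0, 50.0, 100.0])
```

Identical representative images and identical feature arrays yield the maximum score of 1.0; a matching decision can then be made by picking the candidate with the highest score above a threshold.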
Thus, a similarity likelihood is calculated and a matching decision is made based on it.
- detection of the other camera in each frame, as depicted in Fig. 4. In fact, this step needs to be performed only once, because the baseline of the stereo system is fixed.
- estimation of the distance with respect to each camera from triangulation geometry, as shown in Fig. 4. The triangle to solve is determined by the projection ray of a 3D point of the real world on each image plane, as well as by the projection ray of the centre of the other camera of the system. This is possible thanks to the calibration step, since the projection rays can then be estimated, as mentioned above.

Fig. 4. Camera detection in the images captured by the two cameras.

III. STEREO SYSTEM
To approximately determine the distance from the system to a target, we need to estimate the correspondence between omnidirectional images, that is, the epipolar equation. From the point of view of stereo vision, an epipole is defined as the projection of one camera's centre on the image plane of the other camera. Unlike with traditional cameras, both epipoles are visible here, since each camera is within the field of view of the other. For that reason, in the case of omnidirectional cameras, it is not necessary to use a third external object for stereo calibration. This is the idea on which Zhu et al. [15] [16] based their virtual stereo vision system with a flexible baseline for detecting, tracking and localizing moving human subjects in unknown indoor environments. In the literature, other approaches developed for catadioptric systems can be found [16] [19] [20] [21]; in contrast, there is almost no previous work on dioptric stereo systems, which is why we implemented the distance estimation process described above.

IV. EXPERIMENTAL RESULTS
Two different experiments have been carried out to check the performance of the designed application. First, the performance of the moving object detection was evaluated by using only one fisheye camera.
After that, the estimation of the distance to the detected objects obtained through our dioptric stereo system was analysed. In this section, some of these results are provided.

A. Experimental set-up
For both kinds of experiments, we used a mobile manipulator which incorporates a visual system composed of two fisheye cameras mounted on the robot base, pointing upwards to the ceiling, to guarantee safety in its whole workspace. Fig. 5 depicts our experimental setup, which consists of a mobile Nomadic XR4000 base, a Mitsubishi PA10 arm, and two fisheye cameras (SSC-DC330P third-inch color cameras [22]
with fish-eye vari-focal lenses YV2.2x1.4A-2, which provide a 185-degree field of view).

Fig. 5. Experimental setup: external view of the arm and cameras.

Thus, on the one hand, a single fisheye camera was used to evaluate the moving object detection performance. The robot system was located in the centre of our laboratory, where it covered almost all the space and where most of the uncontrolled, dynamic factors named above were present (e.g. blinking of computer screens, shadows, mirror images on the glass windows, or variations in illumination due to the time of day or to switching a light on/off). On the other hand, the dioptric stereo system was used for the distance estimation experiments. In both cases, the images to process were acquired in 24-bit RGB color space at 640x480 resolution.

Fig. 6. Results of applying the novel adaptive background model with different subjects.

B. Moving Object Detection Evaluation
As pointed out above, the first series of experiments evaluated the performance of the novel adaptive, robust background model. For that, illumination conditions and object positions were changed. Two sequences of images resulting from applying the updated background model under the same illumination conditions are depicted in Fig. 6. As can be seen, the method is able to visually track moving objects without constraints such as clothes color or illumination. Similarly, illumination conditions were changed and, as shown in Fig. 7, the obtained results were also successful.

V. CONCLUSIONS
In this paper, a robust visual application to detect and track moving objects within a robot workspace has been presented, based on a pair of fisheye cameras. These cameras have the clear advantage of covering the whole workspace without leading to a time-consuming application, but there is little previous work on this kind of device. Consequently, we had to implement novel techniques to achieve our goal.
Fig. 7. Results of applying the novel adaptive background model under different illumination conditions.

Thus, the first subgoal was to design a process to detect moving objects within the observed scene. After studying several factors which can affect the detection process, a novel adaptive background model has been implemented, in which constraints such as waiting a period of time to build the initial background, or fixed illumination conditions, do not exist. In a similar
way, it is also capable of tracking the detected objects when they are not in movement. In addition, the designed method includes a matching process between two consecutive frames. The next step is to estimate the distance from the detected objects to the system. For that, a dioptric stereo system with a fixed baseline has been built. It was therefore necessary to perform a calibration process in order to obtain the fundamental matrix. Three different toolboxes were tested, but only two were used in the end. Finally, a method to estimate the distance from the objects to the system was implemented, based on a triangulation technique. This is possible because the cameras can see each other. It must be noted that the epipolar geometry of the dioptric stereo system was not used, although combining it with the current implementation to improve the accuracy of the matching process is part of our future work.

ACKNOWLEDGMENTS
This paper describes research carried out at the Robotic Intelligence Laboratory of Universitat Jaume I. Support for this laboratory is provided in part by the European Commission under project EYESHOTS (FP7 ICT ), by Fundacio Caixa-Castello under project P1-1B and by Ministerio de Ciencia under project DPI and FPI grant BES .

REFERENCES
[1] T. Svoboda, T. Pajdla, and V. Hlavác, Epipolar geometry for panoramic cameras, in European Conf. on Computer Vision (ECCV 98), Freiburg, Germany, July 1998.
[2] S. C. Wei, Y. Yagi, and M. Yachida, Building local floor map by use of ultrasonic and omni-directional vision sensor, in Int. Conf. on Robotics and Automation, Leuven, Belgium, May 1998.
[3] S. Baker and S. K. Nayar, A theory of single-viewpoint catadioptric image formation, Int. Journal of Computer Vision, vol. 35, no. 2.
[4] S. Baker and S. K. Nayar, A theory of catadioptric image formation, in Int. Conf. on Computer Vision (ICCV 98), Bombay, India, January 1998.
[5] R. W. Wood, Fish-eye views, and vision under water, Philosophical Magazine, vol. 12, Series 6.
[6] C. Geyer and K. Daniilidis, A unifying theory for central panoramic systems and practical applications, in European Conf. on Computer Vision (ECCV 2000), Dublin, Ireland, June-July 2000.
[7] L. G. Shapiro and G. C. Stockman, Computer Vision. Upper Saddle River: Prentice Hall.
[8] B. Kwolek, Color vision based person following with a mobile robot, in Third Int. Workshop on Robot Motion and Control (RoMoCo 02), November 2002.
[9] M. Tarokh and P. Ferrari, Case study: Robotic person following using fuzzy control and image segmentation, Robotic Systems, vol. 20, no. 9.
[10] M. Kobilarov, G. Sukhatme, J. Hyams, and P. Batavia, People tracking and following with mobile robot using an omnidirectional camera and a laser, in 2006 IEEE Int. Conf. on Robotics and Automation (ICRA 06), Orlando, Florida, May 2006.
[11] T. Yoshimi, M. Nishiyama, T. Sonoura, H. Nakamoto, S. Tokura, H. Sato, F. Ozaki, N. Matsuhira, and H. Mizoguchi, Development of a person following robot with vision based target detection, in 2006 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Beijing, China, October 2006.
[12] K. Toyama, J. Krum, B. Brumitt, and B. Meyers, Wallflower: Principles and practice of background maintenance, in Seventh IEEE Int. Conf. on Computer Vision, vol. 1, Kerkyra, Greece, 1999.
[13] I. Haritaoglu, D. Harwood, and L. S. Davis, W4: Real-time surveillance of people and their activities, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, August.
[14] H. Liu, W. Pi, and H. Zha, Motion detection for multiple moving targets by using an omnidirectional camera, in IEEE Int. Conf. on Robotics, Intelligent Systems and Signal Processing, vol. 1, Changsha, China, October 2003.
[15] Z. Zhu, K. D. Rajasekar, E. M. Riseman, and A. R. Hanson, Panoramic virtual stereo vision of cooperative mobile robots for localizing 3d moving objects, in IEEE Workshop on Omnidirectional Vision, June 2000.
[16] Z. Zhu, D. R. Karuppiah, E. M. Riseman, and A. R. Hanson, Keeping smart, omnidirectional eyes on you: adaptive panoramic stereovision for human tracking and localization with cooperative robots, IEEE Robotics and Automation Magazine, December 2004, Special Issue on Panoramic Robotics.
[17] C. Stauffer and W. E. L. Grimson, Adaptive background mixture models for real-time tracking, in Conf. on Computer Vision and Pattern Recognition (CVPR 99), vol. 2, June 1999.
[18] P. KaewTraKulPong and R. Bowden, An improved adaptive background mixture model for real-time tracking with shadow detection, in 2nd European Workshop on Advanced Video Based Surveillance Systems (AVBS 01), Kluwer Academic Publishers, September.
[19] C. Geyer and K. Daniilidis, Properties of the catadioptric fundamental matrix, in 7th European Conf. on Computer Vision (ECCV 2002), vol. 2, LNCS 2351, Copenhagen, Denmark, May-June 2002.
[20] Y. Negishi, J. Miura, and Y. Shirai, Calibration of omnidirectional stereo for mobile robots, in 2004 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2004), vol. 3, September-October 2004.
[21] S. Li and K. Fukumori, Spherical stereo for the construction of immersive VR environment, in IEEE Virtual Reality (VR 05), March 2005.
[22]
More informationDYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN. Gengjian Xue, Jun Sun, Li Song
DYNAMIC BACKGROUND SUBTRACTION BASED ON SPATIAL EXTENDED CENTER-SYMMETRIC LOCAL BINARY PATTERN Gengjian Xue, Jun Sun, Li Song Institute of Image Communication and Information Processing, Shanghai Jiao
More informationSURVEY PAPER ON REAL TIME MOTION DETECTION TECHNIQUES
SURVEY PAPER ON REAL TIME MOTION DETECTION TECHNIQUES 1 R. AROKIA PRIYA, 2 POONAM GUJRATHI Assistant Professor, Department of Electronics and Telecommunication, D.Y.Patil College of Engineering, Akrudi,
More informationFast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm
Fast Denoising for Moving Object Detection by An Extended Structural Fitness Algorithm ALBERTO FARO, DANIELA GIORDANO, CONCETTO SPAMPINATO Dipartimento di Ingegneria Informatica e Telecomunicazioni Facoltà
More informationPairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement
Pairwise Threshold for Gaussian Mixture Classification and its Application on Human Tracking Enhancement Daegeon Kim Sung Chun Lee Institute for Robotics and Intelligent Systems University of Southern
More informationComputer Vision Lecture 17
Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics 13.01.2015 Bastian Leibe RWTH Aachen http://www.vision.rwth-aachen.de leibe@vision.rwth-aachen.de Announcements Seminar in the summer semester
More informationPartial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems
Partial Calibration and Mirror Shape Recovery for Non-Central Catadioptric Systems Nuno Gonçalves and Helder Araújo Institute of Systems and Robotics - Coimbra University of Coimbra Polo II - Pinhal de
More informationOMNIDIRECTIONAL STEREOVISION SYSTEM WITH TWO-LOBE HYPERBOLIC MIRROR FOR ROBOT NAVIGATION
Proceedings of COBEM 005 Copyright 005 by ABCM 18th International Congress of Mechanical Engineering November 6-11, 005, Ouro Preto, MG OMNIDIRECTIONAL STEREOVISION SYSTEM WITH TWO-LOBE HYPERBOLIC MIRROR
More informationComputer Vision Lecture 17
Announcements Computer Vision Lecture 17 Epipolar Geometry & Stereo Basics Seminar in the summer semester Current Topics in Computer Vision and Machine Learning Block seminar, presentations in 1 st week
More informationComplex Sensors: Cameras, Visual Sensing. The Robotics Primer (Ch. 9) ECE 497: Introduction to Mobile Robotics -Visual Sensors
Complex Sensors: Cameras, Visual Sensing The Robotics Primer (Ch. 9) Bring your laptop and robot everyday DO NOT unplug the network cables from the desktop computers or the walls Tuesday s Quiz is on Visual
More informationAutomatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera
Automatic Generation of Indoor VR-Models by a Mobile Robot with a Laser Range Finder and a Color Camera Christian Weiss and Andreas Zell Universität Tübingen, Wilhelm-Schickard-Institut für Informatik,
More informationVisual Tracking of Planes with an Uncalibrated Central Catadioptric Camera
The 29 IEEE/RSJ International Conference on Intelligent Robots and Systems October 11-15, 29 St. Louis, USA Visual Tracking of Planes with an Uncalibrated Central Catadioptric Camera A. Salazar-Garibay,
More informationConnected Component Analysis and Change Detection for Images
Connected Component Analysis and Change Detection for Images Prasad S.Halgaonkar Department of Computer Engg, MITCOE Pune University, India Abstract Detection of the region of change in images of a particular
More informationFlexible Calibration of a Portable Structured Light System through Surface Plane
Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured
More informationA Real Time Human Detection System Based on Far Infrared Vision
A Real Time Human Detection System Based on Far Infrared Vision Yannick Benezeth 1, Bruno Emile 1,Hélène Laurent 1, and Christophe Rosenberger 2 1 Institut Prisme, ENSI de Bourges - Université d Orléans
More informationA Robust Two Feature Points Based Depth Estimation Method 1)
Vol.31, No.5 ACTA AUTOMATICA SINICA September, 2005 A Robust Two Feature Points Based Depth Estimation Method 1) ZHONG Zhi-Guang YI Jian-Qiang ZHAO Dong-Bin (Laboratory of Complex Systems and Intelligence
More informationMULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES
MULTIVIEW REPRESENTATION OF 3D OBJECTS OF A SCENE USING VIDEO SEQUENCES Mehran Yazdi and André Zaccarin CVSL, Dept. of Electrical and Computer Engineering, Laval University Ste-Foy, Québec GK 7P4, Canada
More informationThree-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera
Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Kazuki Sakamoto, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama Abstract
More informationEXAM SOLUTIONS. Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006,
School of Computer Science and Communication, KTH Danica Kragic EXAM SOLUTIONS Image Processing and Computer Vision Course 2D1421 Monday, 13 th of March 2006, 14.00 19.00 Grade table 0-25 U 26-35 3 36-45
More informationAn Approach for Real Time Moving Object Extraction based on Edge Region Determination
An Approach for Real Time Moving Object Extraction based on Edge Region Determination Sabrina Hoque Tuli Department of Computer Science and Engineering, Chittagong University of Engineering and Technology,
More informationStereo Image Rectification for Simple Panoramic Image Generation
Stereo Image Rectification for Simple Panoramic Image Generation Yun-Suk Kang and Yo-Sung Ho Gwangju Institute of Science and Technology (GIST) 261 Cheomdan-gwagiro, Buk-gu, Gwangju 500-712 Korea Email:{yunsuk,
More informationDRC A Multi-Camera System on PC-Cluster for Real-time 3-D Tracking. Viboon Sangveraphunsiri*, Kritsana Uttamang, and Pongsakon Pedpunsri
The 23 rd Conference of the Mechanical Engineering Network of Thailand November 4 7, 2009, Chiang Mai A Multi-Camera System on PC-Cluster for Real-time 3-D Tracking Viboon Sangveraphunsiri*, Kritsana Uttamang,
More informationMultiple View Geometry
Multiple View Geometry CS 6320, Spring 2013 Guest Lecture Marcel Prastawa adapted from Pollefeys, Shah, and Zisserman Single view computer vision Projective actions of cameras Camera callibration Photometric
More informationOn Road Vehicle Detection using Shadows
On Road Vehicle Detection using Shadows Gilad Buchman Grasp Lab, Department of Computer and Information Science School of Engineering University of Pennsylvania, Philadelphia, PA buchmag@seas.upenn.edu
More informationVEHICLE DETECTION FROM AN IMAGE SEQUENCE COLLECTED BY A HOVERING HELICOPTER
In: Stilla U et al (Eds) PIA. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 8 (/W) VEHICLE DETECTION FROM AN IMAGE SEQUENCE COLLECTED BY A HOVERING HELICOPTER
More informationDetecting and Identifying Moving Objects in Real-Time
Chapter 9 Detecting and Identifying Moving Objects in Real-Time For surveillance applications or for human-computer interaction, the automated real-time tracking of moving objects in images from a stationary
More informationA Framework for Multiple Radar and Multiple 2D/3D Camera Fusion
A Framework for Multiple Radar and Multiple 2D/3D Camera Fusion Marek Schikora 1 and Benedikt Romba 2 1 FGAN-FKIE, Germany 2 Bonn University, Germany schikora@fgan.de, romba@uni-bonn.de Abstract: In this
More informationAdaptive Background Mixture Models for Real-Time Tracking
Adaptive Background Mixture Models for Real-Time Tracking Chris Stauffer and W.E.L Grimson CVPR 1998 Brendan Morris http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Motivation Video monitoring and surveillance
More informationA Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA
Proceedings of the 3rd International Conference on Industrial Application Engineering 2015 A Simple Automated Void Defect Detection for Poor Contrast X-ray Images of BGA Somchai Nuanprasert a,*, Sueki
More informationPanoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints
Panoramic 3D Reconstruction Using Rotational Stereo Camera with Simple Epipolar Constraints Wei Jiang Japan Science and Technology Agency 4-1-8, Honcho, Kawaguchi-shi, Saitama, Japan jiang@anken.go.jp
More informationSpatio-Temporal Nonparametric Background Modeling and Subtraction
Spatio-Temporal onparametric Background Modeling and Subtraction Raviteja Vemulapalli R. Aravind Department of Electrical Engineering Indian Institute of Technology, Madras, India. Abstract Background
More informationCollecting outdoor datasets for benchmarking vision based robot localization
Collecting outdoor datasets for benchmarking vision based robot localization Emanuele Frontoni*, Andrea Ascani, Adriano Mancini, Primo Zingaretti Department of Ingegneria Infromatica, Gestionale e dell
More informationCOMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION
COMPARATIVE STUDY OF DIFFERENT APPROACHES FOR EFFICIENT RECTIFICATION UNDER GENERAL MOTION Mr.V.SRINIVASA RAO 1 Prof.A.SATYA KALYAN 2 DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING PRASAD V POTLURI SIDDHARTHA
More informationSynchronized Ego-Motion Recovery of Two Face-to-Face Cameras
Synchronized Ego-Motion Recovery of Two Face-to-Face Cameras Jinshi Cui, Yasushi Yagi, Hongbin Zha, Yasuhiro Mukaigawa, and Kazuaki Kondo State Key Lab on Machine Perception, Peking University, China {cjs,zha}@cis.pku.edu.cn
More informationOmni-directional Multi-baseline Stereo without Similarity Measures
Omni-directional Multi-baseline Stereo without Similarity Measures Tomokazu Sato and Naokazu Yokoya Graduate School of Information Science, Nara Institute of Science and Technology 8916-5 Takayama, Ikoma,
More informationDepth from two cameras: stereopsis
Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture
More informationMeasurement of Pedestrian Groups Using Subtraction Stereo
Measurement of Pedestrian Groups Using Subtraction Stereo Kenji Terabayashi, Yuki Hashimoto, and Kazunori Umeda Chuo University / CREST, JST, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551, Japan terabayashi@mech.chuo-u.ac.jp
More informationQueue based Fast Background Modelling and Fast Hysteresis Thresholding for Better Foreground Segmentation
Queue based Fast Background Modelling and Fast Hysteresis Thresholding for Better Foreground Segmentation Pankaj Kumar Surendra Ranganath + Weimin Huang* kumar@i2r.a-star.edu.sg elesr@nus.edu.sg wmhuang@i2r.a-star.edu.sg
More informationReal-Time Human Detection using Relational Depth Similarity Features
Real-Time Human Detection using Relational Depth Similarity Features Sho Ikemura, Hironobu Fujiyoshi Dept. of Computer Science, Chubu University. Matsumoto 1200, Kasugai, Aichi, 487-8501 Japan. si@vision.cs.chubu.ac.jp,
More informationDetection of Small-Waving Hand by Distributed Camera System
Detection of Small-Waving Hand by Distributed Camera System Kenji Terabayashi, Hidetsugu Asano, Takeshi Nagayasu, Tatsuya Orimo, Mutsumi Ohta, Takaaki Oiwa, and Kazunori Umeda Department of Mechanical
More informationAuto-focusing Technique in a Projector-Camera System
2008 10th Intl. Conf. on Control, Automation, Robotics and Vision Hanoi, Vietnam, 17 20 December 2008 Auto-focusing Technique in a Projector-Camera System Lam Bui Quang, Daesik Kim and Sukhan Lee School
More informationAn ICA based Approach for Complex Color Scene Text Binarization
An ICA based Approach for Complex Color Scene Text Binarization Siddharth Kherada IIIT-Hyderabad, India siddharth.kherada@research.iiit.ac.in Anoop M. Namboodiri IIIT-Hyderabad, India anoop@iiit.ac.in
More informationA New Method and Toolbox for Easily Calibrating Omnidirectional Cameras
A ew Method and Toolbox for Easily Calibrating Omnidirectional Cameras Davide Scaramuzza 1 and Roland Siegwart 1 1 Swiss Federal Institute of Technology Zurich (ETHZ) Autonomous Systems Lab, CLA-E, Tannenstrasse
More informationBackground Image Generation Using Boolean Operations
Background Image Generation Using Boolean Operations Kardi Teknomo Ateneo de Manila University Quezon City, 1108 Philippines +632-4266001 ext 5660 teknomo@gmail.com Philippine Computing Journal Proceso
More informationDepth from two cameras: stereopsis
Depth from two cameras: stereopsis Epipolar Geometry Canonical Configuration Correspondence Matching School of Computer Science & Statistics Trinity College Dublin Dublin 2 Ireland www.scss.tcd.ie Lecture
More informationHySCaS: Hybrid Stereoscopic Calibration Software
HySCaS: Hybrid Stereoscopic Calibration Software Guillaume Caron, Damien Eynard To cite this version: Guillaume Caron, Damien Eynard. HySCaS: Hybrid Stereoscopic Calibration Software. SPIE newsroom in
More informationReal-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images
Real-time Generation and Presentation of View-dependent Binocular Stereo Images Using a Sequence of Omnidirectional Images Abstract This paper presents a new method to generate and present arbitrarily
More informationA Calibration Algorithm for POX-Slits Camera
A Calibration Algorithm for POX-Slits Camera N. Martins 1 and H. Araújo 2 1 DEIS, ISEC, Polytechnic Institute of Coimbra, Portugal 2 ISR/DEEC, University of Coimbra, Portugal Abstract Recent developments
More informationTracking Under Low-light Conditions Using Background Subtraction
Tracking Under Low-light Conditions Using Background Subtraction Matthew Bennink Clemson University Clemson, South Carolina Abstract A low-light tracking system was developed using background subtraction.
More informationA Computer Vision Sensor for Panoramic Depth Perception
A Computer Vision Sensor for Panoramic Depth Perception Radu Orghidan 1, El Mustapha Mouaddib 2, and Joaquim Salvi 1 1 Institute of Informatics and Applications, Computer Vision and Robotics Group University
More informationCalibration of a fish eye lens with field of view larger than 180
CENTER FOR MACHINE PERCEPTION CZECH TECHNICAL UNIVERSITY Calibration of a fish eye lens with field of view larger than 18 Hynek Bakstein and Tomáš Pajdla {bakstein, pajdla}@cmp.felk.cvut.cz REPRINT Hynek
More informationNon-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs and Adaptive Motion Frame Method
Proceedings of the 2009 IEEE International Conference on Systems, Man, and Cybernetics San Antonio, TX, USA - October 2009 Non-rigid body Object Tracking using Fuzzy Neural System based on Multiple ROIs
More informationMeasurement of 3D Foot Shape Deformation in Motion
Measurement of 3D Foot Shape Deformation in Motion Makoto Kimura Masaaki Mochimaru Takeo Kanade Digital Human Research Center National Institute of Advanced Industrial Science and Technology, Japan The
More informationFeature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies
Feature Transfer and Matching in Disparate Stereo Views through the use of Plane Homographies M. Lourakis, S. Tzurbakis, A. Argyros, S. Orphanoudakis Computer Vision and Robotics Lab (CVRL) Institute of
More informationTowards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training
Towards a Calibration-Free Robot: The ACT Algorithm for Automatic Online Color Training Patrick Heinemann, Frank Sehnke, Felix Streichert, and Andreas Zell Wilhelm-Schickard-Institute, Department of Computer
More informationBehavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism
Behavior Learning for a Mobile Robot with Omnidirectional Vision Enhanced by an Active Zoom Mechanism Sho ji Suzuki, Tatsunori Kato, Minoru Asada, and Koh Hosoda Dept. of Adaptive Machine Systems, Graduate
More informationAdaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision
Adaptive Zoom Distance Measuring System of Camera Based on the Ranging of Binocular Vision Zhiyan Zhang 1, Wei Qian 1, Lei Pan 1 & Yanjun Li 1 1 University of Shanghai for Science and Technology, China
More informationIdle Object Detection in Video for Banking ATM Applications
Research Journal of Applied Sciences, Engineering and Technology 4(24): 5350-5356, 2012 ISSN: 2040-7467 Maxwell Scientific Organization, 2012 Submitted: March 18, 2012 Accepted: April 06, 2012 Published:
More informationAdaptive Panoramic Stereo Vision for Human Tracking and Localization with Cooperative Robots *
Adaptive Panoramic Stereo Vision for Human Tracking and Localization with Cooperative Robots * Zhigang Zhu Department of Computer Science, City College of New York, New York, NY 10031 zhu@cs.ccny.cuny.edu
More informationContinuous Multi-Views Tracking using Tensor Voting
Continuous Multi-Views racking using ensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA 90089-073.
More informationStructure from Small Baseline Motion with Central Panoramic Cameras
Structure from Small Baseline Motion with Central Panoramic Cameras Omid Shakernia René Vidal Shankar Sastry Department of Electrical Engineering & Computer Sciences, UC Berkeley {omids,rvidal,sastry}@eecs.berkeley.edu
More informationCS5670: Computer Vision
CS5670: Computer Vision Noah Snavely, Zhengqi Li Stereo Single image stereogram, by Niklas Een Mark Twain at Pool Table", no date, UCR Museum of Photography Stereo Given two images from different viewpoints
More informationMonitoring surrounding areas of truck-trailer combinations
Monitoring surrounding areas of truck-trailer combinations Tobias Ehlgen 1 and Tomas Pajdla 2 1 Daimler-Chrysler Research and Technology, Ulm tobias.ehlgen@daimlerchrysler.com 2 Center of Machine Perception,
More informationBackground Subtraction Techniques
Background Subtraction Techniques Alan M. McIvor Reveal Ltd PO Box 128-221, Remuera, Auckland, New Zealand alan.mcivor@reveal.co.nz Abstract Background subtraction is a commonly used class of techniques
More informationObject Detection in Video Streams
Object Detection in Video Streams Sandhya S Deore* *Assistant Professor Dept. of Computer Engg., SRES COE Kopargaon *sandhya.deore@gmail.com ABSTRACT Object Detection is the most challenging area in video
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationDepth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy
Depth Measurement and 3-D Reconstruction of Multilayered Surfaces by Binocular Stereo Vision with Parallel Axis Symmetry Using Fuzzy Sharjeel Anwar, Dr. Shoaib, Taosif Iqbal, Mohammad Saqib Mansoor, Zubair
More informationIndoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot
Indoor Positioning System Based on Distributed Camera Sensor Networks for Mobile Robot Yonghoon Ji 1, Atsushi Yamashita 1, and Hajime Asama 1 School of Engineering, The University of Tokyo, Japan, t{ji,
More informationarxiv: v1 [cs.cv] 28 Sep 2018
Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,
More informationMulti-Camera Calibration, Object Tracking and Query Generation
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object
More informationStereo Vision. MAN-522 Computer Vision
Stereo Vision MAN-522 Computer Vision What is the goal of stereo vision? The recovery of the 3D structure of a scene using two or more images of the 3D scene, each acquired from a different viewpoint in
More informationA Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion
A Novel Multi-Planar Homography Constraint Algorithm for Robust Multi-People Location with Severe Occlusion Paper ID:086 Abstract Multi-view approach has been proposed to solve occlusion and lack of visibility
More informationIntegration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement
Integration of Multiple-baseline Color Stereo Vision with Focus and Defocus Analysis for 3D Shape Measurement Ta Yuan and Murali Subbarao tyuan@sbee.sunysb.edu and murali@sbee.sunysb.edu Department of
More information3D Spatial Layout Propagation in a Video Sequence
3D Spatial Layout Propagation in a Video Sequence Alejandro Rituerto 1, Roberto Manduchi 2, Ana C. Murillo 1 and J. J. Guerrero 1 arituerto@unizar.es, manduchi@soe.ucsc.edu, acm@unizar.es, and josechu.guerrero@unizar.es
More informationContinuous Multi-View Tracking using Tensor Voting
Continuous Multi-View Tracking using Tensor Voting Jinman Kang, Isaac Cohen and Gerard Medioni Institute for Robotics and Intelligent Systems University of Southern California {jinmanka, icohen, medioni}@iris.usc.edu
More informationProf. Fanny Ficuciello Robotics for Bioengineering Visual Servoing
Visual servoing vision allows a robotic system to obtain geometrical and qualitative information on the surrounding environment high level control motion planning (look-and-move visual grasping) low level
More information