Towards an appearance-based approach to the leader-follower formation problem


Published in: Proceedings of Towards Autonomous Systems, pages (2007), University of Wales, Aberystwyth, United Kingdom.

James Oliver, Frédéric Labrosse
Department of Computer Science, University of Wales, Aberystwyth, Ceredigion, SY23 3ET, UK

Abstract

We present in this paper an application of previously developed techniques to the leader-follower formation problem, techniques that exclusively use vision. Contrary to other vision-based methods, the only information needed to perform this task is a set of images of the back of the leader robot that is tracked by the follower robot. For this, the follower robot controls its (translational and rotational) speed by performing only pixel-wise comparisons between the tracked images and the current image of its surroundings. Results are presented that quantify and qualify the performance of the method in a number of situations, all using real robots in our research lab. Assumptions and limitations are discussed.

1 Introduction

Convoys of vehicles are used in a large variety of situations. For example, the army uses convoys to bring resources to remotely stationed units, and the mining industry uses convoys of dumper trucks from excavation sites to processing sites. Often, these convoys operate in dangerous situations and involve many human drivers. Automating such convoys is therefore a valuable application for robotics. In such an application, a leader robot could be either tele-operated or equipped with the necessary sensors and intelligence to make it autonomous, and a number of follower robots would then just have to follow that leader (or follow each other) to create the convoy.

This problem has been tackled by many researchers, in particular on the aspects of controlling the geometrical formation (Das et al., 2002; Mariottini et al., 2005; Renaud et al., 2004). Most of these methods use vision, possibly combined with other sources of relative positioning of the robots and some explicit communication. In this paper we concentrate on the vision aspects of the problem, and in particular on the localisation and tracking of the leader robot in images. Most of the related research uses omni-directional cameras because they provide images in which the leader robot is potentially always visible. There are exceptions, such as (Chiem and Cervera, 2004), where a panning camera is used. The visual processing also varies between researchers and is usually simplified to the extreme. For example, a known coloured target installed on the leader is tracked in (Chiem and Cervera, 2004) to estimate the distance and orientation of the leader. In (Schneiderman et al., 1995), a geometrical model of a 2D target painted on the back of the leader is used. Requiring less engineering of the leader, (Malis and Chaumette, 2000) find the transformation between desired view and current view using a set of matched feature points on the leader. The optical flow computed in omni-directional images is used in a control law to reach and keep a desired configuration in (Vidal et al., 2003). However, these feature-based methods require the detection and matching of features, which is usually an expensive and not necessarily reliable process in general situations. Computing the optical flow shares most of these problems too. (Eklund et al., 1994) use a correlation filter-based comparison of the target and current views, using optimal filters, and adapt the filters to accommodate 3D variations of the target. This method is appealing but requires the non-obvious step of determining the filters.
Moreover, adapting the filters to cope with visual changes of the target could lead to drift in the used filter that could result in detection failure.

In most cases, vision is used to explicitly find the position, and possibly the orientation, of the leader with respect to the follower so that a good (maybe optimal) control law can be devised. On the contrary, we do not explicitly compute either the distance between leader and follower or their relative orientation. Moreover, we only perform low-level, pixel-by-pixel image comparisons between the current view obtained by the follower and images of the leader seen by the follower from various positions around the ideal relative position. The camera used is of the omni-directional type, mostly because of its interesting projection properties (see below). We do not address the problem of planning an accurate trajectory; we only use a simple control loop and only consider the formation where the follower is behind the leader at a fixed ideal distance.

The remainder of the paper is organised as follows. Section 2 describes the method proposed in this paper, with the image comparison method in Section 2.2 and the control loop in Section 2.3. The results of experiments are presented in Section 3 and a discussion and conclusion are given in Section 4.

2 The method

In this work, in line with our previous work, we do not explicitly extract features from the images that are grabbed. Indeed, such feature extraction and subsequent matching is often computationally expensive and/or requires many assumptions that are violated in all but contrived situations. Instead, we perform a simple pixel-wise comparison between images of the back of the leader robot and the current image of the surroundings of the follower robot to localise the leader in that image. This uses a principle similar to the one we use to compute the orientation of a robot (Labrosse, 2006) or to navigate to targets (Labrosse, 2007). The camera used is omni-directional and the images are unwrapped into panoramic images without any geometrical corrections (see (Labrosse, 2006) for a description of the process and Figure 1 for examples). Closely following our previous work, we first describe how images are compared and show that this cannot be directly used in the context of localising a moving target. We then propose a better method and show how it is used in the control loop of the follower.

2.1 A first naive method

An image of the leader (I_l) is compared to a part of the image grabbed by the follower centred at position (x, y) (I_f(x, y)) using the Euclidean distance between corresponding pixel values as follows:

    d(I_l, I_f(x, y)) = \sum_{k=1}^{hw} \sum_{i=1}^{c} (I_l(k, i) - I_f(x, y, k, i))^2,    (1)

where I_l(k, i) and I_f(x, y, k, i) are the i-th colour component of the k-th pixel of images I_l and I_f(x, y) respectively, each image being of size h × w and having c colour components. Pixels are enumerated in both images, without loss of generality, in raster order from the top-left corner to the bottom-right corner. We used for this work the RGB (Red-Green-Blue) colour space, thus having three components per pixel. The combination of Euclidean distance and RGB space is not necessarily the best to use but it is sufficient for our purposes (see (Labrosse, 2006) for a discussion). This function should show a minimum at the position (x_l, y_l) where the image of the leader is in the current image of the follower.

[Figure 1: Typical images grabbed by the follower and image of the leader. (a) Leader at the correct distance; (b) leader too close; (c) leader too far away; (d) leader.]

Figure 1 shows three typical images grabbed by the follower as well as an image of the back of the leader. Figure 2 shows the distance between the image of the leader and the images grabbed by the follower shown in Figure 1. It is clear that when the follower is at the correct distance from the leader, the distance in Equation (1) presents a minimum at the correct position (approximately at position x = 260). This is also the case when the follower is too close to the leader, although the minimum is not as well marked (and the absolute minimum in that case is not exactly where it should be).
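For concreteness, Equation (1) amounts to a sum of squared differences between a small template of the leader and an equally sized window of the panoramic image, minimised over window positions. The following is a minimal sketch of that comparison, not the authors' code; it assumes numpy arrays of shape (h, w, c) and, for simplicity, indexes windows by their top-left corner rather than their centre.

```python
import numpy as np

def distance(template: np.ndarray, window: np.ndarray) -> float:
    # Equation (1): sum over pixels and colour components of the
    # squared differences between template and window.
    diff = template.astype(np.float64) - window.astype(np.float64)
    return float(np.sum(diff ** 2))

def localise_naive(template: np.ndarray, image: np.ndarray):
    # Exhaustive search for the window of `image` closest to `template`;
    # returns the best (x, y) (top-left corner here) and its distance.
    h, w, _ = template.shape
    H, W, _ = image.shape
    best_pos, best_d = (0, 0), np.inf
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            d = distance(template, image[y:y + h, x:x + w])
            if d < best_d:
                best_pos, best_d = (x, y), d
    return best_pos, best_d
```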
However, when the leader-follower distance is too large, the distance between the corresponding images not only does not present a minimum at the desired position, but the absolute minimum corresponds to another event in the image, leading to a wrong estimate of the position of the leader in the image. This is because the part of the image corresponding to the leader is now no larger than other events of the background, which could be considered as noise when compared to the reference image of the leader. This implies that a multi-resolution scheme must be used. Another remark to be made is that the valley surrounding the absolute minimum in Figure 2(a) is very narrow, in particular much narrower than in our previous work (Labrosse, 2006; Labrosse, 2007). Moreover, its size is related to the size of the leader in the images. A better, multi-resolution method is therefore needed to localise the leader in the images grabbed by the follower.

[Figure 2: Distance as a function of (x, y) between the image of the leader and the grabbed images in Figure 1. (a) Leader at the correct distance; (b) leader too close; (c) leader too far away.]

2.2 Localisation of the leader

Informal experiments showed that a reasonably good estimate of the position of the leader in images can be found even if the follower-to-leader distance does not exactly correspond to that of the image of the leader (Figure 1(c) shows a rather extreme situation). Moreover, Figure 2(b) shows that when the leader is too close to the follower a good estimate of the leader's position can still be found. Three images of the leader were therefore used: the largest corresponds to the leader at the correct distance (I_l1, of size 23 × 21); the smallest (I_l3, of size 10 × 9) was taken from an image grabbed with the leader at what was considered a maximum distance (approximately 3 m in the reported experiments, due to space constraints imposed by our lab, the correct distance being approximately 1.5 m); and one was created by down-sampling the large image to a size approximately midway between the large and small images (I_l2, of size 18 × 16). The images are given in Figure 3.

[Figure 3: The images of the leader used for the tracking: from left to right I_l1, I_l2, I_l3, magnified.]

Because the three images are now of different sizes, the Euclidean distance in Equation (1) must be normalised so that it can be compared between the three images:

    d_n(I_l, I_f(x, y)) = \frac{1}{hw} \sum_{k=1}^{hw} \sum_{i=1}^{c} (I_l(k, i) - I_f(x, y, k, i))^2.    (2)

The three images are used in turn to find three absolute minima of Equation (2) and the smallest of the three values is taken as corresponding to the correct match. The position of the corresponding minimum gives the location of the leader in the images.
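Normalising by the number of pixels makes the scores of the three differently sized templates directly comparable. Below is a sketch of this multi-template matching, again only an assumed rendering of the procedure, reusing the hypothetical helpers above.

```python
def distance_normalised(template: np.ndarray, window: np.ndarray) -> float:
    # Equation (2): the distance of Equation (1) divided by the
    # number of pixels h * w of the template.
    h, w, _ = template.shape
    return distance(template, window) / (h * w)

def localise_multiscale(templates, image):
    # Match each template (e.g. I_l1, I_l2, I_l3) over the image and keep
    # the position with the smallest normalised distance overall.
    best_pos, best_d, best_idx = None, np.inf, -1
    for idx, t in enumerate(templates):
        h, w, _ = t.shape
        H, W, _ = image.shape
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                d = distance_normalised(t, image[y:y + h, x:x + w])
                if d < best_d:
                    best_pos, best_d, best_idx = (x, y), d, idx
    return best_pos, best_d, best_idx
```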

Not all three images need to (and should) be used, however. Indeed, the projection of the world surrounding the follower onto the panoramic image plane is such that the position of the leader in the images moves up and down as a function of the leader-to-follower distance, as can be seen in Figure 1. This implies that the choice of which image of the leader to use in Equation (2) can be directed by where the leader was last located. When the leader's vertical position was last found to be below an experimentally defined threshold, i.e. close to the follower, all three images were used (using the small image was not considered to be expensive). However, if the leader's vertical position was found to be above the threshold, only images I_l2 and I_l3 were used. This first reduces the amount of computation to be done and second prevents the detection of false absolute minima as in Figure 2(c).

Finally, the search for the minimum can also be guided by where the leader was last found. Assuming that the relative position in Cartesian space of the leader and follower does not change too fast with respect to the processing speed, the search can be limited to a small area of the grabbed images around the last known position of the leader (twice the size of I_l2 in all reported experiments). In that restricted area, an exhaustive search is performed. When the minimum provided by the localisation of the leader was judged too high, i.e. corresponding to a poor match, the leader was reported as not found, the follower was stopped and an exhaustive search was performed until successful localisation. This is also what is done at the initialisation stage. This was not triggered in any of the experiments reported here, apart from when starting the experiment, and in practice proved not to be very successful because, by the time the leader could have been localised, it had usually moved enough to be too small in the images.

2.3 Control loop

The result of the localisation of the leader gives the 2D position of the leader in the grabbed images (which image of the leader was found to be the best match is too coarse a scale to be used in the control of the follower and is redundant with the vertical position). This position is compared with the desired position and the difference is used in a proportional controller to adjust the speed of the follower: the horizontal difference for the steering and the vertical difference for the translational speed. Indeed, the horizontal error indicates a heading misalignment while the vertical error is indicative of the error in the distance between the two robots. This is because the camera, in fact the mirror, is positioned above the two robots, therefore projecting the leader at a height in the panoramic image that depends on the distance to the leader. However, for the height error to be useful, one has to assume that the two robots remain in the same plane, in other words that the floor is flat. This is obviously constraining in outdoor situations but is acceptable in most office-like environments. A proportional controller proved to be enough for the type of motion the leader was performing in our experiments. The gains of the controller were determined manually by experimentation. Sonars were also integrated in the system as a failsafe mechanism but this was not triggered during any of the reported experiments.
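Putting the pieces together, each iteration of the follower's loop localises the leader in a window around its last known position and feeds the horizontal and vertical pixel errors through two proportional gains. The sketch below is only an illustration of that loop: the gain values, the desired pixel position, the match threshold and the robot.set_speeds interface are all hypothetical, not the paper's.

```python
KP_TURN, KP_SPEED = 0.5, 0.01   # hypothetical gains (tuned by hand in the paper)
DESIRED_X, DESIRED_Y = 260, 40  # hypothetical desired pixel position of the leader
MATCH_THRESHOLD = 1500.0        # hypothetical per-pixel squared-error threshold

def control_step(robot, templates, image, last_pos):
    # One track-and-control iteration. The search area is a small window
    # around the last known position (sized relative to I_l2, templates[1]).
    lx, ly = last_pos
    th, tw, _ = templates[1].shape
    y0, x0 = max(0, ly - th), max(0, lx - tw)
    window = image[y0:ly + th, x0:lx + tw]
    pos, d, _ = localise_multiscale(templates, window)
    if pos is None or d > MATCH_THRESHOLD:
        # Poor match: stop the follower (a full re-acquisition search
        # would follow, as described in Section 2.2).
        robot.set_speeds(rotational=0.0, translational=0.0)
        return last_pos
    x, y = x0 + pos[0], y0 + pos[1]  # back to full-image coordinates
    # Horizontal error steers the robot; vertical error sets its speed.
    robot.set_speeds(rotational=KP_TURN * (DESIRED_X - x),
                     translational=KP_SPEED * (DESIRED_Y - y))
    return (x, y)
```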
3 Experiments

We performed a number of experiments to assess the accuracy and repeatability of the system. The results are reported after a description of the setup.

3.1 Experimental setup

The experiments were conducted in our lab using two Pioneer 2DXe robots. One, the leader, was tele-operated on trajectories that were made as similar as possible for the repeatability experiments by using markings on the floor. The second was the follower and was equipped with the omni-directional camera. The software was running on the robot's on-board computer (a Pentium III running at 800 MHz). Quantitative measurements were performed using the Vicon 512 motion tracking system, which can track in real time the position and orientation of pre-determined objects using reflective markers. The accuracy of the system is of the order of a millimetre. The Vicon data was obtained by the robot from a server running on the Vicon machine and saved locally on the robot. The experiments were conducted in the lab during normal hours, when people move about. Finally, the same images of the leader were used for all experiments but had been grabbed well before the actual experiments took place, therefore probably in different illumination conditions.

3.2 Results

In all the reported experiments, the follower was placed behind the leader at approximately the correct distance and orientation. The first step was to acquire the leader and to align with it. After that, the leader was moved. All distances are given in metres. The starting point of the robots in the path diagrams is shown as a circle.

3.2.1 Accuracy

The first set of results concerns the accuracy of the system; quantitative results are given in Table 1. The first experiment is the simple straight line case. Figure 4 shows the path of the two robots and the distance between them for each frame.

[Figure 4: Straight line. (a) Paths; (b) distance (m) per frame.]

The leader's path was not quite straight and the follower closely replicated it. The distance between the two robots is not constant but does not vary significantly (standard deviation of 9 cm). The mean distance is too high, which is visible on the graph. This is due to the follower not quite catching up while the leader was moving, up to frame 150, at which point the follower did catch up and overshot slightly.

[Table 1: Quantitative results for the accuracy of the system (min, max, mean and standard deviation of the leader-follower distance for the straight line, circle and figure of eight experiments).]

The second experiment is a circular path followed twice by the leader; the results are shown in Figure 5. It is clear that the follower replicates the path of the leader. It is also clear that the follower is systematically inside the circle of the leader, as a trailer would be. This is because, in effect, the implemented method creates an as-rigid-as-possible link between the two robots. The performance is similar to that of the straight line case. It is to be noted that the glitch visible on the path of the leader around position (−1, 3) was due to the robot being at the limit of the area covered by the Vicon system and therefore slightly erroneously placed. This obviously reflects on the distance measurements around frame 130. This does not change significantly the statistics of the distance measurements.

[Figure 5: Circle. (a) Paths; (b) distance (m) per frame.]

A less trivial path, given the available space, was used in the next experiment: the leader was tele-operated on a figure of eight path. Results are shown in Figure 6. Again, the follower is inside the path of the leader, which is well replicated by the follower. The mean distance in that case is closer to the desired distance but its standard deviation is larger compared to the other experiments. This is because the leader was not moving as fast, and the follower did not have to move as fast because of the higher curvature of the path. It is interesting to note that where the curvature was highest, the leader as seen from the follower was significantly different from the leader when it is aligned with the follower in the correct position, in particular showing a lot more black (of the wheels) and a lot less red (of the body). Despite this, the system worked. Informal experiments showed that when the leader follows a tight circular path of radius close to 1.5 m, such that the follower only needs to turn on the spot, the system still works, with occasional localisation failures.

Finally, an almost straight path was followed by the leader, once moving forward, then moving backwards; see

Figure 7 (the statistics for the distance are not given because of the binary character of the results).

[Figure 6: Figure of eight. (a) Paths; (b) distance.]

[Figure 7: Forward and backward. (a) Paths; (b) distance.]

The trailer effect is very visible here when the leader reverses (and the leader would certainly have been lost by the follower if the experiment had not been interrupted due to space constraints). It is also clear, looking at Figure 7(b), that the desired leader-to-follower distance is never reached: in the first half of the experiment it is too high (when the leader moves forward, therefore pulling the follower), and in the second half it is too low (when the follower is pushed by the reversing leader).

3.2.2 Repeatability

The system's repeatability was quantified by running the same experiment ten times, each time recording the positions of the leader and follower, for two different paths (straight line and circle). However, the paths followed by the leader were not exactly the same due to mechanical imperfections (the leader was initially manually positioned as close as possible to a fixed starting pose and given the same command). The case of the straight line is shown in Figure 8 (the straight line experiment in Section 3.2.1 is the same as the first straight line here) and Table 2 gives the statistics of the distance for each straight line and overall (all distances together).

[Figure 8: Repeatability of the straight line. (a) Paths; (b) distances.]

It is clear that in both cases all results are within a few centimetres of each other.

[Table 2: Quantitative results for the repeatability of the system for the straight line (min, max, mean and standard deviation of the distance for each run and overall).]

[Table 3: Quantitative results for the repeatability of the system for the circle (min, max, mean and standard deviation of the distance for each run and overall).]

[Figure 9: Repeatability of the circle. (a) Paths; (b) distances.]

Note that the circles performed by the leader in this case are smaller than the circle performed in Section 3.2.1 (approximately 3.40 m against 4.30 m) and that the leader-to-follower distance in the former is shorter than in the latter. This confirms the results obtained for the figure of eight in Section 3.2.1: in some cases the distance was too large due to the follower not catching up with the leader. It is interesting to note that the distance for each frame in the case of the circle is also qualitatively self-similar, with a pattern that repeats itself in all experiments. We can only explain this by an oscillating, over-shooting behaviour that is the same in all experiments due to the similar paths followed by the leader. This didn't happen in the case of the straight line, probably because in that case the follower was never quite able to catch up with the leader.

4 Discussion and conclusion

The results show a good performance of the system, despite the crude way of dealing with the changes in size of the leader in grabbed images and the lack of explicit handling of the changes in appearance when the leader is not aligned with the follower. However, when the leader's appearance becomes dramatically different because of its incorrect orientation, the detection of the leader can fail (in fact, it often succeeds because the robots used in these experiments are mostly circular and self-similar from all around, apart from the wheels, which do present a different visual appearance). To solve this problem, more images of the leader would need to be used in the matching process. However, this would tend to slow down the process and would certainly not be an elegant solution. A better solution would be to incorporate 3D transformations of the images in the optimisation procedure.

The algorithm to localise the leader can also be improved. The limited exhaustive search used here is still expensive and some better prediction of the position of the target could be done using, for example, a polynomial fit of the previous positions as in (Schneiderman et al., 1995) or a Kalman filter. However, for this to be of any use, the leader's motion must be known to be somehow constrained, such as when driving on a road, which is not always possible or desirable. The control of the follower as it is implemented does not allow for different configurations of the formation and would not work for a non-holonomic follower in any other configuration. Also, planning trajectories, as in (Chiem and Cervera, 2004), could be another area for future work. This, however, requires the estimation of the relative position and orientation of the leader.

The method presented here requires a fair amount of calibration: three images of the leader must be available along with their desired position in the images. The method is robust to small changes of these values, but any dramatic change of, for example, the geometry of the follower, such as the alignment of the camera, would result in a failure of the method. Only using one image and the 3D transformations mentioned above would help, but would not solve all these problems. However, the robustness to changes such as the desired height of the leader in the images was informally tested and used to alter the desired leader-to-follower distance without changing the images, which proved successful to some extent.

We also assume that the appearance of the leader does not change, e.g. following changes in illumination. Re-acquisition of images would be a solution, but again not an elegant one. A different colour space would certainly help (Woodland and Labrosse, 2005). This does need more work.

The flat floor assumption is obviously not desirable. Breaking it would result in a wrong distance between the two robots, which could have dramatic consequences. However, should the localisation of the leader in the images be done using 3D transformations (see above) rather than with an explicit, coarse multi-resolution approach, then the distance to the leader would be evaluated from the 3D transformation, thus removing the assumption.

Another assumption made in this work is that the leader's appearance is sufficiently different from other visual features of the environment. In a scenario such as convoy formation involving many similar vehicles, this obviously would be a problem as all the vehicles in front of any given vehicle might be visible and could therefore fool the system. However, because of the restricted search domain, both in position and scale, the wrong target would never be selected. In fact, we have informally tested this situation in our lab by driving the leader close to other similar static robots to try to make the follower latch onto these. This has never happened.

Despite these limitations, our simple, pixel-based method of localising the leader provides good performance with very good repeatability and accuracy.

References

Chiem, S. Y. and Cervera, E. (2004). Vision-based robot formations with Bézier trajectories. In Proceedings of the International Conference on Intelligent Robots and Systems.

Das, A. K., Fierro, R., Kumar, V., Ostrowski, J. P., Spletzer, J., and Taylor, C. J. (2002). A vision-based formation control framework. IEEE Transactions on Robotics and Automation, 18(5).

Eklund, M. W., Ravichandran, G., Trivedi, M. M., and Marapane, S. B. (1994). Real-time visual tracking using correlation techniques. In Proceedings of the IEEE Workshop on Applications of Computer Vision.

Labrosse, F. (2006). The visual compass: Performance and limitations of an appearance-based method. Journal of Field Robotics, 23(10).

Labrosse, F. (2007). Short and long-range visual navigation using warped panoramic images. Robotics and Autonomous Systems. Accepted for publication.

Malis, E. and Chaumette, F. (2000). 2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement. International Journal of Computer Vision, 37(1).

Mariottini, G. L., Pappas, G., Prattichizzo, D., and Daniilidis, K. (2005). Vision-based localization of leader-follower formations. In Proceedings of the IEEE Conference on Decision and Control.

Renaud, P., Cervera, E., and Martinet, P. (2004). Towards a reliable vision-based mobile robot formation control. In Proceedings of the International Conference on Intelligent Robots and Systems, volume 4.

Schneiderman, H., Nashman, M., Wavering, A. J., and Lumia, R. (1995). Vision-based robotic convoy driving. Machine Vision and Applications, 8(6).

Vidal, R., Shakernia, O., and Sastry, S. (2003). Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 1.

Woodland, A. and Labrosse, F. (2005). On the separation of luminance from colour in images. In Proceedings of the International Conference on Vision, Video, and Graphics, pages 29–36, University of Edinburgh, UK.


More information

OPTIMAL LANDMARK PATTERN FOR PRECISE MOBILE ROBOTS DEAD-RECKONING

OPTIMAL LANDMARK PATTERN FOR PRECISE MOBILE ROBOTS DEAD-RECKONING Proceedings of the 2001 IEEE International Conference on Robotics & Automation Seoul, Korea May 21-26, 2001 OPTIMAL LANDMARK PATTERN FOR PRECISE MOBILE ROBOTS DEAD-RECKONING Josep Amat*, Joan Aranda**,

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Segmentation and Tracking of Partial Planar Templates

Segmentation and Tracking of Partial Planar Templates Segmentation and Tracking of Partial Planar Templates Abdelsalam Masoud William Hoff Colorado School of Mines Colorado School of Mines Golden, CO 800 Golden, CO 800 amasoud@mines.edu whoff@mines.edu Abstract

More information

A New Class of Corner Finder

A New Class of Corner Finder A New Class of Corner Finder Stephen Smith Robotics Research Group, Department of Engineering Science, University of Oxford, Oxford, England, and DRA (RARDE Chertsey), Surrey, England January 31, 1992

More information

Figure 1 - Refraction

Figure 1 - Refraction Geometrical optics Introduction Refraction When light crosses the interface between two media having different refractive indices (e.g. between water and air) a light ray will appear to change its direction

More information

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK

TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK TRANSPARENT OBJECT DETECTION USING REGIONS WITH CONVOLUTIONAL NEURAL NETWORK 1 Po-Jen Lai ( 賴柏任 ), 2 Chiou-Shann Fuh ( 傅楸善 ) 1 Dept. of Electrical Engineering, National Taiwan University, Taiwan 2 Dept.

More information

Real Time Motion Detection Using Background Subtraction Method and Frame Difference

Real Time Motion Detection Using Background Subtraction Method and Frame Difference Real Time Motion Detection Using Background Subtraction Method and Frame Difference Lavanya M P PG Scholar, Department of ECE, Channabasaveshwara Institute of Technology, Gubbi, Tumkur Abstract: In today

More information

TRAFFIC LIGHTS DETECTION IN ADVERSE CONDITIONS USING COLOR, SYMMETRY AND SPATIOTEMPORAL INFORMATION

TRAFFIC LIGHTS DETECTION IN ADVERSE CONDITIONS USING COLOR, SYMMETRY AND SPATIOTEMPORAL INFORMATION International Conference on Computer Vision Theory and Applications VISAPP 2012 Rome, Italy TRAFFIC LIGHTS DETECTION IN ADVERSE CONDITIONS USING COLOR, SYMMETRY AND SPATIOTEMPORAL INFORMATION George Siogkas

More information

A Statistical Consistency Check for the Space Carving Algorithm.

A Statistical Consistency Check for the Space Carving Algorithm. A Statistical Consistency Check for the Space Carving Algorithm. A. Broadhurst and R. Cipolla Dept. of Engineering, Univ. of Cambridge, Cambridge, CB2 1PZ aeb29 cipolla @eng.cam.ac.uk Abstract This paper

More information

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al.

Interactive comment on Quantification and mitigation of the impact of scene inhomogeneity on Sentinel-4 UVN UV-VIS retrievals by S. Noël et al. Atmos. Meas. Tech. Discuss., www.atmos-meas-tech-discuss.net/5/c741/2012/ Author(s) 2012. This work is distributed under the Creative Commons Attribute 3.0 License. Atmospheric Measurement Techniques Discussions

More information