The Experience of the ARGO Autonomous Vehicle

Massimo Bertozzi, Alberto Broggi, Gianni Conte, and Alessandra Fascioli
Dipartimento di Ingegneria dell'Informazione, Università di Parma, Parma, Italy

ABSTRACT

This paper presents and discusses the first results obtained by the GOLD (Generic Obstacle and Lane Detection) system as an automatic driver of ARGO. ARGO is a Lancia Thema passenger car equipped with a vision-based system that extracts road and environmental information from the acquired scene. By means of stereo vision, obstacles on the road are detected and localized, while the processing of a single monocular image extracts the road geometry in front of the vehicle. The generality of the underlying approach makes it possible to detect generic obstacles (without constraints on shape, color, or symmetry) and to detect lane markings even in dark and in strong shadow conditions. The hardware system consists of a PC Pentium 200 MHz with MMX technology and a frame-grabber board able to acquire 3 b/w images simultaneously; the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel, while debug information is presented to the user on an on-board monitor and a LED-based control panel.

Keywords: autonomous vehicle, computer vision, lane detection, obstacle detection

1. THE ARGO VEHICLE

ARGO is the experimental autonomous vehicle developed at the Dipartimento di Ingegneria dell'Informazione of the University of Parma, Italy. It integrates the main results of the research conducted over the last few years on algorithms and architectures for vision-based automatic road vehicle guidance. Thanks to the availability of the ARGO vehicle, a number of different solutions for autonomous navigation have been developed, tested and tuned, particularly for the basic functionalities of Obstacle Detection and Lane Detection.
The most promising approaches for both functionalities have been integrated into the GOLD (Generic Obstacle and Lane Detection) system,1 which currently acts as the automatic driver of ARGO. ARGO is a Lancia Thema 2000 passenger car (figure 1.a) equipped with a vision-based system that extracts road and environmental information from the acquired scene, and with different output devices used to test the automatic features.

The input

Only passive sensors (cameras) are used on ARGO to sense the surrounding environment, since they offer the possibility to acquire data in a non-invasive way, namely without altering the environment. Because of the large number of vehicles that could be moving simultaneously, this is a prominent advantage with respect to invasive ways of perceiving the environment, which could lead to an unacceptable pollution of the environment.

This work was partially supported by the Italian National Research Council (CNR) under the frame of the Progetto Finalizzato Trasporti 2. M. Bertozzi: bertozzi@ce.unipr.it, A. Broggi: broggi@ce.unipr.it, G. Conte: conte@ce.unipr.it, A. Fascioli: fascal@ce.unipr.it
Figure 1. (a) the ARGO experimental vehicle; (b) the cameras; (c) the electric engine installed on the steering column and the output monitor; (d) the segment of road used for calibration; (e) and (f) left and right views of the calibration grid from ARGO's stereo cameras

The vision system

The ARGO vehicle is equipped with a stereoscopic vision system (figure 1.b) consisting of two synchronized cameras able to acquire pairs of grey level images. The installed devices are small (3.2 cm × 3.2 cm) low cost cameras featuring
a 6.0 mm focal length and a 360-line resolution, which can receive the synchronism from an external signal. The cameras lie inside the car at the top corners of the windscreen, so that the longitudinal distance between the two cameras is maximum. The optical axes are parallel and, in order to handle non-flat roads as well, part of the scene over the horizon is captured, even if the framing of a portion of the sky can be critical for image brightness: in case of high contrast the sensor may acquire oversaturated images.

The acquisition system

Images are acquired by a PCI Matrox board, which is able to grab three images simultaneously. The images are stored directly into the main memory of the host computer thanks to the use of DMA. The acquisition can be performed in real time, at a 25 Hz rate in the case of full frames or at a 50 Hz rate in the case of single field acquisition.

System calibration

Since the processing is based on stereo vision, camera calibration plays a basic role in the success of the approach. It is divided into two steps.

Supervised calibration: the first part of the calibration process is an interactive step: a grid of known size (figure 1.d) has been painted onto the ground and two stereo images (figures 1.e and 1.f) are captured and used for the calibration. Thanks to an X-Window based graphical interface, a user selects the intersections of the grid lines using a mouse; these intersections represent a small set of homologous points whose world coordinates are known to the system; this mapping is used to compute the calibration parameters. The set of homologous points is used to minimize different cost functions, such as the distance between each point and its neighbors and line parallelism. This first step is intended to be performed only once, when the orientation of the cameras or the vehicle trim has changed.
Since the set of homologous points is small and their coordinates may be affected by human imprecision, this calibration represents only a rough guess of the parameters, and a further process is required.

Automatic parameter tuning: after the supervised phase, the computed calibration parameters have to be refined. Moreover, small changes in the vision system setup or in the vehicle trim require a periodic tuning of the calibration. For this purpose an automatic procedure has been developed.2 Since this step is only a refinement, a structured environment, such as the grid, is no longer required and a mere flat road in front of the vision system suffices. The parameter tuning consists of an iterative procedure based on the application of the IPM transform to stereo images (see section 3.2) and takes about 20 seconds.

Output

The ARGO vehicle has autonomous steering capabilities: the result of the processing (position of obstacles and geometry of the road) is used to drive an actuator on the steering wheel (figure 1.c). More precisely, the outputs provided by the GOLD vision system, namely the vehicle lateral offset, the vehicle yaw relative to the road centerline and the upcoming road curvature, are combined to determine the lane center at a given distance ahead of the vehicle. The steering wheel is turned to head the vehicle toward that point. In addition, the information coming from a speedometer will be integrated to handle changes in the vehicle speed as well.

For debug purposes, the result of the processing is also fed to the driver through a set of output devices installed on board the vehicle. An acoustic device warns the driver when dangerous conditions are detected, e.g. when the distance to the leading vehicle is under a safety threshold or when the vehicle position within the lane is not safe.
Moreover, visual feedback is supplied to the driver by displaying the results both on an on-board monitor (figure 1.c) and on a LED-based control panel: the monitor presents the acquired left image with markers highlighting the lane markings as well as the position of any obstacles, while the LEDs encode the offset of the vehicle with respect to the road center line.
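The supervised calibration step amounts to fitting a road-plane-to-image mapping from the clicked grid intersections. A minimal sketch of this idea, assuming a simple homography model and made-up pixel coordinates (the actual GOLD calibration also minimizes cost functions such as line parallelism), is:

```python
import numpy as np

def fit_homography(world_pts, image_pts):
    """Estimate the 3x3 homography H mapping road-plane points (x, y)
    to image points (u, v) from >= 4 correspondences, via the DLT."""
    A = []
    for (x, y), (u, v) in zip(world_pts, image_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # Least-squares solution: right singular vector associated with the
    # smallest singular value of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical grid intersections: four corners of a 2 m x 2 m cell on the
# road, and the pixel positions clicked by the user (values are made up).
world = [(0, 0), (2, 0), (2, 2), (0, 2)]
image = [(120, 400), (520, 400), (600, 250), (80, 250)]
H = fit_homography(world, image)
```

With exactly four non-degenerate correspondences the homography is determined exactly; the painted grid supplies many more intersections, which the least-squares solution averages over.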
2. THE PROCESSING SYSTEM

2.1. Architectural Issues

Two different architectural solutions have been considered and evaluated: a special-purpose and a standard processing system. The advantages offered by the first solution, such as an ad-hoc design of both the processing paradigm and the overall system architecture, are diminished by the necessity of managing the complete project, starting from the hardware level (design of the ASICs) up to the design of the architecture and of the programming language along with an optimizing compiler, and finally to the development of applications using the specific computational paradigm. Conversely, the latter takes advantage of standard development tools and environments but suffers from a less specific instruction set and a less oriented system architecture. In addition, the following technological aspects need to be considered as well: the fast technological improvements, which tend to reduce the lifetime of the system; and the costs of system design and engineering, which are justified only for productions based on large volumes. For these reasons the architectural solution currently under evaluation on the ARGO vehicle is based on a standard 200 MHz MMX Pentium processor.

The MMX Technology

MMX technology represents an enhancement of the Intel processor family, adding instructions, registers, and data types specifically designed for multimedia data processing. Namely, software performance is boosted by exploiting a SIMD technique: multiple data elements can be processed in parallel using a single instruction. The new general-purpose instructions supported by MMX technology perform arithmetic and logical operations on multiple data elements packed into 64-bit quantities. These instructions accelerate the performance of applications based on compute-intensive algorithms that perform localized recurring operations on small native data.
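The packed-SIMD idea can be illustrated with NumPy's fixed-width integer types (an analogy only, not actual MMX intrinsics): one vectorized operation updates eight 8-bit pixels at once, and both MMX overflow policies, wraparound and saturation, can be emulated:

```python
import numpy as np

# Eight 8-bit pixels, mirroring one 64-bit MMX register of packed bytes.
pixels = np.array([0, 50, 100, 150, 200, 250, 251, 255], dtype=np.uint8)

# Wraparound mode: plain uint8 addition keeps only the low 8 bits,
# so 250 + 10 overflows to 4.
wrap = pixels + np.uint8(10)

# Saturation mode: results that would overflow clamp to 255 (0xFF);
# widening to uint16 before clamping avoids the intermediate overflow.
sat = np.minimum(pixels.astype(np.uint16) + 10, 255).astype(np.uint8)
```

For grey-level images the saturating variant is the useful one, as the text below explains: an over-bright pixel stays white instead of wrapping to near-black.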
More specifically, in the processing of grey level images data is represented in 8-bit quantities, hence an MMX instruction can operate on 8 pixels simultaneously. Basically, the MMX extensions provide programmers with the following new features.

MMX Registers: the MMX technology provides eight new general-purpose 64-bit registers. MMX registers have been overlapped with the floating-point registers to assure backward compatibility with existing software, and specifically with multitasking operating systems.3 Unfortunately this solution has two drawbacks: the programmer is expected not to mix MMX instructions and floating-point code in any way, and is forced to use a specific instruction (EMMS) at the end of every MMX-enhanced routine (the EMMS instruction empties the floating-point tag word, thus allowing the correct execution of floating-point operations); moreover, frequent transitions between MMX and floating-point instructions may cause significant performance degradation.

MMX Data Types: the MMX instructions can handle four different 64-bit data types: 8 bytes packed into one 64-bit quantity, 4 words packed into one 64-bit quantity, 2 double-words packed into one 64-bit quantity, or 1 quadword (64-bit). This makes it possible to process multiple data using a single instruction or to directly manage 64-bit data.

MMX arithmetic: the main innovation of the MMX technology consists in the two different methods used to process the data:
saturation arithmetic and wraparound mode. Their difference lies in how the overflow or underflow caused by mathematical operations is managed. In both cases MMX instructions neither generate exceptions nor set flags, but in wraparound mode results that overflow or underflow are truncated and only the least significant part of the result is returned; conversely, the saturation approach consists in setting the result of an operation that overflows to the maximum value of the range, while the result of an operation that underflows is set to the minimum value. For example, packed unsigned bytes that overflow or underflow are saturated to 0xFF or to 0x00 respectively. The latter approach is very useful for grey-level image processing: in fact, saturation brings grey values to pure black or pure white, without allowing for an inversion as in the former approach.

MMX instructions: MMX processors feature 57 new instructions, which may be grouped into the following functional categories: arithmetic instructions, comparison instructions, conversion instructions, logical instructions, shift instructions, data transfer instructions, and the EMMS instruction.

3. THE GOLD SYSTEM

3.1. The Inverse Perspective Mapping (IPM)

The angle of view under which a scene is acquired and the distance of the objects from the camera (namely the perspective effect) contribute to associate a different information content to each pixel of an image. The perspective effect must in fact be taken into account when processing images, in order to weigh each pixel according to its information content; this differentiated processing makes the use of a SIMD machine, such as MMX-based computers, a knotty problem.
To cope with this problem a geometrical transform (Inverse Perspective Mapping,4 IPM) has been introduced; it removes the perspective effect from the acquired image, remapping it into a new 2-dimensional domain (the remapped domain) in which the information content is homogeneously distributed among all pixels, thus allowing the efficient implementation of the following processing steps with a SIMD paradigm. Obviously the application of the IPM transform requires the knowledge of the specific acquisition conditions (camera position, orientation, optics, ...) and some assumptions on the scene represented in the image (here defined as a-priori knowledge). Thus the IPM transform can be of use in structured environments,5 where, for example, the camera is mounted in a fixed position, or in situations where the calibration of the system and the surrounding environment can be sensed via other kinds of sensors.6 Assuming the road in front of the vision system is planar, the use of IPM makes it possible to obtain a bird's eye view of the scene (fig. 2).

3.2. Extension of IPM to Stereo Vision

As a consequence of the depth loss caused by the acquisition process, the use of a single two-dimensional image does not allow a three-dimensional reconstruction of the world without the use of some a-priori knowledge. In addition, when the target is the reconstruction of the 3D space, the solution becomes more and more complex due to the larger amount of computation required by well-known approaches, such as the processing of stereo images. The traditional approach to stereo vision7 can be divided into four steps:

1. calibration of the vision system;
2. localization of a feature in an image;
3. identification and localization of the same feature in the other image;
4. 3D reconstruction of the scene.

The problem of three-dimensional reconstruction can be solved by the use of triangulations between points that correspond to the same feature (homologous points).
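Under the flat-road assumption the remapping can be sketched as an inverse warp: for every pixel of the bird's-eye image, a plane-to-image homography H (obtained from calibration; here it is just a parameter of the sketch) tells where to sample the acquired image:

```python
import numpy as np

def ipm_remap(image, H, out_shape):
    """Bird's-eye remapping: H maps remapped-domain pixel coordinates
    (x, y, 1) to source-image coordinates; pixels falling outside the
    source image are left black.  Nearest-neighbour sampling keeps the
    sketch short."""
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h_out * w_out)])
    uvw = H @ pts
    u = np.rint(uvw[0] / uvw[2]).astype(int)
    v = np.rint(uvw[1] / uvw[2]).astype(int)
    ok = (u >= 0) & (u < image.shape[1]) & (v >= 0) & (v < image.shape[0])
    out = np.zeros(h_out * w_out, dtype=image.dtype)
    out[ok] = image[v[ok], u[ok]]
    return out.reshape(out_shape)
```

For lane detection one H, derived from the camera position, orientation and optics, is applied to the whole frame; for stereo IPM each camera gets its own H, reflecting its own acquisition setup.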
Unfortunately, the determination of homologous points is a difficult task; however, the introduction of some domain-specific constraints (such as the assumption of a flat road in front of the cameras) can simplify it. In particular, when a complete 3D reconstruction is not required and the
verification of the match with a given surface model suffices, the application of IPM to stereo images plays a strategic role. More precisely, since IPM can be used to recover the texture of a specific surface (the road plane in the previous discussion), when it is applied to both stereo images (with different parameters reflecting the different acquisition setups of the two cameras) it provides two instances of the given surface, namely two partially overlapping patches. These two patches, thanks to the knowledge of the vision system setup, can be brought to correspondence, so that the homologous points share the same coordinates in the two remapped images.

Figure 2. IPM applied to a road environment: (a) 3D representation of the environment; (b) the acquired image; (c) the remapped image

Lane Detection by means of IPM

This section presents a possible solution to the problem of lane detection in images acquired from a camera installed on a mobile vehicle, which is reduced to the detection of lane markings. In this case the a-priori knowledge exploited by the IPM transform is the assumption of a flat road in front of the vehicle. The advantage offered by the use of IPM is that in the remapped image (see figure 2.c) the road marking width is almost invariant within the whole image. This simplifies the following detection steps and allows their implementation with a traditional pattern matching technique on a SIMD system. The basic assumption lane detection relies on is that after the IPM transform road markings are represented by quasi-vertical constant-width lines, brighter than their surrounding region. Hence the first step of road marking detection is a low-level processing aimed at detecting the pixels that have a higher brightness value than their horizontal neighbors at a given distance.
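The low-level step just described, comparing each pixel with its horizontal neighbours at a fixed distance m, can be sketched in NumPy (the value of m and the pixel values are illustrative):

```python
import numpy as np

def bright_line_response(remapped, m=3):
    """For every pixel, how much brighter it is than both horizontal
    neighbours at distance m; zero wherever it is not brighter than both."""
    img = remapped.astype(np.int16)
    left = np.roll(img, m, axis=1)    # neighbour m columns to the left
    right = np.roll(img, -m, axis=1)  # neighbour m columns to the right
    resp = np.minimum(img - left, img - right)
    resp[:, :m] = 0                   # columns without a valid left neighbour
    resp[:, -m:] = 0                  # columns without a valid right neighbour
    return np.clip(resp, 0, 255).astype(np.uint8)
```

Because every pixel undergoes the same comparison, the operation maps directly onto packed SIMD instructions; the vectorized NumPy expression above mirrors that structure.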
The following processing is in charge of the reconstruction of the road geometry.

Lane Markings Detection

Thanks to the removal of the perspective effect, in a remapped image lane markings are represented by almost vertical bright lines of constant width, surrounded by a darker background. Thus the first phase of lane detection is based on the search for dark-bright-dark horizontal patterns of a given size. Every pixel is compared to its left and right horizontal neighbors at a given distance and a new grey-level image is computed. This image encodes the horizontal brightness transitions and the presence of lane markings. Different illumination conditions, such as shadows or patches of sunlight, cause road markings to assume different brightness values; nevertheless, the pixels corresponding to the lane markings maintain a brightness value higher than their horizontal neighbors. In addition, taking advantage of the vertical correlation of lane markings, the image is enhanced (figure 3.a) through a few iterations of a geodesic morphological dilation.8 Different illumination conditions and the non-uniformity of painted road signs require the use of an adaptive threshold that works on a 3×3 pixel neighborhood.
Figure 3. The different steps of Lane Detection: (a) enhanced image; (b) binarized image; (c) concatenation of pixels; (d) segmentation and construction of polylines; (e) identification of the centre of the lane; (f) superimposition of the previous result onto a brighter version of the original image for displaying purposes only

Road Geometry Reconstruction

The binary image is thinned and scanned row by row in order to build chains of non-zero pixels. Each chain is approximated with a polyline made of one or more segments, by means of an iterative process: at first the two extrema of the polyline are determined. At each step of the process the segment being considered is kept as a part of the polyline if the horizontal distance between its middle point and the chain is sufficiently small; otherwise two consecutive segments are examined in its place. To get rid of possible occlusions or errors caused by noise, two or more polylines are joined into longer ones if they satisfy some criteria, such as a small distance between the nearest extrema and a similar orientation of the ending segments. When more solutions are possible in joining the polylines, all of them are considered. A road model is used to select the polyline which most likely matches the center road line. Initially the vehicle is assumed to be in a specific position (the center of the lane) on a road which, at the same time, is assumed to be almost straight. In this situation the road center line in the remapped image is a straight vertical line that is expected to be found in a circumscribed area of the remapped image. Each computed polyline is matched against this model using several parameters such as distance, parallelism, orientation, and length. The polyline that best fits these parameters is selected (figure 3.e).
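The iterative polyline approximation can be sketched as a recursive split: a segment is accepted when the chain's column at the middle row lies close enough to it, otherwise the segment is replaced by two (the tolerance value here is an illustrative assumption):

```python
def fit_polyline(chain, tol=2.0):
    """Approximate a chain [(row, col), ...] (one point per row) with a
    polyline.  A segment is kept when the chain's column at the middle
    row is within `tol` pixels of the segment; otherwise it is split."""
    def col_on_segment(p0, p1, row):
        (r0, c0), (r1, c1) = p0, p1
        t = (row - r0) / (r1 - r0)
        return c0 + t * (c1 - c0)

    def split(lo, hi):
        if hi - lo < 2:
            return [chain[lo], chain[hi]]
        mid = (lo + hi) // 2
        err = abs(chain[mid][1]
                  - col_on_segment(chain[lo], chain[hi], chain[mid][0]))
        if err <= tol:
            return [chain[lo], chain[hi]]
        # Too far from the chain: examine two consecutive segments instead.
        left = split(lo, mid)
        right = split(mid, hi)
        return left[:-1] + right    # drop the duplicated mid point

    return split(0, len(chain) - 1)
```

A straight chain collapses to a single segment (two vertices), while a bend introduces an extra vertex at the point of maximum deviation.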
Finally a new road model is computed using the selected polyline, thus enabling the system to track the road in image sequences and to adapt the road model to non-straight roads as well. Since the model assumed for the external environment (flat road) makes it possible to determine the spatial relationship between image pixels and the 3D world,4 from the previous result it is possible to derive both the road geometry and the vehicle position within the lane. Fig. 3 shows the steps of lane detection for the image shown in figure 2. Thanks to the IPM transform, the approach has demonstrated its robustness also in the case of shadows, which represent a typical critical condition.

Obstacle Detection by means of Stereo IPM

As mentioned in section 3.2, when obstacle detection means the mere localization of objects that can obstruct the vehicle's path, without their complete identification or recognition, stereo IPM can be used in conjunction with a
geometrical model of the road in front of the vehicle.2 Assuming the flat road hypothesis introduced in the previous section, IPM is performed using the same relations. This is of basic importance since in a system aimed at both obstacle and lane detection the IPM transform can be performed only once and its result can be shared by the two processes. The flat road model is checked through a pixel-wise difference between the two remapped images: in correspondence to a generic obstacle in front of the vehicle, namely anything rising up from the road surface, the difference image features sufficiently large clusters of non-zero pixels with a particular shape. Due to the different angles of view of the stereo cameras, an ideal homogeneous square obstacle produces two clusters of pixels with a triangular shape in the difference image, in correspondence to its vertical edges. Unfortunately the triangles found in real cases (see figure 4) are not so clearly defined and often not clearly disjoint, because of the texture, irregular shape, and non-homogeneous color of real obstacles. Nevertheless, clusters of pixels having an almost triangular shape are anyway recognizable in the difference image (see figure 4.e). The obstacle detection process is thus based on the localization of these triangles.

Figure 4. Obstacle detection: (a) left and (b) right stereo images; (c) and (d) the remapped images; (e) the difference image; (f) the angles of view overlapped with the difference image; (g) the polar histogram (number of non-zero pixels, normalized, vs. angle of view in degrees); and (h) the result of obstacle detection using a black marker superimposed on a brighter version of the acquired left image; the light gray area represents the road region visible from both cameras
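The flat-road check reduces to a pixel-wise comparison of the two remapped images; a minimal sketch (the threshold value is an assumption):

```python
import numpy as np

def obstacle_difference(left_remap, right_remap, thresh=20):
    """Pixel-wise absolute difference of the two remapped images,
    thresholded to a binary map.  On a truly flat road the two remapped
    views agree and the map is empty; anything rising from the road
    plane leaves clusters of non-zero pixels."""
    diff = np.abs(left_remap.astype(np.int16) - right_remap.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

The widening to int16 avoids the unsigned wraparound that a direct uint8 subtraction would produce.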
Moreover, this process is complicated by the possible presence of two or more obstacles in front of the vehicle at the same time, thus producing more than one pair of triangles, or by partially visible obstacles, thus producing a single triangle; a further processing step is therefore needed in order to group the triangles that belong to the same obstacle.

Obstacle Localization

A polar histogram is used for the detection of triangles: it is obtained by scanning the difference image with respect to a focus, considering every straight line originating from the focus itself and counting the number of over-threshold pixels lying on that line (figure 4.f). The values of the polar histogram are then normalized and a low-pass filter is applied in order to decrease the influence of noise (figure 4.g). The polar histogram presents an appreciable peak corresponding to each triangle. Peaks may have different characteristics, such as amplitude, sharpness, or width, depending on the obstacle distance, the angle of view, and the difference in brightness and texture between the background and the obstacle itself. The position of a peak within the histogram determines the angle of view under which the obstacle edge is seen. Peaks generated by the same obstacle, for example by its left and right edges, must be joined in order to consider the whole area in between as occluded.
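The polar histogram described above can be sketched as follows (the bin count and the focus placement are illustrative; the real system also low-pass filters the histogram before peak detection):

```python
import numpy as np

def polar_histogram(binary_diff, focus, n_bins=180):
    """Count over-threshold pixels along each direction seen from the
    focus, then normalise.  `focus` is (row, col), below the scene;
    angles span 0..180 degrees across the image."""
    rows, cols = np.nonzero(binary_diff)
    if rows.size == 0:
        return np.zeros(n_bins)
    dy = focus[0] - rows                       # pixels ahead of the focus
    dx = cols - focus[1]
    ang = np.degrees(np.arctan2(dy, dx))       # 0..180 for pixels above focus
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, 180.0))
    return hist / hist.max()
```

A quasi-vertical triangular cluster straight ahead of the focus concentrates its pixels near 90 degrees, producing the kind of peak the localization step looks for.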
Figure 5. Situations in which lane detection fails: (a) the road has too high a curvature (namely one of the road markings is not visible), thus producing (b) an incomplete remapped image; (c) the road is not flat, thus producing (d) a deformed remapped image

Starting from the analysis of a large number of different situations, a criterion aimed at grouping peaks has been found, which takes into account several characteristics such as the peak amplitude and width, the area they subtend, as well as the interval between them. After the peak-joining phase, the angle of view under which the whole obstacle is seen is computed considering the peak positions, amplitudes, and widths. In addition, the obstacle distance can be estimated by a further analysis of the difference image along the directions pointed out by the maxima of the polar histogram, in order to detect the triangle corners. In fact, they represent the contact points between obstacles and the road plane and thus hold the information about the obstacle distance. For each peak of the polar histogram a radial histogram is computed by scanning a specific sector of the difference image, whose width is determined as a function of the peak width.1 The number of over-threshold pixels lying in the sector is computed for every distance from the focus and the result is normalized. A simple threshold applied to the radial histogram allows the detection of the triangle corner positions and thus of the obstacle distance. The result is displayed with black markers superimposed on a brighter version of the left image; the marker positions and sizes encode both the distance and the width of the obstacles (see figure 4.h).

4. DISCUSSION

In this work the GOLD system for lane and obstacle detection has been presented. It was installed on ARGO and tested on a number of different highways, freeways and country roads in Italy.
Different processing systems have been considered as the hardware support for the GOLD system, both special-purpose and general-purpose. The architecture currently installed on ARGO is a Pentium MMX 200 MHz, which delivers very high performance (table 1 compares different hardware configurations). All these configurations reach real-time performance: since the acquisition of a single field takes 20 ms, as long as the processing takes less than 20 ms it is considered to be working in real time. In this case, the actual bottleneck of the system is the acquisition device.

                        Obstacle Detection               Lane Detection
                        Low-level  Total    % Low-level  Low-level  Total   % Low-level
PentiumPro (200 MHz)    3.6 ms     4.6 ms   78%          4.4 ms     6.4 ms  68%
Pentium (200 MHz)       7.6 ms     10.0 ms  76%          5.9 ms     8.7 ms  68%
Pentium MMX (200 MHz)   1.1 ms     5.1 ms   22%          0.8 ms     4.6 ms  17%

Table 1. Performance evaluation of the obstacle and lane detection algorithms on standard and MMX-based architectures

Regarding the qualitative performance of GOLD, obviously, when the initial assumptions are not met, namely when road markings are not completely visible due either to occlusions caused by obstacles or to too high a road curvature (fig. 5.a), or when the road is not flat (fig. 5.c), lane detection cannot produce valid results.
Figure 6. Obstacle detection changing the inclination parameter: difference image, polar histogram, and value of the inclination parameter

Figure 7. Obstacle detection changing the height parameter (h − 10 cm, h − 5 cm, h, h + 5 cm, h + 10 cm): difference image, polar histogram, and value of the height parameter

On the other hand, since obstacle detection is based on stereo vision, the quality of the results is tightly coupled to the calibration of the vision system as well. Nevertheless, since in our case the final target of obstacle detection is the determination of the free space in front of the vehicle and not the complete 3D reconstruction of the world, camera calibration becomes less critical. For this reason, even if the vehicle's movements influence some of the calibration parameters (camera height h and inclination with respect to the road plane), a dynamic recalibration of the system is not required. For comparison purposes, ranging values for the camera height (h ± 10 cm) and inclination (±1°), larger than the ones estimated by Koller et al.,10 have been considered. Figs. 6 and 7 show the results of obstacle detection emulating the changes of camera parameters caused by vehicle movements: due to the robustness of the approach based on the polar histogram, the obstacle is always detected even if the difference images are noisy. The major critical points of obstacle detection were found when: the obstacle is too far from the cameras (generally it happens in the range m), thus the polar histogram
Figure 8. Situations in which obstacle detection is critical

presents only small and isolated peaks that can hardly be joined (figs. 8.a, 8.b, and 8.c); anyway, when the obstacle distance is in the range 5-45 m, this problem has never been detected; the guard-rail is close to the obstacle and thus a single large obstacle is detected (fig. 8.d); an obstacle is partially visible, and thus only one of its edges can be detected (figs. 8.d and 8.e); some noisy peaks in the polar histogram are not filtered out, and thus they are considered as small obstacles (figs. 8.f and 8.g); the detection of far obstacles sometimes fails when their brightness is similar to that of the road (fig. 8.h).

The confidence in the detection of obstacles obviously depends on their size, distance, and shape; specific parameters are used to tune the system sensitivity.

Obstacle height: the obstacle height determines the amplitude of the peaks in the polar histogram. The bandwidth of the LPF applied to the polar histogram is the parameter used as a threshold to discard small peaks that could be caused by either noise or short obstacles: the smaller the bandwidth, the lower the influence of noise (caused by incorrect camera calibration or vehicle movements), but the larger the minimum height of detectable obstacles (in figs. 8.c and 8.d the guard-rail is detected even if it is not as tall as the vehicles).

Obstacle width: in the polar histogram two or more peaks are joined when they are sufficiently close to each other and present a similar height. The threshold used in this phase modifies the width of the widest correctly detectable object, and also the probability that peaks not generated by the same obstacle are joined (see fig. 8.d).

Obstacle distance: the farther the obstacle, the smaller the triangles generated in the difference image, and thus the lower the amplitude of the peaks in the polar histogram; nevertheless, for sufficiently tall obstacles (e.g.
vehicles at about 50 m from the cameras) the main problem is not the detection of peaks, but their joining, as shown in figs. 8.a, 8.b, and 8.c.

Obstacle shape: the algorithm was designed to detect obstacles with quasi-vertical edges; objects with non-vertical edges (e.g. pyramidal objects) generate twisted triangles that are hardly detected by the analysis of the polar histogram.

The inter-camera spacing is also a key parameter: the greater the distance between the cameras, the stronger the disparities in the remapped images due to the presence of an obstacle. Nevertheless the inter-camera spacing is bounded
12 by the vehicle physical structure, thus the cameras were installed at the maximum allowed distance. Unfortunately, a too large separation leads to a higher sensitivity to vehicle movements, in particular rolling. During the tests, the system demonstrated to be robust and reliable: other vehicles were always detected and only in few cases (i.e. on paved or -more generally- rough roads) vehicle movements became so considerable that the processing of noisy remapped images led to the erroneous detection of false small sized obstacles. Nevertheless, since the vehicle's movements have a small frequency, these small-sized obstacles appear only in very few consecutive frames, and can be easily removed thanks to a temporal averaging lter. On the other hand, thanks to the remapping process, lane markings were located even in presence of shadows or other artifacts on the road surface; anyway, although it is hard to devise a method to evaluate the percentage of successful lane detection, some unocial tests showed that the system detects the correct position of the lane in about 95 % of the considered situations. An extension to the IPM technique is now under evaluation: thanks to the information obtained from pairs of stereo images, it is possible to derive the height of homologous points in the image using simple triangulations. The algorithm selects features of the image that belong to the road plane (in this implementation it selects road markings) and determines their height with respect to a at road model. In this way it is possible to measure the road slope and recalibrate the IPM procedure according to the new road model. The road model is updated once every frames (about once per second) Moreover, an extension to the GOLD system able to exploit temporal correlations and to perform a deeper datafusion between the two functionalities of lane detection and obstacle detection is currently under implementation 11 on ARGO. REFERENCES 1. M. Bertozzi and A. 
Broggi, "GOLD: a Parallel Real-Time Stereo Vision System for Generic Obstacle and Lane Detection," IEEE Transactions on Image Processing 7, pp. 62-81, January 1998.
2. M. Bertozzi, A. Broggi, and A. Fascioli, "Stereo Inverse Perspective Mapping: Theory and Applications," Image and Vision Computing Journal 16, pp. 585-590, 1998.
3. Intel Corporation, MMX Technology Programmers Reference Manual, Intel Corporation.
4. H. A. Mallot, H. H. Bulthoff, J. J. Little, and S. Bohrer, "Inverse perspective mapping simplifies optical flow computation and obstacle detection," Biological Cybernetics 64, pp. 177-185, 1991.
5. D. A. Pomerleau, "RALPH: Rapidly Adapting Lateral Position Handler," in Proceedings IEEE Intelligent Vehicles '95, I. Masaki, ed., pp. 506-511, IEEE Computer Society, (Detroit), September 1995.
6. K. Storjohann, T. Zielke, H. A. Mallot, and W. von Seelen, "Visual Obstacle Detection for Automatically Guided Vehicle," in Proceedings of the IEEE International Conference on Robotics and Automation, vol. II, pp. 761-766, 1990.
7. O. Faugeras, Three-Dimensional Computer Vision: A Geometric Viewpoint, The MIT Press, 1993.
8. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982.
9. A. Broggi and S. Bertè, "Vision-Based Road Detection in Automotive Systems: a Real-Time Expectation-Driven Approach," Journal of Artificial Intelligence Research 3, pp. 325-348, December 1995.
10. D. Koller, J. Malik, Q.-T. Luong, and J. Weber, "An integrated stereo-based approach to automatic vehicle guidance," in Proceedings of the Fifth ICCV, pp. 12-20, (Boston), 1995.
11. M. Bertozzi, A. Broggi, and A. Fascioli, "Obstacle and Lane Detection on ARGO," in Proceedings IEEE Intelligent Transportation Systems Conference '97, (Boston, USA), November 1997.

*These results do not take into account an exhaustive set of road conditions.
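As an illustrative aside, the temporal filtering discussed above (discarding the spurious small obstacles that vehicle oscillations introduce for only a few consecutive frames) can be sketched in a few lines of Python. This is not the GOLD implementation: the class name, the window size, the minimum-hit count, and the position-matching tolerance are all hypothetical choices made here for illustration.

```python
# Illustrative sketch of temporal persistence filtering for obstacle
# detections (hypothetical parameters, not from the GOLD system).
from collections import deque


class TemporalObstacleFilter:
    """Confirm an obstacle only if it is detected in at least
    `min_hits` of the last `window` frames; isolated one-frame
    detections (e.g. caused by vehicle rolling) are suppressed."""

    def __init__(self, window=5, min_hits=3, tolerance=1.0):
        self.window = window          # number of frames remembered
        self.min_hits = min_hits      # detections needed for confirmation
        self.tolerance = tolerance    # max position difference (m) to match
        self.history = deque(maxlen=window)

    def update(self, detections):
        """detections: list of (x, y) obstacle positions for this frame.
        Returns only the detections confirmed by temporal persistence."""
        self.history.append(detections)
        confirmed = []
        for (x, y) in detections:
            # Count how many remembered frames contain a nearby detection.
            hits = sum(
                any(abs(x - px) <= self.tolerance and
                    abs(y - py) <= self.tolerance
                    for (px, py) in frame)
                for frame in self.history
            )
            if hits >= self.min_hits:
                confirmed.append((x, y))
        return confirmed
```

For example, a vehicle detected at roughly the same position for five consecutive frames is confirmed, while a false obstacle appearing in a single frame never reaches the minimum-hit count and is dropped.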