Camera Behavior Models for ADAS and AD functions with Open Simulation Interface and Functional Mockup Interface


Kmeid Saad, Stefan-Alexander Schneider
Master's Course of Advanced Driver Assistance Systems, University of Applied Sciences Kempten, Germany, {kmeid.saad, stefan-alexander.schneider}@hs-kempten.de

Abstract

Advanced driver assistance systems (ADAS) and Autonomous Driving (AD) not only provide comfort to the driver but also hold great potential for future mobility and tend to increase traffic and car safety. All ADAS and AD functions, and especially those associated with high safety levels, require paradigm-changing approaches for homologation: some ADAS and AD functions require up to about 200 million km [1] of real drive testing for qualification. This amount of real drive testing is not feasible for any OEM, and there is therefore a strong need for a hybrid test strategy in which the performed real drive tests take credit from a virtual campaign evaluation. Such a combined real and virtual test strategy could reduce the effort necessary for the qualification of a given function development and its validation. This homologation method is not new and is already applied in the homologation of vehicle dynamics functions, like the Electronic Stability Program (ESP) [2]. To support virtual testing, the triangle of the driver, the vehicle and the environment has to be modeled for simulation. The interface between the vehicle and the environment, i.e. a sensor (a device that transforms physical information into electrical signals), is of crucial importance for the ADAS and AD function, because it replaces the driver in the car step by step. This paper focuses on sensor behavior models, specifically a camera behavior model, as part of a new tool chain method that combines integration platforms and authoring tools via the Open Simulation Interface [8] and the Functional Mockup Interface [9].
The integration follows the concept of integrating a Functional Mockup Unit (FMU) with a specific add-on for the environment semantics provided by OSI. Such a container is called in the following an Open Simulation Functional Mockup Unit (OFU). An OFU with, e.g., a camera behavior model will be able to provide a simulation model with a more realistic Video Data Stream (VDS) for image-based ADAS and AD functions and the necessary Vehicle Detection (VD). This paper also provides an outlook on how to use this basic architecture. Keywords: OSI, FMI, OFU, virtual testing, camera behavior model, image-based ADAS and AD functions.

1 Introduction

Reliable ADAS and AD functions are of crucial importance today. However, with so many homologation requirements to be fulfilled and with vehicle complexity constantly increasing, the validation of ADAS and AD functions is becoming more and more time consuming and very expensive. No wonder that automotive suppliers are forced to deliver sensor behavior models to the automotive OEMs. The testing effort often consumes more than 50% of the overall development effort [2]. Late fault and failure detection normally leads to huge correction and additional maintenance costs. Reducing quality or compromising on functional safety for ADAS and AD functions is not acceptable. Therefore, the main problem remains how to improve test coverage for a prescribed product functionality and reliability. One way to overcome this challenge is to introduce and/or increase the amount of Virtual Testing (VT) in ADAS or AD function development. This results in a hybrid test strategy of real and virtual testing. VT methods replace real drive tests with virtual tests of the ADAS or AD function, instead of testing the real car itself. In VT, functions can be characterized, and their performance can be predicted, via the use of simulation models.
Therefore, VT can also support ADAS and AD function validation and function development as well as homologation, where in a qualification process the virtual tests can support, integrate with, or even replace the actual car test as well as the actual car component. By verifying the adequacy of the modeling results obtained from VT and comparing them to real drive behavior, it is argued that this will lead to savings in both development resources and development time, by partially or totally relying on the results of the corresponding VT [4].

In some cases, modeling ADAS sensors can turn out to be a very challenging task, especially when considering the various modeling approaches needed to replicate the actual sensor's behavior as closely and reliably as possible. Looking, e.g., at the most commonly used ADAS and AD sensors, we can identify common and specific factors that affect a sensor's behavior:

1. Camera, radar and lidar are affected by: a. target distance, b. field of view, and c. housing and mounting position.
2. Camera: a. visibility due to the weather situation, and b. night/day time.
3. Radar: a. antenna diagrams, b. ducting/echoes, and c. resolution (angle/distance/speed).
4. Lidar: a. number of beams and angular resolution, b. object characteristics and material composition, and c. noise resistance.

These factors must be provided not only as valid input to our sensor models, but also in the most standardized and interchangeable way; a perfect justification for the OFU. The Functional Mockup Interface (FMI) supports the development of products governed by a complex set of physical laws that can be represented by virtual products assembled from a set of digital models and control systems. Each set represents a combination of parts capable of simulating the real product's functionality [5]. The FMI standard thus provides the means for model-based development of systems; e.g., it can be used in designing ADAS and AD functions that are driven by electronic devices inside vehicles (e.g. ESP controllers, multi-functional cameras, radars). Activities from systems modeling, simulation, validation and test can be covered with the FMI-based approach. The Open Simulation Interface (OSI) contains an object-based environment description using the message format of the Protocol Buffers library developed and maintained by Google [6]. OSI consists of two individual top-level messages defining the so-called Ground Truth Interface (GTI) and the Sensor Data Interface (SDI).
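OSI defines these messages in Protocol Buffers; purely as an illustration of the GTI/SDI split (the classes below are simplified stand-ins, not the actual osi3 schema), the same world state can be expressed once in global coordinates and once in a sensor frame:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectState:
    """Simplified stand-in for an OSI moving object (not the real osi3 schema)."""
    object_id: int
    x: float  # position, metres
    y: float

@dataclass
class GroundTruth:
    """Exact world state in global coordinates (cf. the GTI)."""
    objects: list = field(default_factory=list)

def to_sensor_frame(gt, sensor_x, sensor_y):
    """Re-express ground-truth objects relative to the sensor mounting
    position (cf. the SDI) -- here a pure translation, no rotation or noise."""
    return [ObjectState(o.object_id, o.x - sensor_x, o.y - sensor_y)
            for o in gt.objects]

gt = GroundTruth(objects=[ObjectState(1, 50.0, 2.0)])
sensor_view = to_sensor_frame(gt, sensor_x=2.0, sensor_y=0.0)
print(sensor_view[0].x)  # the object appears 48 m ahead of the sensor
```

A real SDI message would additionally carry sensor-specific effects (field of view clipping, detection noise); the translation above only illustrates the change of reference frame.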
The GTI gives an exact view of the simulated objects in a global coordinate system. The SDI describes the objects in the reference frame of a sensor for environmental perception [7]. A Multi-Functional Camera (MFC), e.g., is a specific ADAS and AD product capable of implementing numerous functions like vehicle detection, lane departure warning and traffic sign detection, thus making everyday driving safer. The contribution of new ADAS and AD functions to accident prevention is increasing. According to the NHTSA in the US, a statistical projection of traffic fatalities for the first half of 2016 shows that an estimated 17,775 people died in motor vehicle traffic crashes (NHTSA, 2016). Many of those accidents could have been prevented by correct recognition of valid targets, for instance with VD. The VD function constantly scans the field of view (FOV) in front of the ego car, trying to find valid targets, i.e. vehicles. Considering that some VD methods may depend highly on specific pattern aspect ratios or features, and may estimate the distance to a vehicle using the pixel distance between a detected vehicle and the hood, it is clear how a more realistic VDS is of the essence for a VD virtual testing approach. Further in this paper, we demonstrate how the VT approach used for environment simulation and the FMI approach used for physical camera model and ADAS and AD function integration can be combined to aid in the process of developing, testing and even validating ADAS and AD functions (VD).

2 Specific Aims

A series of studies is proposed to investigate the use of the VT approach and the FMI approach in testing and validating a newly developed ADAS and AD function coupled with sensor models, i.e. the physical camera behavioral model and VD in our case. First, we intend to study the virtual testing approach by selecting an appropriate integration platform capable of implementing OFUs.
The integration platform should be capable of interacting with the implemented OFU, providing it with all necessary inputs coming from the virtual platform, and be able to receive the necessary outputs from the OFU. Second, we shall study the possibilities of creating an OFU based on the FMI capable of integrating a camera behavior model, defining all of its necessary inputs from the outside environment (both real and virtual) and its specific outputs for function testing and validation.

Another OFU representing the VD function will be created. The VD-OFU should be able to interact with the camera behavior model OFU. The VDS provided by the camera behavior model OFU is meant to reflect the behavior of a realistic camera to a certain degree by applying relevant image distortions and aberrations to the ideal VDS. The manipulated VDS, and not the ideal VDS generated by the virtual simulation/integration platform, will then be the main input for the VD-OFU.

2.1 VT Approach

The aim of this approach is to investigate the capabilities of an integration platform in reflecting specific environmental details in a digital/virtualized format. This virtualized data should respect a certain level of abstraction necessary for various function simulations and validations. The integration platform must be capable of integrating OFU components into its environment, more specifically OFU components representing a camera behavior model and the VD function. In addition to integrating OFUs, we should be able to generate various test scenarios reflecting possible real-life situations. Integration platforms should also provide a certain level of reliability so that all test results can be used further on for early design modification and for satisfying various homologation requirements.

2.2 FMI Approach

Based on the FMI standard, various OFU units can easily interact with each other and with the integration platform. The camera behavior model OFU and the VD-OFU will be integrated in the same integration platform, sharing inputs and outputs, as shown in the following figure.

Figure 1. Example of the basic architecture of OFUs: the integration/simulation platform provides the ideal VDS and OSI data to OFU 1 (the physical camera behavioral model), which passes the manipulated VDS and OSI data to OFU 2 (VD).
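The data flow of Figure 1 can be sketched as two chained stages; the functions below are toy stand-ins for the two OFUs, only meant to illustrate the manipulated-VDS hand-over, not our actual models:

```python
def camera_model_ofu(ideal_vds):
    """OFU 1 stand-in: turn the ideal frame into a 'manipulated' frame.
    Here a toy effect: darken every pixel by one intensity step."""
    return [[max(p - 1, 0) for p in row] for row in ideal_vds]

def vd_ofu(manipulated_vds, thresh=5):
    """OFU 2 stand-in: a toy detector that counts bright pixels
    as 'vehicle evidence' instead of running a real classifier."""
    return sum(p >= thresh for row in manipulated_vds for p in row)

# The VD stage consumes the manipulated VDS, not the ideal one.
ideal = [[6, 6, 0], [0, 6, 0]]
detections = vd_ofu(camera_model_ofu(ideal))
print(detections)  # 3
```

The essential point is the composition: the VD-OFU never sees the ideal VDS directly, so any degradation the camera model introduces is reflected in the detection result.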
2.3 Vehicle Detection (VD) Function (machine learning based)

The aim of the VD function is to detect vehicles inside the FOV of the imaging device and estimate the distance between them and the ego car, outputting a vehicle object list (OSI). By using standard computer vision and machine learning algorithms, it is possible to analyze a camera's video data and find signs of vehicles. This can be done by the following algorithm: 1) extract training features for vehicle and non-vehicle objects (can be done offline), 2) train the model using linear support vector classification (can be done offline), 3) implement a sliding-window approach to scan each frame, and 4) use the extracted windows and the trained model to generate an object list (OL) with the detected vehicles.

3 Research and Design Methods

In this section, we present in detail the research design and methodology for the proposed approaches. For the VT approach, we used CarMaker from IPG as the integration platform, whereas for the FMI approach we describe the development of the camera behavior model and a VD function into separate OFU units and how they are integrated in CarMaker.

3.1 CarMaker in the VT Approach

An integration platform (IP) is a development environment, e.g. a software tool, that enables the integration of various components like the driver, the vehicle or the sensor behavior, which together describe a system in its use, like a car on the motorway. IPs are the most important element in the VT approach. CarMaker is an example of an IP on today's market. It represents an open integration and test platform and enables a wide spectrum of applications, including classic vehicle dynamics simulation. In CarMaker, the virtual vehicle contains almost all parts of a real vehicle, including powertrain, tires, brakes, etc. It is also possible to integrate real automotive controllers, e.g. ABS, ESP, ACC, or software-modeled controllers.
The basic test scenario created in CarMaker includes the configuration of: 1) a demo car (integrating the OFUs), 2) a test track, 3) a driving maneuver (speed, acceleration, braking force, etc.) and 4) environmental factors (day, night, rain, fog, position of the sun, etc.).
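Such a scenario configuration can be pictured as a simple declarative structure; the field names below are purely illustrative and are not CarMaker's actual file format:

```python
# Hypothetical, simplified scenario description -- NOT CarMaker's format.
scenario = {
    "vehicle": {"model": "DemoCar", "ofus": ["camera_model", "vd"]},
    "track": "highway_2lane",
    "maneuver": [
        {"t": 0.0,  "speed_kmh": 0,   "action": "accelerate"},
        {"t": 10.0, "speed_kmh": 100, "action": "hold"},
        {"t": 30.0, "speed_kmh": 100, "action": "brake"},
    ],
    "environment": {"time_of_day": "day", "rain": False, "fog": False,
                    "sun_elevation_deg": 35},
}

# A scenario covers all four configuration blocks listed above.
required = {"vehicle", "track", "maneuver", "environment"}
print(required <= scenario.keys())  # True
```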

3.2 Camera Behavior Model as an OFU

The VDS is obtained from CarMaker by positioning a virtual camera in the ego car, defined by the mounting position. The initial VDS generated by the virtual camera represents an ideal image of the artificial environment included in the camera's FOV. The main purpose of the camera model OFU is to take the original VDS and apply image transformation algorithms to it, thus generating a more realistic video data stream (VDS-OFU). At this point, it is important to point out that although all image processing techniques will be implemented directly inside the OFU, the ideal VDS will be provided over a TCP or UDP connection between the OFU and the integration platform CarMaker, whereas all other ground truth data, vehicle dynamics and other sensor data (if needed) will be provided via the standardized OSI buffer. Two main technical stages of a camera are modeled by the corresponding OFU, the so-called "optical acquisition" and "image acquisition":

1. Optical acquisition represents the first contact of the camera with the analog environment, where a system of lenses collects and focuses light in order to project it onto an active sensor grid.
2. Image acquisition represents the second contact of the camera with the analog environment, which is already provided via optical acquisition as focused and concentrated light waves over the active sensor grid. At this stage, the analog signal is transformed into a digital signal corresponding to its intensity.

The VDS-OFU reflects the relevant image distortions and aberrations applied to the ideal VDS. In our current demonstration, we focus on modeling the following effects (additional effects can be implemented similarly, as long as they are defined by algorithms using the VDS and OSI):

Image Distortions

Here, we classify distortion into radial and tangential distortion:
Radial distortion occurs when light rays bend more near the edges of a lens than they do at its optical center. Figure 2. Radial distortions (see mathworks.com).

Tangential distortion occurs when the lens and the image plane are not parallel. Figure 3. Tangential distortions (see mathworks.com).

Lens Blur

In an ideal situation, each small point within the object would be represented by a small, well-defined point within the image. In reality, the "image" of each object point is spread, or blurred, within the image. This places a definite limit on the amount of detail (object smallness) that can be visualized. Figure 4.a. Ideal image. Figure 4.b. Blurred image (taken with the camera).

Lens Flare

Lens flare is an unintended effect caused by rays passing through the camera lens in an unintended way. It is due to inter-reflections between the elements of a compound lens and to diffraction, and it adds various artifacts to photos, such as multiple ghosts.
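The radial and tangential components described above are commonly combined in the Brown-Conrady distortion model; a minimal sketch on normalized image coordinates (the coefficient values below are made up for illustration, not our camera's calibration):

```python
def distort(x, y, k1, k2, p1, p2):
    """Brown-Conrady model on normalized coordinates:
    radial terms scale points with r^2 and r^4 (lens-edge bending);
    tangential terms account for a lens not parallel to the image plane."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# The optical centre is unaffected; an off-centre point is pulled inward
# here because k1 < 0 (barrel distortion).
print(distort(0.0, 0.0, k1=-0.2, k2=0.05, p1=0.001, p2=0.001))  # (0.0, 0.0)
print(distort(0.5, 0.5, k1=-0.2, k2=0.05, p1=0.001, p2=0.001))
```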

Figure 5. Multiple ghosts due to lens flare (taken with a camera).

Vignetting

The vignetting effect may be described as the reduction of an image's brightness at the periphery compared to the image center. Vignetting is caused by camera settings or lens limitations. Figure 6. Vignetting image from an integrating sphere (taken with a camera).

3.3 VD Function as an OFU

By integrating the VD function in an OFU, it is possible to transfer it to several systems and integration platforms; e.g., it can also be tested with a real camera when the input is mapped to the VD-OFU. From a black-box perspective, the VD functionality can be separated into three essential parts: VDS input, VD processing, and OL output.

3.3.1 VDS Input

To be able to detect, e.g., vehicles, a multi-functional camera should analyze the situation ahead. The video stream provided by the camera behavior model (VDS-OFU) represents the main input for the VD-OFU detection.

3.3.2 VD Processing

These frames are analyzed by the VD function as mentioned above. Computer vision libraries are used for the image analysis. The VD algorithm is capable of detecting vehicles at varying distances from the ego car and with different color and dimension properties. The VD is also capable of estimating the distance between the ego car and the detected vehicle.

3.3.3 OL Output

The VD function outputs an OL with the number of detected vehicles, estimated distances to targets and a possible in-lane flag indicating whether the detected vehicle is in the ego car's driving lane or not. Inattentive drivers could be warned of a suddenly emerging dangerous situation. It is also conceivable to support an emergency brake assistant by initiating an emergency braking maneuver when the distance to the next car is critical. Provided with training data, Histogram of Oriented Gradients (HOG) features were extracted and then used to train a support vector classifier.
Later, a sliding-window approach is implemented in which overlapping tiles from each frame are classified as vehicle or non-vehicle. Finally, heat maps showing the locations of repeated detections help to identify detections that are found at or near the same location in several subsequent frames, thus minimizing the false positive rate. A similar approach is also used to generate bounding boxes where multiple overlapping high-confidence detections occur and to estimate the distance between the detected vehicles and the ego car. The VD algorithm is described in the following flow chart. Figure 7. VDS input, VD processing, and OL output (input VDS → vehicle detection (OFU) → object list).
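The heat-map step for false-positive reduction can be sketched as follows: each positive window adds "heat" to the pixels it covers, and only pixels whose accumulated heat exceeds a threshold survive (the grid size and threshold below are illustrative):

```python
def add_heat(heat, boxes):
    """Accumulate one frame's positive windows into the heat map."""
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                heat[y][x] += 1
    return heat

def apply_threshold(heat, thresh):
    """Keep only pixels detected often enough; zero out the rest."""
    return [[v if v >= thresh else 0 for v in row] for row in heat]

heat = [[0] * 8 for _ in range(8)]
# The same region fires in three consecutive frames; a spurious box once.
for _ in range(3):
    add_heat(heat, [(1, 1, 4, 4)])
add_heat(heat, [(5, 5, 7, 7)])
hot = apply_threshold(heat, thresh=2)
print(hot[2][2], hot[6][6])  # 3 0 -- repeated detection survives, one-off is removed
```

Bounding boxes are then drawn around the connected regions that remain hot, which is why isolated single-frame detections disappear from the final object list.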

4 Preliminary Results

Considering the current state of the research, with a lot of potential, in this section we present the first results obtained with our camera physical modeling approach, an OFU performing edge detection and our machine-learning-based VD algorithm.

Figure 8. Flow chart for VD. Training: training set (vehicle, non-vehicle) → feature extraction (color features, gradient-based features) → train a classifier (linear SVC). Processing (VD): frame from the camera model → search for vehicles (sliding-window technique) → feature extraction (color features, gradient-based features) → classifier (linear SVC) → reduce false positives (heat-map technique) → object list (bounding boxes, distance estimation).

4.1 Camera Physical Behavioral Model

Figure 9. Ideal frame showing no distortion. Figure 10. Distorted frame, output of the sensor model. By closely examining figures 9 and 10, we can see the effects of radial and tangential distortion, where a clear shift in the pixel positions takes place. It is important to point out that this shift was not generated arbitrarily, but by the actual computation of the camera's intrinsic calibration matrix.

Figure 11. Pixel difference, output of the sensor model. Figure 11 shows the maximum diagonal shift in pixels; this indicates that some objects in the camera's FOV may appear closer to the camera's center of projection by that many pixels.

Figure 12. Ideal frame to the left, blurred frame to the right (output of the camera model).
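Blur of this kind is typically modeled by convolving the ideal frame with the camera's measured blur kernel (its point spread function); a minimal sketch, using a normalized 3x3 box kernel as a stand-in for the real measured kernel:

```python
def blur(img, kernel):
    """Convolve a grayscale frame with a normalized 3x3 blur kernel
    (a stand-in for the camera's measured PSF); borders are left as-is."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

box = [[1 / 9] * 3 for _ in range(3)]    # normalized 3x3 box kernel
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]  # a single bright point
print(blur(img, box)[1][1])  # the point's energy is spread by the kernel
```

Replacing `box` with the kernel computed from a real camera reproduces that camera's specific blur, which is what figure 12 demonstrates.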

Another important camera feature is its ability to resolve a necessary level of detail. Lens train properties and minor imperfections sum up, in some cases, to produce an undesired blurry image. In figure 12 we show how, after computing our camera's blur kernel, we were able to replicate this phenomenon with our camera model, thus obtaining an image that is more realistic and more similar to the real camera's image.

Figure 13. Ideal frame (solid background). Figure 14. Vignetting effect, output of the sensor model. In figures 13 and 14 we demonstrate how our image would look with and without the vignetting effect. Examining different levels of average pixel intensity, we can clearly see how crucial it is to implement the vignetting effect, especially for functions that are color or intensity based.

In figure 15 we show our initial attempt to model the ghost effect. Figure 15.a. Ghosting, result of lens flare (early model); the red border represents the camera's FOV. The goal is to show that such ghost artifacts are very important to study and reproduce in camera models: judging by their diameter, which is constant, this effect can totally obscure a vehicle, or any other traffic participant, inside the camera's FOV. This is demonstrated in figures 15.b and 15.c. Figure 15.b. Ghosting, car distance (10-15) m. Figure 15.c. Ghosting, obscured car, distance (10-15) m.

4.2 VD Algorithm

In this section we show some of our results in developing the VD algorithm. It is important to emphasize that developing the VD algorithm ourselves, and thus possessing the source code and all of its specific and detailed features, will be of great advantage for us in the upcoming camera model validation process. In figures 16 and 17 we can see the Histogram of Oriented Gradients (HOG) represented for a vehicle and a non-vehicle example, respectively.
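The gradient-orientation idea behind these HOG features can be sketched as follows: compute per-pixel gradients and accumulate their orientations, weighted by magnitude, into a coarse histogram (the 4-bin layout and tiny frame are illustrative; a real HOG uses many cells, more bins and block normalization):

```python
import math

def orientation_histogram(img, bins=4):
    """Accumulate gradient orientations (0..180 deg, unsigned) of all
    interior pixels into a coarse histogram -- the core idea behind HOG."""
    h, w = len(img), len(img[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    return hist

# A vertical step edge puts all its gradient energy into the first bin.
img = [[0, 0, 9, 9] for _ in range(4)]
print(orientation_histogram(img))  # [36.0, 0.0, 0.0, 0.0]
```

Vehicle and non-vehicle patches produce characteristically different histograms, which is what the linear SVC learns to separate.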

Figure 16. Vehicle feature extraction. Figure 17. Non-vehicle feature extraction.

In figure 18 we can see the sliding-window approach implemented in the scope of searching for car features. The differently colored boxes represent a number of overlapping search boxes. Here it is important to point out the position of the white car, at the very right margin of the FOV, exactly where the distortion effects are most significant. Figure 18. Sliding-window approach.

In figure 19 we can see some of the VD results: starting from the first column, the initial frames are presented, then the heat map, based on multiple detections over a certain number of frames (used to decrease the probability of false positives), and finally, in the last column, the bounding boxes around the successfully detected vehicles. Figure 19. Heat map and bounding boxes.

4.3 OFU VDS

In this part we demonstrate the possibility of transferring a VDS to an OFU running an edge detection algorithm. In figure 20 we can see the IPG Movie representing a VDS of a specific CarMaker testing scenario. In this scenario, the ego car is configured with a VDS-OFU whose main task is to collect the simulation's ground truth and sensor data (the ideal VDS in this case). Figure 20. Testing scenario in CarMaker (ideal VDS).

Figure 21 represents the output of an edge detection algorithm that we implemented inside the VDS-OFU, thus demonstrating the following: 1. a successful VDS transfer from the simulation environment to the VDS-OFU, 2. a successful computer vision algorithm implementation inside the VDS-OFU, and 3. a successful VDS output from the VDS-OFU.
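An edge-detection pass of this kind can be reduced to the classic Sobel gradient operator; a minimal pure-Python sketch on a grayscale frame (the threshold is illustrative):

```python
def sobel_edges(img, thresh=2):
    """Convolve with the 3x3 Sobel kernels and mark pixels whose
    gradient magnitude (|gx| + |gy| approximation) exceeds the threshold."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            edges[y][x] = 1 if abs(gx) + abs(gy) >= thresh else 0
    return edges

# A vertical step edge in a 5x6 frame is detected along the step.
img = [[0, 0, 0, 9, 9, 9] for _ in range(5)]
e = sobel_edges(img)
print(e[2])  # [0, 0, 1, 1, 0, 0]
```

In the OFU, the same logic is applied per frame of the incoming VDS, and the marked-edge frame is what is emitted as the OFU's output stream.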

Figure 21. Edge detection performed and OFU output.

5 A Short Look into Validation

As mentioned before, we intend to use our tool chain not only for ADAS and AD function development and virtual testing, but also for sensor model validation and, later on, for ADAS and AD function validation. To that end, we have started to build our first test bed prototype, to be used for camera model validation. Figure 22. Test bed prototype for camera model validation. The test bed represented in figure 22 is capable of controlling the camera's pitch, yaw and roll angles and one additional translation, all of which can easily be controlled via a dedicated desktop interface, as represented in figure 23. Figure 23. Dedicated test bed control interface. Further on, with our adequate measurement interface, the previously mentioned test bed, the virtual simulation tools and our OFUs for our sensor models and ADAS and AD functions, we will be ready to extend our tool chain to validation.

6 Conclusion

In this paper, we intended to show that virtual simulation is a very important pillar of ADAS and AD function development and validation. In order to successfully virtualize this process, we showed that sensor behavior models and standardized interfaces, especially OSI and FMI, are of the essence. Several challenges in design and sensor behavior model validation are still to come; however, we did manage to show the evident difference in image quality between the ideal VDS and the camera behavior model VDS. We also managed to show that an OFU is capable of receiving and processing a VDS, thus demonstrating that this is easily extendable from computer vision algorithms to machine-learning-based detection implementations or localization and control functions. Our upcoming tasks are to further develop the camera sensor model and integrate it with the machine learning VD algorithm as OFUs, thus performing much more complicated tasks.
Future work will also apply this new method to other sensors, not only cameras but also radars, lidars, etc., and to the validation of the corresponding sensor behavior models against the real sensor behavior.

References

[1] Handbook of Driver Assistance Systems, Editors: Winner, H., Hakuli, S., Lotz, F., Singer, C.
[2] Elberzhager, F., Rosbach, A., Münch, J., & Eschbach, R.: Inspection and Test Process Integration Based on Explicit Test Prioritization Strategies, 4th International Conference, SWQD.
[3] Keynote speaker L. Rogowski from Continental at the IPG Open House 2015, see automotive.com/de/veranstaltungen/open-house-2018/rueckblick
[4] C. Cifaldi: Virtual Test and Engineering Simulation in Aerospace & Defense.
[5] Blochwitz, M., Otter, M., Arnold, M., Bausch, C., Clauß, C., Elmqvist, H., Peetz, J.-V., Wolf, S.: The Functional Mockup Interface for Tool Independent Exchange of Simulation Models.
[6]
[7]
[8] Open Simulation Interface OSI, see

[9] Functional Mockup Interface, see


More information

Creating a distortion characterisation dataset for visual band cameras using fiducial markers.

Creating a distortion characterisation dataset for visual band cameras using fiducial markers. Creating a distortion characterisation dataset for visual band cameras using fiducial markers. Robert Jermy Council for Scientific and Industrial Research Email: rjermy@csir.co.za Jason de Villiers Council

More information

12X Zoom. Incredible 12X (0.58-7X) magnification for inspection of a wider range of parts.

12X Zoom. Incredible 12X (0.58-7X) magnification for inspection of a wider range of parts. Incredible 12X (0.58-7X) magnification for inspection of a wider range of parts. Telecentric attachment gives you the world s first parfocal telecentric zoom lens with field coverage up to 50 mm. Increased

More information

For 3CCD/3CMOS/4CCD Line Scan Cameras. Designed to be suitable for PRISM based 3CCD/CMOS/4CCD line scan cameras

For 3CCD/3CMOS/4CCD Line Scan Cameras. Designed to be suitable for PRISM based 3CCD/CMOS/4CCD line scan cameras BV-L series lenses For 3CCD/3CMOS/4CCD Line Scan Cameras Common Features Designed to be suitable for PRISM based 3CCD/CMOS/4CCD line scan cameras New optics design to improve the longitudinal chromatic

More information

Simulation: A Must for Autonomous Driving

Simulation: A Must for Autonomous Driving Simulation: A Must for Autonomous Driving NVIDIA GTC 2018 (SILICON VALLEY) / Talk ID: S8859 Rohit Ramanna Business Development Manager Smart Virtual Prototyping, ESI North America Rodolphe Tchalekian EMEA

More information

IMPROVING ADAS VALIDATION WITH MBT

IMPROVING ADAS VALIDATION WITH MBT Sophia Antipolis, French Riviera 20-22 October 2015 IMPROVING ADAS VALIDATION WITH MBT Presented by Laurent RAFFAELLI ALL4TEC laurent.raffaelli@all4tec.net AGENDA What is an ADAS? ADAS Validation Implementation

More information

Geo-location and recognition of electricity distribution assets by analysis of ground-based imagery

Geo-location and recognition of electricity distribution assets by analysis of ground-based imagery Geo-location and recognition of electricity distribution assets by analysis of ground-based imagery Andrea A. Mammoli Professor, Mechanical Engineering, University of New Mexico Thomas P. Caudell Professor

More information

SIMULATION ENVIRONMENT

SIMULATION ENVIRONMENT F2010-C-123 SIMULATION ENVIRONMENT FOR THE DEVELOPMENT OF PREDICTIVE SAFETY SYSTEMS 1 Dirndorfer, Tobias *, 1 Roth, Erwin, 1 Neumann-Cosel, Kilian von, 2 Weiss, Christian, 1 Knoll, Alois 1 TU München,

More information

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc.

Minimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc. Minimizing Noise and Bias in 3D DIC Correlated Solutions, Inc. Overview Overview of Noise and Bias Digital Image Correlation Background/Tracking Function Minimizing Noise Focus Contrast/Lighting Glare

More information

STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION

STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION C. Schwalm, DESY, Hamburg, Germany Abstract For the Alignment of the European XFEL, a Straight Line Reference System will be used

More information

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)

(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22) Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application

More information

Designing a Site with Avigilon Self-Learning Video Analytics 1

Designing a Site with Avigilon Self-Learning Video Analytics 1 Designing a Site with Avigilon Self-Learning Video Analytics Avigilon HD cameras and appliances with self-learning video analytics are easy to install and can achieve positive analytics results without

More information

Extending the IPG CarMaker by FMI Compliant Units

Extending the IPG CarMaker by FMI Compliant Units Extending the IPG CarMaker by FMI Compliant Units Stephan Ziegler and Robert Höpler Modelon GmbH München Agnes-Pockels-Bogen 1, 80992 München, Germany {stephan.ziegler,robert.hoepler}@modelon.com Abstract

More information

Map Guided Lane Detection Alexander Döbert 1,2, Andre Linarth 1,2, Eva Kollorz 2

Map Guided Lane Detection Alexander Döbert 1,2, Andre Linarth 1,2, Eva Kollorz 2 Map Guided Lane Detection Alexander Döbert 1,2, Andre Linarth 1,2, Eva Kollorz 2 1 Elektrobit Automotive GmbH, Am Wolfsmantel 46, 91058 Erlangen, Germany {AndreGuilherme.Linarth, Alexander.Doebert}@elektrobit.com

More information

Design guidelines for embedded real time face detection application

Design guidelines for embedded real time face detection application Design guidelines for embedded real time face detection application White paper for Embedded Vision Alliance By Eldad Melamed Much like the human visual system, embedded computer vision systems perform

More information

Cameras and Radiometry. Last lecture in a nutshell. Conversion Euclidean -> Homogenous -> Euclidean. Affine Camera Model. Simplified Camera Models

Cameras and Radiometry. Last lecture in a nutshell. Conversion Euclidean -> Homogenous -> Euclidean. Affine Camera Model. Simplified Camera Models Cameras and Radiometry Last lecture in a nutshell CSE 252A Lecture 5 Conversion Euclidean -> Homogenous -> Euclidean In 2-D Euclidean -> Homogenous: (x, y) -> k (x,y,1) Homogenous -> Euclidean: (x, y,

More information

Optimization of optical systems for LED spot lights concerning the color uniformity

Optimization of optical systems for LED spot lights concerning the color uniformity Optimization of optical systems for LED spot lights concerning the color uniformity Anne Teupner* a, Krister Bergenek b, Ralph Wirth b, Juan C. Miñano a, Pablo Benítez a a Technical University of Madrid,

More information

Evaluation of a laser-based reference system for ADAS

Evaluation of a laser-based reference system for ADAS 23 rd ITS World Congress, Melbourne, Australia, 10 14 October 2016 Paper number ITS- EU-TP0045 Evaluation of a laser-based reference system for ADAS N. Steinhardt 1*, S. Kaufmann 2, S. Rebhan 1, U. Lages

More information

Be sure to always check the camera is properly functioning, is properly positioned and securely mounted.

Be sure to always check the camera is properly functioning, is properly positioned and securely mounted. Please read all of the installation instructions carefully before installing the product. Improper installation will void manufacturer s warranty. The installation instructions do not apply to all types

More information

An introduction to 3D image reconstruction and understanding concepts and ideas

An introduction to 3D image reconstruction and understanding concepts and ideas Introduction to 3D image reconstruction An introduction to 3D image reconstruction and understanding concepts and ideas Samuele Carli Martin Hellmich 5 febbraio 2013 1 icsc2013 Carli S. Hellmich M. (CERN)

More information

Axis Guide to Image Usability

Axis Guide to Image Usability Whitepaper Axis Guide to Image Usability Comparison of IP- and analog-based surveillance systems Table of contents 1. Axis Guide to Image Usability 3 2. Image usability challenges and solutions 4 2.1 Both

More information

Real-Time Detection of Road Markings for Driving Assistance Applications

Real-Time Detection of Road Markings for Driving Assistance Applications Real-Time Detection of Road Markings for Driving Assistance Applications Ioana Maria Chira, Ancuta Chibulcutean Students, Faculty of Automation and Computer Science Technical University of Cluj-Napoca

More information

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System

Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Research on the Measurement Method of the Detection Range of Vehicle Reversing Assisting System Bowei Zou and Xiaochuan Cui Abstract This paper introduces the measurement method on detection range of reversing

More information

Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications

Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications Non-axially-symmetric Lens with extended depth of focus for Machine Vision applications Category: Sensors & Measuring Techniques Reference: TDI0040 Broker Company Name: D Appolonia Broker Name: Tanya Scalia

More information

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera

[10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera [10] Industrial DataMatrix barcodes recognition with a random tilt and rotating the camera Image processing, pattern recognition 865 Kruchinin A.Yu. Orenburg State University IntBuSoft Ltd Abstract The

More information

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation

Introduction to Computer Vision. Introduction CMPSCI 591A/691A CMPSCI 570/670. Image Formation Introduction CMPSCI 591A/691A CMPSCI 570/670 Image Formation Lecture Outline Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic

More information

2 OVERVIEW OF RELATED WORK

2 OVERVIEW OF RELATED WORK Utsushi SAKAI Jun OGATA This paper presents a pedestrian detection system based on the fusion of sensors for LIDAR and convolutional neural network based image classification. By using LIDAR our method

More information

Why the Self-Driving Revolution Hinges on one Enabling Technology: LiDAR

Why the Self-Driving Revolution Hinges on one Enabling Technology: LiDAR Why the Self-Driving Revolution Hinges on one Enabling Technology: LiDAR Markus Prison Director Business Development Europe Quanergy ID: 23328 Who We Are The leader in LiDAR (laser-based 3D spatial sensor)

More information

Technical Bulletin Global Vehicle Target Specification Version 1.0 May 2018 TB 025

Technical Bulletin Global Vehicle Target Specification Version 1.0 May 2018 TB 025 Technical Bulletin Global Vehicle Target Specification Version 1.0 May 2018 TB 025 Title Global Vehicle Target Specification Version 1.0 Document Number TB025 Author Euro NCAP Secretariat Date May 2018

More information

SHRP 2 Safety Research Symposium July 27, Site-Based Video System Design and Development: Research Plans and Issues

SHRP 2 Safety Research Symposium July 27, Site-Based Video System Design and Development: Research Plans and Issues SHRP 2 Safety Research Symposium July 27, 2007 Site-Based Video System Design and Development: Research Plans and Issues S09 Objectives Support SHRP2 program research questions: Establish crash surrogates

More information

Thermal and Optical Cameras. By Philip Smerkovitz TeleEye South Africa

Thermal and Optical Cameras. By Philip Smerkovitz TeleEye South Africa Thermal and Optical Cameras By Philip Smerkovitz TeleEye South Africa phil@teleeye.co.za OPTICAL CAMERAS OVERVIEW Traditional CCTV Camera s (IP and Analog, many form factors). Colour and Black and White

More information

A Qualitative Analysis of 3D Display Technology

A Qualitative Analysis of 3D Display Technology A Qualitative Analysis of 3D Display Technology Nicholas Blackhawk, Shane Nelson, and Mary Scaramuzza Computer Science St. Olaf College 1500 St. Olaf Ave Northfield, MN 55057 scaramum@stolaf.edu Abstract

More information

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY

LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY LANE DEPARTURE WARNING SYSTEM FOR VEHICLE SAFETY 1 K. Sravanthi, 2 Mrs. Ch. Padmashree 1 P.G. Scholar, 2 Assistant Professor AL Ameer College of Engineering ABSTRACT In Malaysia, the rate of fatality due

More information

Fundamental Technologies Driving the Evolution of Autonomous Driving

Fundamental Technologies Driving the Evolution of Autonomous Driving 426 Hitachi Review Vol. 65 (2016), No. 9 Featured Articles Fundamental Technologies Driving the Evolution of Autonomous Driving Takeshi Shima Takeshi Nagasaki Akira Kuriyama Kentaro Yoshimura, Ph.D. Tsuneo

More information

Solid-State Hybrid LiDAR for Autonomous Driving Product Description

Solid-State Hybrid LiDAR for Autonomous Driving Product Description Solid-State Hybrid LiDAR for Autonomous Driving Product Description What is LiDAR Sensor Who is Using LiDARs How does LiDAR Work Hesai LiDAR Demo Features Terminologies Specifications What is LiDAR A LiDAR

More information

Fiber Composite Material Analysis in Aerospace Using CT Data

Fiber Composite Material Analysis in Aerospace Using CT Data 4th International Symposium on NDT in Aerospace 2012 - We.2.A.3 Fiber Composite Material Analysis in Aerospace Using CT Data Dr. Tobias DIERIG, Benjamin BECKER, Christof REINHART, Thomas GÜNTHER Volume

More information

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg

Human Detection. A state-of-the-art survey. Mohammad Dorgham. University of Hamburg Human Detection A state-of-the-art survey Mohammad Dorgham University of Hamburg Presentation outline Motivation Applications Overview of approaches (categorized) Approaches details References Motivation

More information

FIXED FOCAL LENGTH LENSES

FIXED FOCAL LENGTH LENSES Edmund Optics BROCHURE FIXED FOCAL LENGTH LENSES INNOVATION STARTS HERE... Global Design & Support Rapid Prototyping Volume Manufacturing & Pricing Contact us for a Stock or Custom Quote Today! USA: +1-856-547-3488

More information

Computer and Machine Vision

Computer and Machine Vision Computer and Machine Vision Lecture Week 12 Part-2 Additional 3D Scene Considerations March 29, 2014 Sam Siewert Outline of Week 12 Computer Vision APIs and Languages Alternatives to C++ and OpenCV API

More information

Preceding vehicle detection and distance estimation. lane change, warning system.

Preceding vehicle detection and distance estimation. lane change, warning system. Preceding vehicle detection and distance estimation for lane change warning system U. Iqbal, M.S. Sarfraz Computer Vision Research Group (COMVis) Department of Electrical Engineering, COMSATS Institute

More information

Flat-Field Mega-Pixel Lens Series

Flat-Field Mega-Pixel Lens Series Flat-Field Mega-Pixel Lens Series Flat-Field Mega-Pixel Lens Flat-Field Mega-Pixel Lens 205.ver.0 E Specifications and Lineup Full Full Full Full 5MP MP MP MP Image Model Imager Size Mount Focal Length

More information

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module

Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module Advanced Driver Assistance Systems: A Cost-Effective Implementation of the Forward Collision Warning Module www.lnttechservices.com Table of Contents Abstract 03 Introduction 03 Solution Overview 03 Output

More information

Linescan System Design for Robust Web Inspection

Linescan System Design for Robust Web Inspection Linescan System Design for Robust Web Inspection Vision Systems Design Webinar, December 2011 Engineered Excellence 1 Introduction to PVI Systems Automated Test & Measurement Equipment PC and Real-Time

More information

3D Time-of-Flight Image Sensor Solutions for Mobile Devices

3D Time-of-Flight Image Sensor Solutions for Mobile Devices 3D Time-of-Flight Image Sensor Solutions for Mobile Devices SEMICON Europa 2015 Imaging Conference Bernd Buxbaum 2015 pmdtechnologies gmbh c o n f i d e n t i a l Content Introduction Motivation for 3D

More information

HiFi Visual Target Methods for measuring optical and geometrical characteristics of soft car targets for ADAS and AD

HiFi Visual Target Methods for measuring optical and geometrical characteristics of soft car targets for ADAS and AD HiFi Visual Target Methods for measuring optical and geometrical characteristics of soft car targets for ADAS and AD S. Nord, M. Lindgren, J. Spetz, RISE Research Institutes of Sweden Project Information

More information

Challenges in Manufacturing of optical and EUV Photomasks Martin Sczyrba

Challenges in Manufacturing of optical and EUV Photomasks Martin Sczyrba Challenges in Manufacturing of optical and EUV Photomasks Martin Sczyrba Advanced Mask Technology Center Dresden, Germany Senior Member of Technical Staff Advanced Mask Technology Center Dresden Key Facts

More information

Carmen Alonso Montes 23rd-27th November 2015

Carmen Alonso Montes 23rd-27th November 2015 Practical Computer Vision: Theory & Applications 23rd-27th November 2015 Wrap up Today, we are here 2 Learned concepts Hough Transform Distance mapping Watershed Active contours 3 Contents Wrap up Object

More information

Chapters 1 7: Overview

Chapters 1 7: Overview Chapters 1 7: Overview Photogrammetric mapping: introduction, applications, and tools GNSS/INS-assisted photogrammetric and LiDAR mapping LiDAR mapping: principles, applications, mathematical model, and

More information

Sensor Modalities. Sensor modality: Different modalities:

Sensor Modalities. Sensor modality: Different modalities: Sensor Modalities Sensor modality: Sensors which measure same form of energy and process it in similar ways Modality refers to the raw input used by the sensors Different modalities: Sound Pressure Temperature

More information

Experiments with Edge Detection using One-dimensional Surface Fitting

Experiments with Edge Detection using One-dimensional Surface Fitting Experiments with Edge Detection using One-dimensional Surface Fitting Gabor Terei, Jorge Luis Nunes e Silva Brito The Ohio State University, Department of Geodetic Science and Surveying 1958 Neil Avenue,

More information

Towards Autonomous Vehicle. What is an autonomous vehicle? Vehicle driving on its own with zero mistakes How? Using sensors

Towards Autonomous Vehicle. What is an autonomous vehicle? Vehicle driving on its own with zero mistakes How? Using sensors 7 May 2017 Disclaimer Towards Autonomous Vehicle What is an autonomous vehicle? Vehicle driving on its own with zero mistakes How? Using sensors Why Vision Sensors? Humans use both eyes as main sense

More information

Towards Fully-automated Driving. tue-mps.org. Challenges and Potential Solutions. Dr. Gijs Dubbelman Mobile Perception Systems EE-SPS/VCA

Towards Fully-automated Driving. tue-mps.org. Challenges and Potential Solutions. Dr. Gijs Dubbelman Mobile Perception Systems EE-SPS/VCA Towards Fully-automated Driving Challenges and Potential Solutions Dr. Gijs Dubbelman Mobile Perception Systems EE-SPS/VCA Mobile Perception Systems 6 PhDs, 1 postdoc, 1 project manager, 2 software engineers

More information

CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE IN THREE-DIMENSIONAL SPACE

CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE IN THREE-DIMENSIONAL SPACE National Technical University of Athens School of Civil Engineering Department of Transportation Planning and Engineering Doctoral Dissertation CONTRIBUTION TO THE INVESTIGATION OF STOPPING SIGHT DISTANCE

More information

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy

Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Vision-based ACC with a Single Camera: Bounds on Range and Range Rate Accuracy Gideon P. Stein Ofer Mano Amnon Shashua MobileEye Vision Technologies Ltd. MobileEye Vision Technologies Ltd. Hebrew University

More information

Conceptual Physics 11 th Edition

Conceptual Physics 11 th Edition Conceptual Physics 11 th Edition Chapter 28: REFLECTION & REFRACTION This lecture will help you understand: Reflection Principle of Least Time Law of Reflection Refraction Cause of Refraction Dispersion

More information

MATLAB Expo 2014 Verkehrszeichenerkennung in Fahrerassistenzsystemen Continental

MATLAB Expo 2014 Verkehrszeichenerkennung in Fahrerassistenzsystemen Continental Senses for Safety. Driver assistance systems help save lives. MATLAB Expo 2014 Verkehrszeichenerkennung in Fahrerassistenzsystemen MATLAB @ Continental http://www.continental-automotive.com/ Chassis &

More information

Designing a software framework for automated driving. Dr.-Ing. Sebastian Ohl, 2017 October 12 th

Designing a software framework for automated driving. Dr.-Ing. Sebastian Ohl, 2017 October 12 th Designing a software framework for automated driving Dr.-Ing. Sebastian Ohl, 2017 October 12 th Challenges Functional software architecture with open interfaces and a set of well-defined software components

More information

Digital Images. Kyungim Baek. Department of Information and Computer Sciences. ICS 101 (November 1, 2016) Digital Images 1

Digital Images. Kyungim Baek. Department of Information and Computer Sciences. ICS 101 (November 1, 2016) Digital Images 1 Digital Images Kyungim Baek Department of Information and Computer Sciences ICS 101 (November 1, 2016) Digital Images 1 iclicker Question I know a lot about how digital images are represented, stored,

More information

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth

Depth. Common Classification Tasks. Example: AlexNet. Another Example: Inception. Another Example: Inception. Depth Common Classification Tasks Recognition of individual objects/faces Analyze object-specific features (e.g., key points) Train with images from different viewing angles Recognition of object classes Analyze

More information

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm)

Jo-Car2 Autonomous Mode. Path Planning (Cost Matrix Algorithm) Chapter 8.2 Jo-Car2 Autonomous Mode Path Planning (Cost Matrix Algorithm) Introduction: In order to achieve its mission and reach the GPS goal safely; without crashing into obstacles or leaving the lane,

More information

Time-of-flight basics

Time-of-flight basics Contents 1. Introduction... 2 2. Glossary of Terms... 3 3. Recovering phase from cross-correlation... 4 4. Time-of-flight operating principle: the lock-in amplifier... 6 5. The time-of-flight sensor pixel...

More information

Variable Zoom Lenses* USB High-Resolution Camera

Variable Zoom Lenses* USB High-Resolution Camera speckfinder CS 3.0 High Definition Magnification System The speckfinder CS 3.0 High-Definition Compact Magnification and Imaging System completely integrates the technologies of high quality variable zoom

More information

Chapter 6 : Results and Discussion

Chapter 6 : Results and Discussion Refinement and Verification of the Virginia Tech Doppler Global Velocimeter (DGV) 86 Chapter 6 : Results and Discussion 6.1 Background The tests performed as part of this research were the second attempt

More information

Understanding Variability

Understanding Variability Understanding Variability Why so different? Light and Optics Pinhole camera model Perspective projection Thin lens model Fundamental equation Distortion: spherical & chromatic aberration, radial distortion

More information

TECHSPEC COMPACT FIXED FOCAL LENGTH LENS

TECHSPEC COMPACT FIXED FOCAL LENGTH LENS Designed for use in machine vision applications, our TECHSPEC Compact Fixed Focal Length Lenses are ideal for use in factory automation, inspection or qualification. These machine vision lenses have been

More information

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision

Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision Fundamentals of Stereo Vision Michael Bleyer LVA Stereo Vision What Happened Last Time? Human 3D perception (3D cinema) Computational stereo Intuitive explanation of what is meant by disparity Stereo matching

More information

OPTICAL TOOL FOR IMPACT DAMAGE CHARACTERIZATION ON AIRCRAFT FUSELAGE

OPTICAL TOOL FOR IMPACT DAMAGE CHARACTERIZATION ON AIRCRAFT FUSELAGE OPTICAL TOOL FOR IMPACT DAMAGE CHARACTERIZATION ON AIRCRAFT FUSELAGE N.Fournier 1 F. Santos 1 - C.Brousset 2 J.L.Arnaud 2 J.A.Quiroga 3 1 NDT EXPERT, 2 AIRBUS France, 3 Universidad Cmplutense de Madrid

More information

Pedestrian Detection with Radar and Computer Vision

Pedestrian Detection with Radar and Computer Vision Pedestrian Detection with Radar and Computer Vision camera radar sensor Stefan Milch, Marc Behrens, Darmstadt, September 25 25 / 26, 2001 Pedestrian accidents and protection systems Impact zone: 10% opposite

More information

CS6670: Computer Vision

CS6670: Computer Vision CS6670: Computer Vision Noah Snavely Lecture 5: Projection Reading: Szeliski 2.1 Projection Reading: Szeliski 2.1 Projection Müller Lyer Illusion http://www.michaelbach.de/ot/sze_muelue/index.html Modeling

More information

DUAL MODE SCANNER for BROKEN RAIL DETECTION

DUAL MODE SCANNER for BROKEN RAIL DETECTION DUAL MODE SCANNER for BROKEN RAIL DETECTION ROBERT M. KNOX President; Epsilon Lambda Electronics Dr. BENEDITO FONSECA Northern Illinois University Presenting a concept for improved rail safety; not a tested

More information

Rapid Natural Scene Text Segmentation

Rapid Natural Scene Text Segmentation Rapid Natural Scene Text Segmentation Ben Newhouse, Stanford University December 10, 2009 1 Abstract A new algorithm was developed to segment text from an image by classifying images according to the gradient

More information

Digital Imaging Study Questions Chapter 8 /100 Total Points Homework Grade

Digital Imaging Study Questions Chapter 8 /100 Total Points Homework Grade Name: Class: Date: Digital Imaging Study Questions Chapter 8 _/100 Total Points Homework Grade True/False Indicate whether the sentence or statement is true or false. 1. You can change the lens on most

More information

LEICA Elmarit-S 45mm f/2.8 ASPH./CS

LEICA Elmarit-S 45mm f/2.8 ASPH./CS LEICA Elmarit-S 45 f/2.8 ASPH./CS Technical data. Display scale 1:2 TECHNICAL DATA Order number 1177 (CS: 1178) Angle of view (diagonal, horizontal, vertical) ~ 62, 53, 37, equivalent to approx. 36 in

More information

P recise Eye. High resolution, diffraction-limited f/4.5 optical quality for high precision measurement and inspection.

P recise Eye. High resolution, diffraction-limited f/4.5 optical quality for high precision measurement and inspection. High resolution, diffraction-limited f/4.5 optical quality for high precision measurement and inspection. Long working distance makes lighting and handling easier. Compact size. Coaxial lighting available

More information

Victor S. Grinberg Gregg W. Podnar M. W. Siegel

Victor S. Grinberg Gregg W. Podnar M. W. Siegel Geometry of binocular imaging II : The augmented eye Victor S. Grinberg Gregg W. Podnar M. W. Siegel Robotics Institute, School of Computer Science, Carnegie Mellon University 5000 Forbes Ave., Pittsburgh,

More information

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR

Laser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and

More information

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania

Image Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives

More information

Tooling Overview ADAS - Status & Ongoing Developments

Tooling Overview ADAS - Status & Ongoing Developments Tooling Overview ADAS - Status & Ongoing Developments Vector India Conference 2017 V0.1 2017-07-04 ADAS solution - Efficient development of multisensor applications Contents of Vector ADAS solution algorithm

More information

Ethernet TSN as Enabling Technology for ADAS and Automated Driving Systems

Ethernet TSN as Enabling Technology for ADAS and Automated Driving Systems IEEE-2016 January 17-22, Atlanta, Georgia Ethernet TSN as Enabling Technology for ADAS and Automated Driving Systems Michael Potts General Motors Company co-authored by Soheil Samii General Motors Company

More information

INFINITY-CORRECTED TUBE LENSES

INFINITY-CORRECTED TUBE LENSES INFINITY-CORRECTED TUBE LENSES For use with Infinity-Corrected Objectives Available in Focal Lengths Used by Thorlabs, Nikon, Leica, Olympus, and Zeiss Designs for Widefield and Laser Scanning Applications

More information

1 Although other ways of exporting like using the 2 s 1

1 Although other ways of exporting like using the 2 s 1 A Novel Proposal on how to Parameterize Models in Dymola Utilizing External Files under Consideration of a Subsequent Model Export using the Functional Mock-Up Interface Thomas Schmitt 1 Markus Andres

More information