Simulation: A Must for Autonomous Driving

Transcription:

Simulation: A Must for Autonomous Driving
NVIDIA GTC 2018 (Silicon Valley) / Talk ID: S8859
Rohit Ramanna, Business Development Manager, Smart Virtual Prototyping, ESI North America
Rodolphe Tchalekian, EMEA Pre-Sales Engineer for ADAS, ESI Group
March 28, 2018
Copyright ESI Group, 2018. All rights reserved.

Deep Learning for Autonomous Driving Requirements (1/4): Quantity
- A huge amount of sensor data is necessary to train neural networks.
- Resources may be unavailable (e.g. test vehicle, specific sensor set, test drivers).
- Current processes are inefficient: data has to be collected, stored, classified and transferred for labeling; multiple teams and stakeholders are involved; the cycle time to generate a new data set is long and expensive.
- Only a small percentage of the generated data is relevant for neural networks (roughly 80 h of driving to produce 3 min of relevant data).
Power of simulation:
- Define reusable virtual scenarios and new multi-sensor configurations.
- Generate new sensor data sets continuously.
- Automate and optimize processes, and focus on relevant data.

Deep Learning for Autonomous Driving Requirements (2/4): Diversity
- Diversity of the data is key to proper training.
- Driving scenarios are difficult to replicate in the real world (e.g. weather conditions, variations in the trajectories and positioning of target vehicles).
- Specific, long and costly test expeditions have to be prepared (e.g. winter/summer testing, day/night driving, multiple test vehicles, highway/country road/city driving).
Power of simulation:
- Create thousands of virtual scenarios through scripting (see the sketch below).
- Integrate new environment and road user models on demand.
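To make "thousands of virtual scenarios through scripting" concrete, below is a minimal Python sketch that sweeps weather, time of day, road type, and randomized target-vehicle placement into scenario parameter files. The JSON layout, parameter names, and file naming are illustrative assumptions, not the Pro-SiVIC scenario format.

```python
# Minimal sketch of scripted scenario generation: enumerate weather, lighting,
# road type, and randomized target-vehicle placement into parameter files that
# a simulator batch run could consume. The JSON layout is an assumption, not
# the Pro-SiVIC scenario format.
import itertools, json, random
from pathlib import Path

WEATHER = ["clear", "rain", "fog", "snow"]
TIME_OF_DAY = ["dawn", "noon", "dusk", "night"]
ROAD_TYPE = ["highway", "country_road", "urban"]

def generate_scenarios(out_dir="scenarios", variants_per_combo=50, seed=0):
    random.seed(seed)
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for weather, tod, road in itertools.product(WEATHER, TIME_OF_DAY, ROAD_TYPE):
        for i in range(variants_per_combo):
            scenario = {
                "road_type": road,
                "weather": weather,
                "time_of_day": tod,
                # randomized placement of one target vehicle relative to ego
                "target_vehicle": {
                    "lane_offset": random.choice([-1, 0, 1]),
                    "gap_m": round(random.uniform(10.0, 80.0), 1),
                    "speed_kph": round(random.uniform(30.0, 120.0), 1),
                },
            }
            (out / f"{road}_{weather}_{tod}_{i:03d}.json").write_text(
                json.dumps(scenario, indent=2))
            count += 1
    return count  # 4 weather x 4 lighting x 3 roads x variants_per_combo

if __name__ == "__main__":
    print(generate_scenarios(), "scenario files written")
```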

Deep Learning for Autonomous Driving Requirements (3/4): Labeling
- To train the neural network, the data needs to be annotated (e.g. object classification, bounding boxes, image segmentation).
- Doing this manually is expensive and time consuming.
- Ground truth information is missing from real recordings.
Power of simulation:
- Generate labelled sensor data automatically from the simulator's ground truth information (see the sketch below).
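As an illustration of labeling from ground truth, here is a minimal NumPy sketch that derives one bounding box per class from a mask rendered with one color per category (as in the training example later in the talk). The color palette is an assumption, and instances of the same class are merged into a single box for brevity.

```python
# Minimal sketch: derive per-class bounding boxes from a ground-truth
# segmentation mask in which each category is rendered in a single color.
# The color palette below is an illustrative assumption, not Pro-SiVIC's.
import numpy as np

PALETTE = {                       # RGB color -> class name (assumed)
    (0, 0, 255): "car",
    (255, 0, 0): "motorcycle",
    (128, 128, 128): "road",
    (255, 255, 0): "lane_marking",
}

def boxes_from_mask(mask: np.ndarray):
    """mask: HxWx3 uint8 ground-truth image. Returns {class: (x0, y0, x1, y1)}."""
    labels = {}
    for color, name in PALETTE.items():
        ys, xs = np.where(np.all(mask == np.array(color, dtype=np.uint8), axis=-1))
        if xs.size:  # class present in this frame
            labels[name] = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return labels

# Tiny synthetic example: a 100x200 frame with a blue "car" rectangle.
frame = np.zeros((100, 200, 3), dtype=np.uint8)
frame[40:80, 60:120] = (0, 0, 255)
print(boxes_from_mask(frame))     # {'car': (60, 40, 119, 79)}
```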

Deep Learning for Autonomous Driving Requirements (4/4): Flexibility
- Different data sets are needed for training and validation, and neural networks should continuously integrate new driving situations.
- Existing databases may not provide enough quantity and diversity for both, which limits the flexibility to create new data sets.
- Different sources are necessary to validate the robustness of neural networks; introducing unexpected objects and environments is key to training them further.
Power of simulation:
- Extend the simulation database through scripting and automation.
- Combine real and simulated data for more robust training and validation (see the sketch below).
- Extend the simulation database with new objects and 3D environments.
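A minimal sketch of combining real and simulated images into one training set, using standard PyTorch utilities; the directory layout and transforms are assumptions, since the slides only state that the two sources are combined.

```python
# Minimal sketch of mixing real and simulated images into one training set
# with standard PyTorch utilities. Paths and transforms are illustrative
# assumptions; the slides only state that real and simulated data are combined.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 512)),
    transforms.ToTensor(),
])

real_ds = datasets.ImageFolder("data/real_drives", transform=transform)
sim_ds = datasets.ImageFolder("data/pro_sivic_renders", transform=transform)

# Plain concatenation of the two sources; a WeightedRandomSampler could
# instead bias the mix (e.g. oversample rare simulated corner cases).
train_ds = ConcatDataset([real_ds, sim_ds])
train_loader = DataLoader(train_ds, batch_size=16, shuffle=True, num_workers=4)

for images, labels in train_loader:
    pass  # feed batches to the training loop
```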

Safety Ratings
- Virtual Performance Solution: Crash & Safety, Occupant Safety
- Virtual Systems & Controls
- Euro NCAP / ANCAP rating, 2018-2020
Reference: http://www.ancap.com.au/safety-ratings-explained

Generating Synthetic Data for Neural Networks
ESI Pro-SiVIC: Overview

Introduction to Pro-SiVIC
A platform solution for modeling and simulating multi-frequency environments and multi-technology sensors, covering sensors, environment, drivers, and applications such as ACC, AEB, LDW, BSD, FCW/RCW, IPA, CTA, and LCA.

ESI Pro-SiVIC: Overview - Realistic Environment Model Database
- Road environment simulation: highway, country road, urban area; customized markings; different road signs
- Road user simulation: cars, trucks, pedestrians, bicyclists
- Traffic dynamics generation
- Parameter management

ESI Pro-SiVIC: Overview - Physical Sensor Models / Customization
- Cameras: physical behavior
- Radar: radar and target characteristics
- Lidar, ultrasonic, GPS
- Ground truth data
- Scenario scripting and test automation

Training and Validation of Neural Networks Using Camera Sensor Data
A new process introducing simulation

Training and Validation of Neural Networks Using Camera Sensor Data - Current Process
- Camera videos are recorded from real test campaigns (HD video above Full HD resolution; terabytes of stored data).
- Videos are sequenced and annotated to create a usable database with scenario characteristics; manual intervention is required for this processing.
- Raw and annotated images are used to train deep learning algorithms (e.g. lane detection).
- The rest of the raw images is fed as input to the previously trained algorithms.
- The output of the trained algorithms is finally compared to the annotated images for validation (see the sketch below).
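The slides do not name the metric used in the comparison step; a common choice is a pixel-wise intersection-over-union between the predicted and annotated masks, sketched here as an assumption.

```python
# Minimal sketch of the validation comparison step: the slides only say the
# trained algorithm's output is compared to annotated images; pixel-wise
# intersection-over-union (IoU) on the lane class, as below, is one common
# way to do that (an assumption, not the metric named in the talk).
import numpy as np

def lane_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Both masks are HxW boolean arrays marking lane pixels."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection) / float(union) if union else 1.0

# Toy example: two overlapping lane bands.
pred = np.zeros((100, 100), dtype=bool); pred[:, 40:60] = True
gt   = np.zeros((100, 100), dtype=bool); gt[:, 45:65] = True
print(f"lane IoU = {lane_iou(pred, gt):.2f}")   # 0.60
```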

Training and Validation of Neural Networks Using Camera Sensor Data - Step 1: Simulation Replaces Real Driving
- High-quality 3D scene models from the ESI Pro-SiVIC libraries replace real driving environments: virtual scenarios with custom characteristics are created and high-quality simulated videos are captured in ESI Pro-SiVIC, instead of recording camera videos in real test campaigns.
- Videos are still sequenced and annotated to create a usable database with scenario characteristics.
- Raw and annotated images are used to train deep learning algorithms (e.g. lane detection).
- The rest of the raw images is fed as input to the previously trained algorithms.
- The output of the trained algorithms is finally compared to the annotated images for validation.

Training and Validation of Neural Networks Using Camera Sensor Data - Step 2: Simulation Replaces the Physical Camera
- High-quality 3D scene models from the ESI Pro-SiVIC libraries replace real driving environments: virtual scenarios with custom characteristics are created and high-quality simulated videos are captured in ESI Pro-SiVIC.
- A simulated camera model, rather than a physical camera, produces the raw and segmented images in ESI Pro-SiVIC, replacing the manual sequencing and annotation step.
- Deep learning (DL) algorithms are trained on a dedicated platform (a Linux machine) using one subset of the simulated images, which can be complemented by real data.
- The trained DL algorithms run on an NVIDIA PX2 and receive the other subset of the simulated data through a co-simulation interface for validation, again possibly complemented by real data (see the sketch below).
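A minimal sketch of the "one subset for training, the other for validation" split of the simulated images, assuming the simulator writes matched RGB/mask pairs to a directory; the file naming and the 80/20 ratio are assumptions.

```python
# Minimal sketch of splitting simulated image/mask pairs into a training
# subset (used on the Linux training machine) and a validation subset (sent
# to the target platform). Directory layout, naming, and the 80/20 ratio are
# assumptions for illustration.
import random
from pathlib import Path

def split_simulated_frames(render_dir="out", val_fraction=0.2, seed=0):
    frames = sorted(Path(render_dir).glob("*_rgb.png"))   # assumed naming
    random.seed(seed)
    random.shuffle(frames)
    n_val = int(len(frames) * val_fraction)
    val, train = frames[:n_val], frames[n_val:]
    # Each RGB frame is paired with its ground-truth segmentation mask.
    pair = lambda p: (p, p.with_name(p.name.replace("_rgb", "_mask")))
    return [pair(p) for p in train], [pair(p) for p in val]

train_pairs, val_pairs = split_simulated_frames()
print(len(train_pairs), "training pairs;", len(val_pairs), "validation pairs")
```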

Training & Validating a Neural Network Using Synthetic Camera Images
Concept and application example

Part 1: Training a Neural Network Using Synthetic Camera Images - Process
- Synthetic images are generated and automatically labeled, with one color per category (cars, motorcycles, roads, lane markings).
- Around 5,000 images were used to validate the concept.
- Highway and city environments; cars and motorcycles with color variations.
- Images are captured at different positions in the scene using multiple sensors.
- Images are fed in random order to avoid overfitting (see the sketch below).
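A minimal sketch of how the one-color-per-category masks could be turned into class-index targets and fed to training in random order; the palette, file naming, and class indices are illustrative assumptions.

```python
# Minimal sketch: turn one-color-per-category ground-truth masks into
# class-index targets for segmentation training, and shuffle the image order
# each epoch to avoid overfitting to a fixed sequence. The palette, file
# naming, and class indices are illustrative assumptions.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image
from pathlib import Path

PALETTE = {(0, 0, 255): 1, (255, 0, 0): 2, (128, 128, 128): 3, (255, 255, 0): 4}
# background = class 0; 1 car, 2 motorcycle, 3 road, 4 lane marking (assumed)

class SyntheticSegDataset(Dataset):
    def __init__(self, root="out"):
        self.rgb = sorted(Path(root).glob("*_rgb.png"))    # assumed naming

    def __len__(self):
        return len(self.rgb)

    def __getitem__(self, i):
        rgb_path = self.rgb[i]
        mask_path = rgb_path.with_name(rgb_path.name.replace("_rgb", "_mask"))
        image = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.asarray(Image.open(mask_path).convert("RGB"))
        target = np.zeros(mask.shape[:2], dtype=np.int64)   # background by default
        for color, cls in PALETTE.items():
            target[np.all(mask == color, axis=-1)] = cls
        return torch.from_numpy(image).permute(2, 0, 1), torch.from_numpy(target)

# shuffle=True re-randomizes the image order on every epoch.
loader = DataLoader(SyntheticSegDataset(), batch_size=8, shuffle=True)
```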

Part 1: Training a Neural Network Using Synthetic Camera Images - Results
- Tested on both synthetic and real-world data; road and car were detected in both, validating the concept.
- Object detection could still be refined.
- Next steps: more image quantity, more image diversity, various image sources, new objects, and balancing the trade-off between resolution and speed.

Part 2: Validating a Neural Network Using Synthetic Camera Images - Process: Using Synthetic Data to Validate the Algorithm
- Pro-SiVIC generates the camera sensor video stream and publishes it through a DDS interface over an Ethernet link.
- On the NVIDIA PX2 board, a DDS interface receives the camera sensor video stream and feeds it to the lane detection algorithm.
- The output is the camera sensor video stream with the lane detection superposed (see the sketch below).
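Below is a minimal sketch of the receiving side of this loop. The talk uses a DDS interface over Ethernet between Pro-SiVIC and the NVIDIA PX2; as a stand-in for that DDS link, the sketch reads length-prefixed JPEG frames from a plain TCP socket, runs a placeholder lane detector, and overlays the result on the frame. The port, framing, and detector are assumptions.

```python
# Minimal sketch of the receiving side of the validation loop. A plain TCP
# socket with length-prefixed JPEG frames stands in for the DDS interface over
# Ethernet used in the talk; the port, framing, and lane detector below are
# assumptions for illustration only.
import socket
import struct
import numpy as np
import cv2

def recv_exact(conn, n):
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed")
        buf += chunk
    return buf

def detect_lanes(frame):
    """Placeholder for the trained lane-detection network running on the PX2.
    Returns a list of lane polylines as Nx2 pixel coordinates."""
    h, w = frame.shape[:2]
    return [np.array([[w // 3, h - 1], [w // 2, h // 2]], dtype=np.int32)]

def run(host="0.0.0.0", port=5600):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    while True:
        (length,) = struct.unpack("!I", recv_exact(conn, 4))   # 4-byte frame size
        jpeg = recv_exact(conn, length)
        frame = cv2.imdecode(np.frombuffer(jpeg, np.uint8), cv2.IMREAD_COLOR)
        for lane in detect_lanes(frame):
            cv2.polylines(frame, [lane], isClosed=False, color=(0, 255, 0), thickness=3)
        cv2.imshow("camera stream + lane detection", frame)
        if cv2.waitKey(1) == 27:     # Esc to quit
            break
```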

Part 2: Validating a Neural Network Using Synthetic Camera Images - Results

Conclusion
Simulation opens new perspectives for enhancing AI for autonomous driving:
- Optimize current algorithm training and validation processes: faster and cheaper, earlier validation, better accessibility.
- Introduce more flexibility in generating training and validation data: higher fidelity of neural networks.
- Generate multi-sensor data automatically with ground truth information: higher data quality, reduced lead time.
- Use automation to extend databases: higher test/scenario coverage, increased data sources, improved parallel processing.

Thank you!
Rohit Ramanna, Business Development Manager, Smart Virtual Prototyping, ESI North America
Email: rohit.ramanna@esi-group.com