BENCHMARK
January 2019 issue
THE INTERNATIONAL MAGAZINE FOR ENGINEERING DESIGNERS & ANALYSTS FROM NAFEMS

In this issue:
- Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators
- A Guide to the Internet of Things
- Simulation of Complex Brain Surgery with a Personalized Brain Model
- Learn How to See
- Prediction of Clothing Pressure Distribution on the Human Body for Wearable Health Monitoring
- What is Uncertainty Quantification (UQ)?
- Efficient Preparation of Quality Simulation Models - An Event Summary
- Excel for Engineers and other STEM Professionals
Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators

Zoltán Hortsin, AImotive

Simulating the images of the unique cameras used for self-driving was a driving force behind the development of a new engine for a simulator designed for autonomous vehicle development.

Over the last 18 months, the importance of simulation in the development of autonomous vehicle technologies has become widely accepted. Industry experts and analysts alike claim that enormous distances must be covered by self-driving systems to achieve safe operation in varied road conditions and environments. Meanwhile, certain weather conditions and traffic scenarios are extremely rare, resulting in limited test opportunities. The only way to cover the distances and reach the diversity required for the safe operation of highly automated vehicles is to test these systems and their modules in virtual environments. The demands of simulation for self-driving cars are extremely high, and not all simulators are created equal.

Simulation for Self-driving

Simulators for autonomous technologies must be comprehensive and robust tools, offering at the least:

1. A diversity of maps, environments, conditions and driving cultures.
2. Repeatability of tests and scenarios for the continuous development of safety-critical systems.
3. Ready access for self-driving developers and engineers to accelerate iteration times and development loops.

However, these requirements only scratch the surface. The demands of self-driving are unique; as a result, only a purpose-built virtualization environment can be a full solution. Several industry stakeholders have built proprietary solutions based on game engines. While game-engine-rendered simulators can provide the characteristics listed above, they do not solve all the challenges of self-driving simulation. Beyond these basic demands, a true self-driving simulator has to offer more:

1. The utmost level of physical realism in vehicle and road models, weather conditions, and sensor simulation.
2. Pixel-precise deterministic rendering, to ensure that minor differences in simulated environments do not affect the results of repeated tests.
3. The efficient use of hardware resources, including the ability to run on any system from laptops to servers, or to utilize the performance of multiple CPUs and GPUs.

The Limitations of Game Engines

These more specific demands cannot be met efficiently by game-engine-based simulators. Their focus is inherently different. A game engine is designed with a commercial end-user in mind: it is built to run on average hardware setups and is optimized for the best game performance. Furthermore, it is often 3D and graphic artists who craft the final visual effect to ensure a spectacular world for the consumer, rather than a physically correct environment.
AImotive encountered several problems while using a game-engine-based simulator. One of the most notable was that game engines are not prepared to simulate the images of unique sensors such as ultra-wide field-of-view fisheye lenses or narrow-view long-range cameras, fundamentally because such views are almost never used in games. As a result, the previous iteration of our simulator contained custom modifications to the engine. However, as these were not organic elements of the code, they were also a bottleneck. On the one hand, they affected the performance of the simulator. On the other hand, certain effects built into the game engine could not be used on intermediary images, only on the final render, which resulted in less realism. Of these more unique sensors, ultra-wide field-of-view fisheye cameras were the most exciting challenge. The following sections outline the approach we took to overcome these difficulties.

Synthesizing Camera Images

A camera's operation can be divided into two parts: the optics and the sensor. Rays of light move through the lens and arrive at a given pixel of the sensor. When simulating camera images, it is this process that has to be recreated. There are several mathematical models for this projection, the commonality between them being that they all produce a 2D pixel position from 3D data. However, when discussing images taken with ultra-wide-angle lenses, further difficulties arise. We use fisheye lenses to cover ultra-wide fields of view; these do not create rectilinear images, but exhibit barrel distortion. As a result, projecting onto a two-dimensional plane is a slightly more complex process, as the distortion must be accounted for. Figure 1 illustrates how this distortion is mapped to a 2D plane.

Simulating a Fisheye Image

The most obvious way to simulate such an image would be to trace the exact paths of the rays of light as they move through a simulated lens and onto the sensor.
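The article does not state which projection model AImotive uses; as an illustration, here is a minimal sketch of the widely used equidistant fisheye model (r = f·θ), which maps a 3D point to a 2D sensor pixel. The function name and the parameters (`f` for focal length, `cx`/`cy` for the principal point) are assumptions for the example:

```python
import math

def project_equidistant(point, f, cx, cy):
    """Project a 3D point in camera coordinates (+z forward) to a 2D
    sensor pixel using the equidistant fisheye model r = f * theta."""
    x, y, z = point
    theta = math.atan2(math.hypot(x, y), z)   # angle from the optical axis
    phi = math.atan2(y, x)                    # azimuth around the axis
    r = f * theta                             # radial distance on the sensor
    return cx + r * math.cos(phi), cy + r * math.sin(phi)
```

Unlike the rectilinear pinhole model (r = f·tan θ), the equidistant mapping stays finite as θ approaches 90°, which is why a fisheye lens can cover a field of view of 180° or more while a rectilinear projection cannot.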
However, this approach relies on ray tracing, a technology currently available in only a few select GPUs. Most GPUs employ rasterization for image synthesis (Figure 2), a technique that does not allow for a robust fisheye projection. While certain workarounds make the projection possible, the nature of these solutions means the projection will not be robust: rendering may be incorrect, or the performance of the engine may suffer. To find a solution, the problem has to be reexamined.

Reexamining the Problem

To achieve a robust projection, the task at hand has to be divided into two parts. In the first step, data for the rays of light reaching the lens is generated. In the second step, the corresponding data for the rays that reach the sensor must be found. Naturally, the high distortion of fisheye lenses causes difficulty in the projection.

Step one

This is the most demanding part of the process, relying most heavily on the rasterization performance of the GPU. Based on the calculated location of the sensor, images must be generated that cover the area of space from which light can enter the lens. This can be a single image or a total of six, depending on the characteristics of the simulated camera, as shown in Figure 3. The hardware demands of the process depend heavily on the quality of these images, and to achieve the highest degree of realism, high-quality images and physically based High Dynamic Range (HDR) rendering must be employed. As several cameras and other sensors are simulated concurrently, this can lead to huge demands on memory and compute capacity. Hence the ability to efficiently utilize multiple CPUs and GPUs when needed is vital. However, to ensure flexibility, the system should also be able to run on more everyday systems such as desktop PCs, so that developers and self-driving testers have ready access to the simulator and can use it as a development tool.
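The six images of step one form a cube map around the sensor position, and at lookup time each ray direction must be resolved to one face and a texel on it. A minimal sketch, assuming the common OpenGL-style cube-map face convention (real engines differ in face orientation and naming):

```python
def cubemap_lookup(d):
    """Resolve a 3D direction to (face, u, v), with u and v in [0, 1].
    Face orientations follow the common OpenGL cube-map convention
    (an assumption; actual engines differ)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # dominant axis picks the face
        face, sc, tc, ma = ('+x', -z, -y, ax) if x > 0 else ('-x', z, -y, ax)
    elif ay >= az:
        face, sc, tc, ma = ('+y', x, z, ay) if y > 0 else ('-y', x, -z, ay)
    else:
        face, sc, tc, ma = ('+z', x, -y, az) if z > 0 else ('-z', -x, -y, az)
    # Map the in-face coordinates from [-1, 1] to [0, 1].
    return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)
```

The dominant axis of the direction vector selects the face; the two remaining components, divided by the dominant magnitude, give the in-face coordinates.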
Naturally, high-quality tests have to be run on high-performance setups, but not all tests require such resolutions.

Figure 1: Fisheye projection illustrating rays and final pixels
Figure 2: GPU rasterizing
Figure 3: Six images forming a cube with a camera placed in the center
Figure 4: Pixels from images are projected onto a sphere

Step two

In essence, the task at hand is to simulate the lens distortion of the camera and create the image that appears on the sensor itself. The resolution of this image matches the resolution of the sensor. The data used to create the image is obtained from the one to six images generated in step one. The GPU is used to calculate the characteristics of the 3D rays that correspond to the 2D pixels of these images (Figure 4). Further characteristics of the simulated camera (exposure time, tonemapping, etc.) are also added in this step. These are needed to create a simulated sensor image that is as close as possible to the image the real sensor would provide to the self-driving system in the real world. Only through a high correlation can simulation truly be a viable tool for the development and validation of self-driving systems.

The Simulated Image

Following these steps, we can create a virtual representation of the image that a fisheye camera would provide to the self-driving system. The robustness of this solution comes from restating the problem in a way that allows GPUs to compute the simulation effectively without relying on niche technologies. The method is also extremely precise, allowing for pixel-perfect deterministic rendering: each scene is calculated and rendered in exactly the same way every time it is loaded. If simulators are to be used as a platform not only for testing but also for validation, they must be the closest possible representation of the real world in every regard. The example above clearly shows how the inherent limitations of game engines prevent them from serving as a reliable platform for this. Fisheye cameras are an important element of a vision-first self-driving setup, as they can easily create a wide (or ultra-wide) field of view around the vehicle.
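Step two runs the projection in reverse: for every sensor pixel, the 3D ray along which the step-one images are sampled must be recovered. A hedged sketch, again assuming the equidistant model r = f·θ (the article does not state the actual lens model used):

```python
import math

def pixel_to_ray(u, v, f, cx, cy):
    """Invert the equidistant fisheye model r = f * theta: recover the
    unit 3D ray (+z forward) that lands on sensor pixel (u, v)."""
    dx, dy = u - cx, v - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0, 1.0)   # principal point looks straight down the axis
    theta = r / f                # equidistant model: theta = r / f
    s = math.sin(theta) / r      # scale the in-plane offset onto the unit sphere
    return (dx * s, dy * s, math.cos(theta))
```

Sampling the step-one cube-map images along each recovered ray, then applying exposure and tonemapping, yields the simulated sensor image.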
Without the ability to properly simulate these cameras, there will be limits to the correlation between the real world and the simulated test environment. Beyond this highly technical difficulty and the required level of physical fidelity, the limitations discussed above regarding the flexibility and scalability of the simulator and the efficient use of hardware are also important factors. Building on our experience with self-driving technology and with a game-engine-based simulator in the past, we strongly believe that future simulators for autonomous technology testing and verification must be purpose-built.

Zoltán Hortsin is a self-trained game engine developer and a specialist in the OpenGL ES 2 and 3 and Vulkan 3D APIs who originally studied dentistry at Semmelweis University, Budapest. After discontinuing his university studies during his final semester, he joined AImotive's predecessor, Kishonti Informatics Ltd, where he served as a team leader working on the company's mobile GPU benchmarks. Zoltán is the technological mentor of AImotive's internal simulation engine development team. His favorite area of research is real-time global illumination. Zoltán is a chocolate connoisseur, and spends his free time reading up on paleontology and collecting fossils.

To read more about AImotive visit aimotive.com
More informationThere we are; that's got the 3D screen and mouse sorted out.
Introduction to 3D To all intents and purposes, the world we live in is three dimensional. Therefore, if we want to construct a realistic computer model of it, the model should be three dimensional as
More informationTHE AUSTRALIAN NATIONAL UNIVERSITY Final Examinations (Semester 2) COMP4610/COMP6461 (Computer Graphics) Final Exam
THE AUSTRALIAN NATIONAL UNIVERSITY Final Examinations (Semester 2) 2015 COMP4610/COMP6461 (Computer Graphics) Final Exam Writing Period: 3 hours duration Study Period: 15 minutes duration. During this
More informationArea View Kit DC AVK. Features. Applications
The Area View Kit (AVK) consists of 4 HDR CMOS cameras, an automotive Power-over-Ethernet (PoE) switch and the necessary accessories. It allows to define the right camera positions over the desk and at
More informationIntroduction to Computer Graphics. Knowledge basic concepts 2D and 3D computer graphics
Introduction to Computer Graphics Knowledge basic concepts 2D and 3D computer graphics 1 Introduction 2 Basic math 3 2D transformations 4 3D transformations 5 Viewing 6 Primitives 7 Geometry 8 Shading
More informationMany Regions, Many Offices, Many Archives: An Office 365 Migration Story CASE STUDY
Many Regions, Many Offices, Many Archives: An Office 365 Migration Story CASE STUDY Making a Company s World a Smaller, Simpler Place Summary INDUSTRY Multi-national construction and infrastructure services
More informationMercury Mission Systems BuildSAFE Graphics Suite Multicore Software Renderer Scott Engle Director of Business Development
Mercury Mission Systems BuildSAFE Graphics Suite Multicore Software Renderer Scott Engle Director of Business Development Mercury acquires Richland Technologies to compliment MMSI Mercury Mission Systems
More informationWorking with your Camera
Topic 2 Introduction To Lenses Learning Outcomes By the end of this topic you will have a basic understanding of what lenses you need for specific types of shot. You will also be able to distinguish between
More informationRange Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation. Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical
More informationSpecifying Storage Servers for IP security applications
Specifying Storage Servers for IP security applications The migration of security systems from analogue to digital IP based solutions has created a large demand for storage servers high performance PCs
More information521493S Computer Graphics. Exercise 3
521493S Computer Graphics Exercise 3 Question 3.1 Most graphics systems and APIs use the simple lighting and reflection models that we introduced for polygon rendering. Describe the ways in which each
More informationAUGMENTED REALITY AS A METHOD FOR EXPANDED PRESENTATION OF OBJECTS OF DIGITIZED HERITAGE. Alexander Kolev, Dimo Dimov
Serdica J. Computing 8 (2014), No 4, 355 362 Serdica Journal of Computing Bulgarian Academy of Sciences Institute of Mathematics and Informatics AUGMENTED REALITY AS A METHOD FOR EXPANDED PRESENTATION
More informationMobile Point Fusion. Real-time 3d surface reconstruction out of depth images on a mobile platform
Mobile Point Fusion Real-time 3d surface reconstruction out of depth images on a mobile platform Aaron Wetzler Presenting: Daniel Ben-Hoda Supervisors: Prof. Ron Kimmel Gal Kamar Yaron Honen Supported
More informationAutodesk Fusion 360: Render. Overview
Overview Rendering is the process of generating an image by combining geometry, camera, texture, lighting and shading (also called materials) information using a computer program. Before an image can be
More informationBe sure to always check the camera is properly functioning, is properly positioned and securely mounted.
Please read all of the installation instructions carefully before installing the product. Improper installation will void manufacturer s warranty. The installation instructions do not apply to all types
More informationKanban Workshop 2 Days
Kanban Workshop 2 Days Kanban methods have increased in popularity. Going beyond the manufacturing origins, more and more teams in information technology are adopting the practices. Kanban methods go beyond
More informationVulkan and Animation 3/13/ &height=285&playerId=
https://media.oregonstate.edu/id/0_q2qgt47o?width= 400&height=285&playerId=22119142 Vulkan and Animation Natasha A. Anisimova (Particle systems in Vulkan) Intel Game Dev The Loop Vulkan Cookbook https://software.intel.com/en-us/articles/using-vulkan-graphics-api-to-render-acloud-of-animated-particles-in-stardust-application
More informationREAL-TIME ROAD SIGNS RECOGNITION USING MOBILE GPU
High-Performance Сomputing REAL-TIME ROAD SIGNS RECOGNITION USING MOBILE GPU P.Y. Yakimov Samara National Research University, Samara, Russia Abstract. This article shows an effective implementation of
More informationVisible and Long-Wave Infrared Image Fusion Schemes for Situational. Awareness
Visible and Long-Wave Infrared Image Fusion Schemes for Situational Awareness Multi-Dimensional Digital Signal Processing Literature Survey Nathaniel Walker The University of Texas at Austin nathaniel.walker@baesystems.com
More informationExtreme automation of today s technological marvel - connected cars
VIEW POINT Extreme automation of today s technological marvel - connected cars - Sandhya Jeevan Rao Senior Project Manager Abstract Going by Gartner s findings which suggests that 25 billion connected
More informationDigital Image Processing Lectures 1 & 2
Lectures 1 & 2, Professor Department of Electrical and Computer Engineering Colorado State University Spring 2013 Introduction to DIP The primary interest in transmitting and handling images in digital
More informationAI in The Data AI Centre in the Data Centre
AI in The Data AI Centre in the Data Centre Walter Van Hoolst Technology Architect HPE Nimble Storage Walter Van Hoolst Technology Architect HPE Nimble Storage April 24, 2018 Walter.vanhoolst@hpe.com @WHoolst
More information