
BENCHMARK | January 2019 issue
THE INTERNATIONAL MAGAZINE FOR ENGINEERING DESIGNERS & ANALYSTS FROM NAFEMS

In this issue:
- Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators
- A Guide to the Internet of Things
- Simulation of Complex Brain Surgery with a Personalized Brain Model
- Learn How to See
- Prediction of Clothing Pressure Distribution on the Human Body for Wearable Health Monitoring
- What is Uncertainty Quantification (UQ)?
- Efficient Preparation of Quality Simulation Models - An Event Summary
- Excel for Engineers and other STEM Professionals

Simulation Limited: How Sensor Simulation for Self-driving Vehicles is Limited by Game Engine Based Simulators
Zoltán Hortsin, AImotive

Simulating the images of the unique cameras used for self-driving was a driving force behind the development of a new engine for a simulator designed for autonomous vehicle development.

Over the last 18 months, the importance of simulation in the development of autonomous vehicle technologies has become widely accepted. Industry experts and analysts alike are claiming that enormous distances must be covered by self-driving systems to achieve safe operation in varied road conditions and environments. Meanwhile, certain weather conditions and traffic scenarios are extremely rare, resulting in limited test opportunities. The only way to cover the distances and reach the diversity required for the safe operation of highly automated vehicles is to test these systems and their modules in virtual environments. The demands of simulation for self-driving cars are extremely high, and not all simulators are created equal.

Simulation for Self-driving

Simulators for autonomous technologies must be comprehensive and robust tools, offering at the least:

1. A diversity of maps, environments, conditions and driving cultures.
2. Repeatability of tests and scenarios for the continuous development of safety-critical systems.
3. Ready access for self-driving developers and engineers to accelerate iteration times and development loops.

However, these requirements only scratch the surface. The demands of self-driving are unique; as a result, only a purpose-built virtualization environment can be a full solution. Several industry stakeholders have built proprietary solutions based on game engines. While game-engine-rendered simulators can provide the characteristics listed above, they do not solve all the challenges of self-driving simulation. Beyond these basic demands, a true self-driving simulator has to offer more:

1. The utmost level of physical realism in vehicle and road models, weather conditions, and sensor simulation.
2. Pixel-precise deterministic rendering to ensure that minor differences in simulated environments do not affect the results of repeated tests.
3. The efficient use of hardware resources, including the ability to run on any system from laptops to servers, or to utilize the performance of multiple CPUs and GPUs.

The Limitations of Game Engines

These more specific demands cannot be answered efficiently by game-engine-based simulators. Their focus is inherently different. A game engine is designed with a commercial end-user in mind and is built to utilize average hardware setups while being optimized to offer the best game performance. Furthermore, it is often 3D and graphics artists who create the final visual effect to ensure a spectacular world for the consumer, rather than a physically correct environment.


AImotive encountered several problems while using a game-engine-based simulator. One of the most notable was that game engines are not prepared to simulate the images of unusual sensors such as ultra-wide field-of-view fisheye lenses or narrow-view long-range cameras. This is fundamentally because such views are almost never used in games. As a result, the previous iteration of our simulator contained custom modifications to the engine. However, as these were not organic elements of the code, they were also a bottleneck. On the one hand, they affected the performance of the simulator. On the other hand, certain effects built into the game engine could not be used on intermediary images, only on the final render, which resulted in less realism. Of these more unusual sensors, ultra-wide field-of-view fisheye cameras were the most exciting challenge. The following sections outline the approach we took to overcome these difficulties.

Synthesizing Camera Images

A camera's operation can be divided into two parts: the optics and the sensor. Rays of light move through the lens and arrive at a given pixel of the sensor. When simulating camera images, it is this process that has to be recreated. There are several mathematical models for this projection, the commonality between them being that they all produce a 2D pixel position from 3D data. However, when discussing images taken with ultra-wide-angle lenses, further difficulties arise. We use fisheye lenses to cover ultra-wide fields of view; these do not create rectilinear images but exhibit barrel distortion. As a result, projecting them onto a two-dimensional plane is a slightly more complex process, as the distortion must be accounted for. Figure 1 illustrates how this distortion is mapped to a 2D plane.

Simulating a Fisheye Image

The most obvious way to simulate such an image would be to trace the exact paths of the rays of light as they move through a simulated lens and onto the sensor. However, this approach relies on ray tracing, a technology currently only available on a few select GPUs. Most GPUs employ rasterization for image synthesis (Figure 2), a technique that does not allow for a robust fisheye projection. While there are certain workarounds that make the projection possible, the nature of these solutions means that the projection will not be robust; rendering may be incorrect, or the performance of the engine may be affected. To find a solution, the problem has to be re-examined.

Reexamining the Problem

To achieve a robust projection, the task at hand has to be divided into two parts. In the first step, data for the rays of light reaching the lens is generated. In the second step, the corresponding data for the rays that reach the sensor must be found. Naturally, the high distortion of fisheye lenses causes difficulty in the projection.

Step one

This is the most demanding part of the process, and it relies most heavily on the rasterization performance of the GPU. Based on the calculated location of the sensor, images must be generated that cover the area of space from which light can enter the lens. This can be a single image or a total of six, depending on the characteristics of the simulated camera, as shown in Figure 3. The hardware demands of the process are heavily dependent on the quality of these images, and to achieve the highest degree of realism, high-quality images and physically based High Dynamic Range (HDR) rendering must be employed.

As several cameras and other sensors are simulated concurrently by the simulator, this can lead to huge demands on memory and compute capacity. Hence the ability to efficiently utilize multiple CPUs and GPUs when needed is vital. However, to ensure flexibility, the system should also be able to run on more everyday systems such as desktop PCs. This allows developers and self-driving testers to have proper access to the simulator and to use it as a development tool. Naturally, high-quality tests have to be run on high-performance setups, but not all tests require such resolution. A simplified sketch of the fisheye projection model that underlies both steps follows the figure captions below.

Figure 1: Fisheye projection illustrating rays and final pixels
Figure 2: GPU rasterization
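To make the projection concrete, the sketch below implements one common fisheye model, the equidistant (f-theta) mapping, in which the radial distance of a pixel from the image centre is proportional to the angle between the incoming ray and the optical axis. This is a minimal CPU illustration rather than AImotive's production lens model; the function name, focal length, sensor resolution and choice of distortion model are assumptions made for the example.

```cpp
// Minimal sketch of an equidistant ("f-theta") fisheye projection: a 3D ray
// direction in camera space is mapped to a 2D pixel position on the sensor.
// The focal length, resolution and distortion model are illustrative
// assumptions; real lenses are described by calibrated models.
#include <cmath>
#include <cstdio>

struct Vec3  { double x, y, z; };
struct Pixel { double u, v; };

constexpr double kPi = 3.14159265358979323846;

// Equidistant model: radial distance on the sensor is proportional to the
// angle theta between the ray and the optical axis (r = f * theta), which
// is what produces the characteristic barrel distortion.
Pixel projectEquidistant(const Vec3& dir, double focalPx, double cx, double cy)
{
    const double len   = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    const double theta = std::acos(dir.z / len);    // angle from the optical axis (+z)
    const double phi   = std::atan2(dir.y, dir.x);  // azimuth around the axis
    const double r     = focalPx * theta;           // radial distance in pixels
    return { cx + r * std::cos(phi), cy + r * std::sin(phi) };
}

int main()
{
    // Hypothetical 1280 x 960 sensor behind a 190-degree fisheye lens:
    // half the image height corresponds to half the field of view.
    const double cx = 640.0, cy = 480.0;
    const double halfFovRad = 0.5 * 190.0 * kPi / 180.0;
    const double focalPx    = 480.0 / halfFovRad;

    const Vec3  ray{ 0.7, 0.1, 0.2 };               // a ray well off the optical axis
    const Pixel p = projectEquidistant(ray, focalPx, cx, cy);
    std::printf("pixel: (%.1f, %.1f)\n", p.u, p.v);
    return 0;
}
```

Rasterization hardware has no native notion of such a non-linear mapping, which is why the two-step approach first renders conventional rectilinear images and only afterwards applies the lens model.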

Figure 3: Six images forming a cube with a camera placed in the center
Figure 4: Pixels from images are projected onto a sphere

Step two

In essence, the task at hand is to simulate the lens distortion of the camera and create the image that appears on the sensor itself. The resolution of this image matches the resolution of the sensor. The data used to create the image is obtained from the one to six images generated in step one. The GPU is used to calculate the characteristics of the 3D rays that correspond to the 2D pixels of these images (Figure 4). Further characteristics of the simulated camera (exposure time, tone mapping, etc.) are also added in this step. These are needed to create a simulated sensor image that is as close as possible to the image the real sensor would provide to the self-driving system in the real world. Only through a high correlation can simulation truly be a viable tool for the development and validation of self-driving systems. A minimal sketch of this per-pixel remapping is given at the end of this article.

The Simulated Image

Following these steps, we can create a virtual representation of the image that a fisheye camera would provide to the self-driving system. The robustness of this solution comes from restating the problem in a way that allows GPUs to compute the simulation effectively without relying on niche technologies. The method is also extremely precise, allowing for pixel-perfect deterministic rendering: each scene will be calculated and rendered in exactly the same way, every time it is loaded.

If simulators are to be used as a platform not only for testing but also for validation, they must be the closest possible representation of the real world in every regard. The example given above clearly shows how the inherent limitations of game engines prevent them from serving as a reliable platform for this. Fisheye cameras are an important element of a vision-first self-driving setup, as they can easily be used to create a wide (or ultra-wide) field of view around the vehicle. Without the ability to simulate these properly, there will be limits to the correlation between the real world and the simulated test environment. Beyond this highly technical difficulty and the required level of physical fidelity, the limitations discussed above regarding the flexibility and scalability of the simulator and the efficient use of hardware are also important factors. Building on our experience with self-driving technology and with a game-engine-based simulator in the past, we strongly believe that future simulators for autonomous technology testing and verification must be purpose-built.

Zoltán Hortsin is a self-trained game engine developer and a specialist in the OpenGL ES 2 and 3 and Vulkan 3D APIs who originally studied dentistry at Semmelweis University, Budapest. After discontinuing his university studies during his final semester, he joined AImotive's predecessor, Kishonti Informatics Ltd, where he served as a team leader working on the company's mobile GPU benchmarks. Zoltán is the technological mentor of AImotive's internal simulation engine development team. His favorite area of research is real-time global illumination. Zoltán is a chocolate connoisseur, and spends his free time reading up on paleontology and collecting fossils.

To read more about AImotive, visit aimotive.com
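To close with something concrete, the sketch below shows one way the step-two remapping described above can be expressed on the CPU: for every sensor pixel, the 3D ray through the lens is recovered (here with the inverse of the equidistant model used earlier), the cube map produced in step one is sampled, and simple camera characteristics are applied. The face resolution, nearest-texel sampling, exposure value and Reinhard-style tone curve are assumptions for this illustration; they stand in for the filtered GPU sampling and calibrated camera models a production simulator would use.

```cpp
// Minimal CPU sketch of "step two": for every sensor pixel, recover the 3D ray
// through an (assumed) equidistant fisheye lens, sample the cube map rendered
// in step one, then apply exposure and a simple tone-mapping curve.
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };
struct RGB  { double r, g, b; };

constexpr int kFaceRes = 512;              // assumed resolution of each cube face
using Face    = std::vector<RGB>;          // kFaceRes * kFaceRes HDR texels
using CubeMap = std::array<Face, 6>;       // faces +X, -X, +Y, -Y, +Z, -Z from step one

// Inverse equidistant model: pixel offset from the principal point -> 3D ray.
Vec3 pixelToRay(double u, double v, double cx, double cy, double focalPx)
{
    const double dx = u - cx, dy = v - cy;
    const double theta = std::sqrt(dx * dx + dy * dy) / focalPx;   // r = f * theta, inverted
    const double phi   = std::atan2(dy, dx);
    return { std::sin(theta) * std::cos(phi),
             std::sin(theta) * std::sin(phi),
             std::cos(theta) };
}

// Choose the cube face from the dominant axis of the ray, then take the
// nearest texel. A production renderer would use filtered GPU sampling.
RGB sampleCube(const CubeMap& cube, const Vec3& d)
{
    const double ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    int face; double sc, tc, ma;
    if (ax >= ay && ax >= az) { face = d.x > 0 ? 0 : 1; ma = ax;   // +X / -X
                                sc = (d.x > 0 ? -d.z : d.z); tc = -d.y; }
    else if (ay >= az)        { face = d.y > 0 ? 2 : 3; ma = ay;   // +Y / -Y
                                sc = d.x; tc = (d.y > 0 ? d.z : -d.z); }
    else                      { face = d.z > 0 ? 4 : 5; ma = az;   // +Z / -Z
                                sc = (d.z > 0 ? d.x : -d.x); tc = -d.y; }
    const int s = std::clamp(static_cast<int>((sc / ma * 0.5 + 0.5) * kFaceRes), 0, kFaceRes - 1);
    const int t = std::clamp(static_cast<int>((tc / ma * 0.5 + 0.5) * kFaceRes), 0, kFaceRes - 1);
    return cube[face][t * kFaceRes + s];
}

// Exposure scaling followed by a Reinhard-style tone curve (HDR -> displayable).
RGB applyCamera(RGB hdr, double exposure)
{
    auto tm = [exposure](double c) { c *= exposure; return c / (1.0 + c); };
    return { tm(hdr.r), tm(hdr.g), tm(hdr.b) };
}

// Synthesize the simulated sensor image at the sensor's native resolution.
std::vector<RGB> renderSensorImage(const CubeMap& cube, int width, int height,
                                   double focalPx, double exposure)
{
    std::vector<RGB> image(width * height);
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u) {
            const Vec3 ray = pixelToRay(u + 0.5, v + 0.5, width * 0.5, height * 0.5, focalPx);
            image[v * width + u] = applyCamera(sampleCube(cube, ray), exposure);
        }
    return image;
}
```

In the simulator itself this remapping runs on the GPU, and several such camera models execute concurrently, which is where the memory and compute demands discussed earlier come from.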

