Edge Detection. Whitepaper


Copyright Imagination Technologies Limited. All Rights Reserved. This publication contains proprietary information which is subject to change without notice and is supplied 'as is' without warranty of any kind. Imagination Technologies and the Imagination Technologies logo are trademarks or registered trademarks of Imagination Technologies Limited. All other logos, products, trademarks and registered trademarks are the property of their respective owners.

Filename: Edge Detection
Version: PowerVR SDK REL_17.1@4658063a External Issue
Issue Date: 07 Apr 2017
Author: Imagination Technologies Limited

Contents

1. Introduction
2. Techniques Used
   2.1. Colour Based
   2.2. Depth Based
   2.3. Normal Based
        2.3.1. Pre-Processing
        2.3.2. Post-Processing
   2.4. Object Based
3. Implementation
   3.1. Repurposing the Alpha Channel
   3.2. Simplifying the Checks
4. Future Work
5. Contact Details

List of Figures

Figure 1. Colour based edge detection
Figure 2. Depth based edge detection
Figure 3. Normal based edge detection
Figure 4. Object ID based edge detection

1. Introduction

Edge detection is a process used in computer graphics to determine the borders between different objects or areas in an image. Its main uses are in computer vision and image processing, generally to help locate individual objects. There are many edge detection techniques available, ranging from those that run in real-time to complex post-processing methods only suitable for offline computation.

This paper details a simple technique used to detect edges on a mobile platform, as implemented in the EdgeDetection Training Course, and discusses some alternative approaches. The purpose of detecting edges in this case is simply to produce a visual effect, rather than to gain an understanding of the image's contents.

2. Techniques Used

There are many different ways of detecting edges in a 3D scene and, with the exception of simple normal-based edge detection, they all require an initial render of the scene to be stored as a 2D texture for post-processing in a second render. Post-processing techniques are covered in depth in the white paper entitled "Post Processing Effects" that is packaged with our SDK. For the EdgeDetection Training Course it was concluded that object based edge detection was the best approach for PowerVR hardware; however, there are other techniques that could have been used to find edges, as detailed next.

2.1. Colour Based

Colour based edge detection is one of the more standard methods of detecting edges and the only one available for a traditional 2D image (Figure 1). This technique looks for steep changes in colour between areas of pixels, where an edge exists at the boundary between these areas. As well as locating edges between objects, colour based edge detection will detect internal edges where parts of an object are different colours. This sort of effect can make a scene look quite cartoon-like by highlighting all areas of differing colour in a scene. However, it also means edges will not be detected if two objects are a similar colour, which can cause missing lines in the effect. This technique can also cause problems when textures with a lot of variation are used, often producing excessive line detection between many segments. If edges are highlighted with a solid coloured line, as in the training course, the texture can become lost in the highlighting.

Figure 1. Colour based edge detection

2.2. Depth Based

Using depth to determine edges is a more standard approach in 3D scenes, as the actual colour is ignored, which means textures and lighting do not affect the outcome (Figure 2). It works in a similar way to colour detection: steep changes in values within the depth buffer are used to detect edges. This technique can detect some internal edges, such as where a part of an object juts out towards the camera, and generally presents fewer visual artefacts than colour based edge detection. The main problem with this method is that edges with little depth variation, such as where a wall meets the ground, are not detected because there is no change in depth value, which is a fairly frequent occurrence in many 3D scenes. To get around this, normal-based edge detection is often employed to mop up the remaining edges that the depth based approach has missed.
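Colour and depth based detection share the same core operation: thresholding the difference between a pixel's value and its neighbours'. The following is a minimal sketch of that idea (an invented helper and threshold, not SDK code); `values` would hold per-pixel luminance for colour based detection, or depth-buffer samples for depth based detection.

```python
# Illustrative sketch (assumed helper, not SDK code) of the shared core of
# colour and depth based edge detection: flag a pixel as an edge when the
# value difference to a neighbour exceeds a threshold.

def detect_edges(values, threshold):
    """Return a 2D boolean map: True where a pixel differs from its right
    or lower neighbour by more than `threshold`."""
    h, w = len(values), len(values[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            right = values[y][x + 1] if x + 1 < w else values[y][x]
            below = values[y + 1][x] if y + 1 < h else values[y][x]
            if (abs(values[y][x] - right) > threshold
                    or abs(values[y][x] - below) > threshold):
                edges[y][x] = True
    return edges

# Two flat regions meeting in the middle: only the boundary column is
# flagged as an edge.
depth_buffer = [[0.2, 0.2, 0.8, 0.8],
                [0.2, 0.2, 0.8, 0.8]]
print(detect_edges(depth_buffer, 0.3))
```

The threshold is exactly the weakness described above: set it too low and texture detail triggers false edges; set it too high and low-contrast or low-depth-variation boundaries are missed.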

Figure 2. Depth based edge detection

2.3. Normal Based

There are two types of normal based edge detection (Figure 3): the first is a cheap but rough pre-processing technique, demonstrated in the Mouse demo in the PowerVR SDK; the second is a post-processing technique which produces a cleaner result.

Figure 3. Normal based edge detection

2.3.1. Pre-Processing

This simple approach checks for any normal that is at roughly right angles to the camera, which should draw an edge wherever a front-facing polygon meets a back-facing polygon. It is quite a rough technique and can have problems, but it provides simple and very fast edge detection, and its speed makes it a good choice for some simpler scenes.

2.3.2. Post-Processing

Another method looks for sudden changes in the normal values at neighbouring pixel locations. By looking at pixels that neighbour others with a very different normal, edges can be found between adjoining objects that would otherwise be missed by depth-based detection. This method detects most edges but misses the ones that depth detection picks up instead, such as between two objects with similar or identical normals at different depths in the scene. It is therefore ideally used in tandem with depth based edge detection.
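The post-processing variant can be illustrated with a dot-product test between neighbouring normals. This is an assumed sketch, not SDK code; for unit vectors the dot product equals the cosine of the angle between them, and the 0.7 cut-off is an arbitrary assumption.

```python
# Hedged sketch (invented example, not SDK code) of post-processing
# normal-based detection: two unit normals are compared via their dot
# product, and a large angle (small cosine) marks an edge.

def normal_edge(n_a, n_b, cos_threshold=0.7):
    """True if the angle between two unit normals is large enough to
    count as an edge between neighbouring pixels."""
    dot = sum(a * b for a, b in zip(n_a, n_b))
    return dot < cos_threshold

floor = (0.0, 1.0, 0.0)   # upward-facing surface
wall = (1.0, 0.0, 0.0)    # sideways-facing surface
print(normal_edge(floor, wall))   # prints: True  (perpendicular normals)
print(normal_edge(floor, floor))  # prints: False (identical normals)
```

Note how this catches the wall-meets-ground case that defeats depth based detection, while two parallel surfaces at different depths would pass the test unflagged, which is why the two techniques complement each other.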

2.4. Object Based

Object based edge detection is novel in that it reverses the standard use of edge detection (Figure 4). Generally, edges are found first and used to locate objects, not the other way around. However, in a 3D scene the objects are known at the start; the edges are not. To find the edges from this data, each object is assigned a unique ID and then drawn with this value preserved for each pixel. By then checking wherever the ID value differs from a neighbouring pixel, without requiring any sort of threshold, an edge can be found along the silhouette of the object.

This method will perfectly highlight the edges of any object that is assigned an ID, as long as the ID values remain unique, making it one of the most accurate single techniques available. However, internal edges that would be detected by other methods are completely missed, although this can be worked around by manually assigning a different ID value to portions of an object if necessary.

Because of the accuracy of edge detection from this data, this technique was chosen for the EdgeDetection Training Course. Other methods were tried, but they all proved more expensive to use and less reliable. A combination of normal and depth techniques would be more accurate, but would be too resource-intensive for current generation mobile platforms.

Figure 4. Object ID based edge detection
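The comparison at the heart of the technique can be sketched as follows (invented scene and helper, not the Training Course shader). Unlike the earlier techniques, a plain inequality suffices; no threshold is involved, because any ID difference is an edge by definition.

```python
# Illustrative sketch (assumed code) of object based edge detection:
# per-pixel object IDs are compared with the right and lower neighbours,
# and any difference is an edge.

def id_edges(ids):
    h, w = len(ids), len(ids[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and ids[y][x] != ids[y][x + 1]:
                edges[y][x] = True
            if y + 1 < h and ids[y][x] != ids[y + 1][x]:
                edges[y][x] = True
    return edges

# Object 1 rendered in front of background 0: the silhouette is traced
# exactly, but any edges internal to object 1 would go undetected.
scene = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
for row in id_edges(scene):
    print(row)
```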

3. Implementation

To implement the edge detection, a method similar to using a convolution matrix is used (as described in the post-processing recommendations document). First, the entire scene is rendered into a texture, which is then drawn to a flat plane covering the entire screen. This plane is processed using a post-process fragment shader that examines each texel from the first render and checks the ID values of neighbouring texels for differences. When a difference is found, the fragment shader recognises it as an edge and highlights it according to the user's choice.

3.1. Repurposing the Alpha Channel

To implement the object based detection technique, a unique ID for each object needs to be stored. There are two options for doing this: either store it in a parallel texture, or store it in the texture that is already being drawn to. Storing this information in the rendered texture provides the best performance, as the number of render passes is kept to a minimum (one to render the scene to a texture and a second to perform the post-processing) and the post-processing render pass only has to read from a single texture.

The best place to store the information for the Training Course was the alpha channel, as this leaves the RGB channels free to store colour information. By putting the IDs into a single channel, the number of available IDs is limited by that channel's precision. For example, a typical 8-bit channel allows only 256 different ID values, which can cause problems with highly complex scenes that contain more objects than this. This can be somewhat countered by intelligent allocation of IDs, but doing so could slow down the program.

Although there are significant benefits to using the alpha channel in this way, there are some drawbacks. A major problem is that transparency is effectively disabled.
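As a concrete illustration of this transparency trade-off, a reduced-precision ID and alpha value can share a single 8-bit channel using only arithmetic, since GLSL ES offers no bit manipulation. This is a hypothetical sketch; the 4-bit/4-bit split and the function names are assumptions for illustration, not Training Course code.

```python
# Hypothetical sketch of single-channel packing: an object ID and an
# alpha value share one 8-bit channel arithmetically. The 4-bit/4-bit
# split is an assumption, not the Training Course's actual scheme.

def pack(object_id, alpha):
    """Pack a 4-bit ID and a 4-bit alpha into one 0-255 channel value."""
    assert 0 <= object_id < 16 and 0 <= alpha < 16
    return object_id * 16 + alpha      # ID in the high bits, alpha low

def unpack(channel):
    object_id = channel // 16          # floor(channel / 16.0) in GLSL ES
    alpha = channel % 16               # mod(channel, 16.0) in GLSL ES
    return object_id, alpha

packed = pack(5, 12)
print(packed, unpack(packed))          # prints: 92 (5, 12)
```

The precision cost is immediately visible: only 16 object IDs and 16 alpha levels remain, which is the reduction in precision the packing approach trades for transparency.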
There are ways of packing the ID and alpha into the same channel to circumvent this but, without the availability of bit manipulation in GLSL ES, this approach can significantly reduce precision. Despite this drawback, the technique works very well and allows accurate edge detection at a solid frame rate, even on lower end PowerVR graphics cores. The limitations of this approach are in most cases fairly insignificant and are largely outweighed by the gains.

3.2. Simplifying the Checks

Texture lookups are expensive, so limiting their number is a good way to minimise memory bandwidth. A traditional matrix check such as a convolution will look at several surrounding texels to get a result: typically 4, 8 or even more lookups, as well as the lookup for the central texel. Diagonals can be used for high accuracy, but a sufficient result can be achieved by simply checking four directions (up, down, left and right). Furthermore, looking in all four directions is not necessary, as the boundary between vertically neighbouring fragments would be checked twice, as would that between horizontal neighbours. For this reason, the EdgeDetection Training Course only looks at the texels above and to the right of the fragment being processed, which results in edges being drawn at the bottom left of objects. As this test is performed for all objects in the scene, the result is still convincing and is less intensive than performing the two additional texture lookups that four directions would require.

When performing colour or depth edge detection, a check often needs to be made over multiple pixels in each direction, using a threshold to account for shallower gradients where a single pixel step may not be enough to find an edge. The object ID approach avoids the need for a threshold, as the difference between the ID values of neighbouring fragments is instantly recognisable.
With all of the simplifications mentioned above, each fragment needs to look up just three texels, taking colour data from one and the object ID in the alpha channel from all three. This results in an edge detection effect that runs at interactive frame rates with ease, even on the lower powered PowerVR graphics cores.
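The three-lookup scheme can be sketched in full; the following is an assumed Python rendition of the logic only, since the real check runs in a fragment shader.

```python
# Assumed Python rendition of the simplified check (not the actual
# fragment shader). Each fragment reads just three texels: itself, the
# texel above and the texel to the right, so detected edges land on the
# bottom-left side of each object.

def edge_at(ids, x, y):
    """True if this fragment's ID differs from the texel above or to the
    right: three lookups instead of the five a full cross would need."""
    h, w = len(ids), len(ids[0])
    here = ids[y][x]
    above = ids[y - 1][x] if y > 0 else here   # y increases downwards
    right = ids[y][x + 1] if x + 1 < w else here
    return here != above or here != right

# A single object against the background: edges appear along its bottom
# and left sides, as described above.
scene = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
for y in range(3):
    print([edge_at(scene, x, y) for x in range(3)])
```

Although each boundary is only detected from one side, every boundary in the scene is crossed by some fragment's upward or rightward lookup, which is why the halved sampling still outlines every object.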

4. Future Work

Although the object based edge detection technique used in the training course is a robust approach, OpenGL ES 2.0 only allows information to be stored in the RGBA channels of the rendered texture, which prevents additional information being passed to the post-processing render. With the introduction of Multiple Render Targets (MRTs) in later graphics APIs, it will be possible to write to additional channels so that more information can be given to the post-processing render pass. This would not only allow alpha information to be used during post-processing with the object based technique, but would also allow other techniques to be combined with this one without introducing additional render passes.

5. Contact Details

For further support, visit our forum: http://forum.imgtec.com
Or file a ticket in our support system: https://pvrsupport.imgtec.com
To learn more about our PowerVR Graphics SDK and Insider programme, please visit: http://www.powervrinsider.com
For general enquiries, please visit our website: http://imgtec.com/corporate/contactus.asp
