Microsoft Direct3D Devices


Devices 1. Device Types

Hardware Device

The primary device type is the hardware device, which supports hardware-accelerated rasterization and both hardware and software vertex processing. If the computer on which your application is running is equipped with a display adapter that supports Microsoft Direct3D, your application should use it for 3-D operations. Direct3D hardware devices implement all or part of the transformation, lighting, and rasterization modules in hardware. Applications do not access the 3-D card directly; they call Direct3D functions and methods, and Direct3D accesses the hardware through a hardware abstraction layer (HAL). If the computer running your application provides a HAL, the application gains the best performance by using a hardware device.

To specify hardware rasterization and shading, create a Device object using the DeviceType.Hardware constant, as shown in Create a Direct3D Hardware Device.

Note: Hardware devices cannot render to 8-bit render-target surfaces.

Reference Device

Direct3D supports an additional device type called a reference device or reference rasterizer. Unlike a software device, the reference rasterizer supports every Direct3D feature. Because these features are implemented in software for accuracy rather than speed, the results are not very fast. The reference rasterizer makes use of special CPU instructions whenever it can, but it is not intended for retail applications. Use the reference rasterizer only for feature testing or demonstration purposes. To create a reference device, create a Device object using the DeviceType.Reference constant.

Devices 2. Creating a Device

Note: All rendering devices created by a given Microsoft Direct3D object share the same physical resources. Although your application can create multiple rendering devices from a single Direct3D object, the devices share the same hardware, so doing so incurs severe performance penalties.

To create a Direct3D device, your managed code application should first create a PresentParameters object that contains the presentation parameters for the device. The following code example contains a simple initialization using the Device(Int32, DeviceType, Control, CreateFlags, PresentParameters) constructor, which receives a PresentParameters object.

    using Microsoft.DirectX.Direct3D;
    ...
    // Global variables for this project
    Device device = null;

    // Create rendering device
    PresentParameters presentParams = new PresentParameters();
    device = new Device(0, DeviceType.Hardware, this,
                        CreateFlags.SoftwareVertexProcessing, presentParams);

This call also specifies the default adapter, a hardware device, and software vertex processing. Note that a call to create, dispose of, or reset the device should happen only on the thread that runs the window procedure of the focus window. After creating the device, set its state.

Related Topics: Create a Direct3D Device
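The SDK samples commonly fall back to the reference rasterizer when a hardware device cannot be created. The following sketch, which is not part of the original topic, illustrates that pattern under a few assumptions: the helper name CreateDeviceWithFallback is hypothetical, the render window is an ordinary Windows Forms control, and creation failure is detected by catching the Managed DirectX base exception type.

    using Microsoft.DirectX;
    using Microsoft.DirectX.Direct3D;

    // Minimal sketch: prefer a hardware device, fall back to the reference
    // rasterizer (for feature testing only) if hardware creation fails.
    static Device CreateDeviceWithFallback(System.Windows.Forms.Control renderWindow)
    {
        PresentParameters presentParams = new PresentParameters();
        presentParams.Windowed = true;
        presentParams.SwapEffect = SwapEffect.Discard;

        try
        {
            // Default adapter (ordinal 0), hardware rasterization.
            return new Device(0, DeviceType.Hardware, renderWindow,
                              CreateFlags.SoftwareVertexProcessing, presentParams);
        }
        catch (DirectXException)
        {
            // Reference rasterizer: supports every feature, but runs in software and is slow.
            return new Device(0, DeviceType.Reference, renderWindow,
                              CreateFlags.SoftwareVertexProcessing, presentParams);
        }
    }

In a shipping application you would normally report the failure to the user instead of silently falling back, because the reference device is far too slow for retail use.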

Devices 3. Selecting a Device

Applications can query hardware to detect the supported Microsoft Direct3D device types. This section describes the primary tasks involved in enumerating display adapters and selecting Direct3D devices.

An application must perform a series of tasks to select an appropriate Direct3D device. The following steps are intended for a full-screen application; in most cases, a windowed application can skip most of them.

1. Initially, the application enumerates the display adapters on the system. An adapter is a physical piece of hardware; note that a graphics card can contain more than one adapter, as is the case with a dual-head display. For a detailed example of enumerating adapters, see (SDK root)\samples\c#\common\d3denumeration.cs. Applications that are not concerned with multiple-monitor support can disregard this step and use the default adapter.

2. For each adapter, the application enumerates the supported display modes using the Adapters property of the Manager object. The collection of adapters can be traversed with the AdapterListEnumerator object.

3. If required, the application checks for hardware acceleration in each enumerated display mode by calling Manager.CheckDeviceType with the desired device type as the checkType parameter and inspecting the return value. Other parameters of CheckDeviceType provide additional information about the current display and back buffer formats of the device.

4. The application checks for the desired level of functionality for the device on this adapter by using the Device.DeviceCaps property. This property returns a Caps structure that describes the capabilities of the device; devices that do not support the required functionality will not return a valid structure. The device capabilities returned by DeviceCaps are guaranteed to be constant for a device across all display modes verified by CheckDeviceType.

5. Devices can always render to surfaces in the format of an enumerated display mode that the device supports. If the application must render to a surface of a different format, it can call CheckDeviceType. If the device can render to the format, all capabilities returned by DeviceCaps are guaranteed to apply.

6. Lastly, the application can determine whether multisampling techniques, such as full-scene antialiasing, are supported for a render format by using the CheckDeviceMultiSampleType method.

After completing these steps, the application has a list of display modes in which it can operate. The final step is to verify that enough device-accessible memory is available to accommodate the required number of buffers and the chosen antialiasing level. This test is necessary because the memory consumption for a given mode and multisample combination cannot be predicted without verifying it. Moreover, some display adapter architectures might not have a constant amount of device-accessible memory. This means an application should be able to report out-of-video-memory failures when going into full-screen mode. Typically, the application should remove such a full-screen mode from the list of modes offered to the user, or attempt to consume less memory by reducing the number of back buffers or by using a simpler multisampling technique.

A windowed application performs a similar set of tasks:

1. It determines the desktop rectangle covered by the client area of the window.

2. It enumerates adapters, looking for the adapter whose monitor covers the client area. If the client area is covered by more than one adapter, the application can choose to drive each adapter independently, or to drive a single adapter and have Direct3D transfer pixels from one device to another at presentation time. The application can also disregard these two steps and use the default adapter, although this might result in slower operation when the window is placed on a secondary monitor. The default adapter is the first ordinal identified by DeviceCreationParameters.AdapterOrdinal; the adapter ordinal value can also be queried with the GetDeviceCaps method or returned by the AdapterInformation.Adapter property.

3. The application calls CheckDeviceType to determine whether the device can support rendering to a back buffer of the specified format while in desktop mode. The AdapterInformation.CurrentDisplayMode property can be used to determine the desktop display format.

Devices 5. Determining Hardware Support

Microsoft Direct3D provides the following methods to determine hardware support.

CheckDeviceFormat: Used to verify whether a surface format can be used as a texture, whether a surface format can be used as both a texture and a render target, or whether a surface format can be used as a depth-stencil buffer. This method is also used to verify depth buffer format support and depth-stencil buffer format support.

CheckDeviceType: Used to verify a device's ability to perform hardware acceleration, to build a swap chain for presentation, or to render to the current display format.

CheckDepthStencilMatch: Used to verify whether a depth-stencil buffer format is compatible with a render-target format. Before calling this method, the application should call CheckDeviceFormat on both the depth-stencil and render-target formats.
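These checks map onto static methods of the Manager class in Managed DirectX. The following is a minimal sketch, not part of the original topic: it assumes the default adapter, an A8R8G8B8 back buffer, and a D24S8 depth-stencil format, and the helper name IsWindowedModeSupported is hypothetical.

    using Microsoft.DirectX.Direct3D;

    // Sketch: verify that a hardware device on the default adapter can present a
    // windowed A8R8G8B8 back buffer and pair it with a D24S8 depth-stencil buffer.
    static bool IsWindowedModeSupported()
    {
        AdapterInformation adapter = Manager.Adapters.Default;
        Format desktopFormat = adapter.CurrentDisplayMode.Format;

        // Can the device render to this back buffer format on the current desktop mode?
        bool deviceOk = Manager.CheckDeviceType(
            adapter.Adapter, DeviceType.Hardware,
            desktopFormat, Format.A8R8G8B8, true);

        // Per the note above, first verify the depth-stencil format itself...
        bool depthFormatOk = Manager.CheckDeviceFormat(
            adapter.Adapter, DeviceType.Hardware, desktopFormat,
            Usage.DepthStencil, ResourceType.Surface, DepthFormat.D24S8);

        // ...then confirm it is compatible with the render-target format.
        bool depthMatchOk = Manager.CheckDepthStencilMatch(
            adapter.Adapter, DeviceType.Hardware,
            desktopFormat, Format.A8R8G8B8, DepthFormat.D24S8);

        return deviceOk && depthFormatOk && depthMatchOk;
    }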

Devices 6. Processing Vertex Data

The Device class supports vertex processing in both software and hardware. In general, the device capabilities for software and hardware vertex processing are not identical: hardware capabilities vary with the display adapter and driver, while software capabilities are fixed.

The following constants of the CreateFlags enumeration control vertex processing behavior for the hardware and reference devices:

SoftwareVertexProcessing
HardwareVertexProcessing
MixedVertexProcessing

The BehaviorFlags structure can also be used to control behavior through properties that expose these same constant values. Specify one of these flags when calling the Device constructor. The MixedVertexProcessing flag enables the device to perform both software and hardware vertex processing. Only one of these three vertex processing flags may be set for a device at any one time; they are mutually exclusive. Note that the HardwareVertexProcessing flag must be set when creating a pure device (PureDevice).

To avoid maintaining two sets of vertex processing capabilities on a single device, only the hardware vertex processing capabilities can be queried at run time; software vertex processing capabilities are fixed and cannot be queried at run time. Consult the properties of the VertexProcessingCaps structure to determine the hardware vertex processing capabilities of the device.

For software vertex processing, the following VertexProcessingCaps capabilities are supported (all but two of the available capabilities):

SupportsDirectionAllLights
SupportsLocalViewer
SupportsMaterialSource
SupportsPositionAllLights
SupportsTextureGeneration
SupportsTweening

In addition, the following table lists the values that are set for the device capabilities (Caps) in software vertex processing mode.

    Member                   Software vertex processing capability
    MaxActiveLights          Unlimited
    MaxUserClipPlanes        6
    MaxVertexBlendMatrices   4
    MaxStreams               16
    MaxVertexIndex           0xFFFFFFFF

Software vertex processing provides a guaranteed set of vertex processing capabilities, including an unbounded number of lights and full support for programmable vertex shaders. You can toggle between software and hardware vertex processing at any time when using the hardware device, which is the only device type that supports both hardware and software vertex processing. The only requirement is that vertex buffers used for software vertex processing must be allocated in system memory, typically with the SystemMemory constant of the Pool enumeration.

Note: The performance of software vertex processing is comparable to that of hardware vertex processing, which is why it is reasonable to provide both hardware and software (emulated) vertex processing functionality within a single device type. This is not the case for rasterization, where host processors are much slower than specialized graphics hardware, so hardware- and software-emulated rasterization are not both provided within a single device type. Software vertex processing is the only instance of functionality duplicated between the runtime and the hardware (driver) within a single device; all other device capabilities represent potentially variable functionality provided by the driver.
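A common startup pattern is to pick the vertex processing flag from the adapter's capabilities and, when software processing is chosen, to allocate vertex buffers in system memory. The sketch below, not part of the original topic, illustrates this with Manager.GetDeviceCaps and the VertexBuffer constructor; the helper name CreateDeviceAndBuffer is hypothetical, and the use of Usage.SoftwareProcessing is an assumption about how such a buffer would typically be flagged.

    using Microsoft.DirectX.Direct3D;

    // Sketch: choose hardware or software vertex processing from the device caps,
    // then allocate a vertex buffer in a pool that matches that choice.
    static Device CreateDeviceAndBuffer(System.Windows.Forms.Control renderWindow,
                                        out VertexBuffer vertexBuffer)
    {
        Caps caps = Manager.GetDeviceCaps(0, DeviceType.Hardware);
        CreateFlags vertexFlags = caps.DeviceCaps.SupportsHardwareTransformAndLight
            ? CreateFlags.HardwareVertexProcessing
            : CreateFlags.SoftwareVertexProcessing;

        PresentParameters presentParams = new PresentParameters();
        presentParams.Windowed = true;
        presentParams.SwapEffect = SwapEffect.Discard;
        Device device = new Device(0, DeviceType.Hardware, renderWindow,
                                   vertexFlags, presentParams);

        // Vertex buffers used for software vertex processing must be allocated
        // in system memory (Pool.SystemMemory).
        bool software = (vertexFlags == CreateFlags.SoftwareVertexProcessing);
        vertexBuffer = new VertexBuffer(
            typeof(CustomVertex.PositionColored), 128, device,
            software ? Usage.SoftwareProcessing : Usage.None,   // assumed usage flag
            CustomVertex.PositionColored.Format,
            software ? Pool.SystemMemory : Pool.Default);

        return device;
    }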

Devices 7. Device-Supported Primitive Types

You can render the following primitive types from a managed code application with any of the Device rendering methods.

Line Lists

A line list is a list of isolated, straight line segments. Line lists are useful for tasks such as adding sleet or heavy rain to a 3-D scene. Applications create a line list by filling an array of vertices; the number of vertices in a line list must be an even number greater than or equal to two. The following illustration shows a rendered line list.

You can apply materials and textures to a line list. The colors in the material or texture appear only along the lines drawn, not at any point between the lines.

Line Strips

A line strip is a primitive composed of connected line segments. Your application can use line strips to create polygons that are not closed. A closed polygon is a polygon whose last vertex is connected to its first vertex by a line segment. If your application makes polygons based on line strips, the vertices are not guaranteed to be coplanar. The following illustration depicts a rendered line strip.

Point Lists

A point list is a collection of vertices that are rendered as isolated points. Your application can use point lists in 3-D scenes for star fields, or for dotted lines on the surface of a polygon. The following illustration depicts a rendered point list.

Your application can apply materials and textures to a point list. The colors in the material or texture appear only at the points drawn, not anywhere between the points.
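For illustration only (not part of the original topic), the following sketch draws a small point list and a line list with Device.DrawUserPrimitives, using pre-transformed vertices so no camera setup is needed. The helper name DrawPointsAndLines and the coordinates and colors are arbitrary, and the calls assume an active BeginScene/EndScene pair.

    using Microsoft.DirectX.Direct3D;

    // Sketch: render a point list and a line list from user-memory vertex arrays.
    static void DrawPointsAndLines(Device device)
    {
        int white = System.Drawing.Color.White.ToArgb();
        int red   = System.Drawing.Color.Red.ToArgb();

        // Pre-transformed (screen-space) vertices: x, y, z, rhw, color.
        CustomVertex.TransformedColored[] points =
        {
            new CustomVertex.TransformedColored( 50f, 50f, 0f, 1f, white),
            new CustomVertex.TransformedColored(120f, 80f, 0f, 1f, white),
            new CustomVertex.TransformedColored(200f, 40f, 0f, 1f, white),
        };

        // Two isolated segments: a line list needs an even vertex count (4 vertices, 2 lines).
        CustomVertex.TransformedColored[] lines =
        {
            new CustomVertex.TransformedColored( 50f, 150f, 0f, 1f, red),
            new CustomVertex.TransformedColored(150f, 150f, 0f, 1f, red),
            new CustomVertex.TransformedColored( 50f, 200f, 0f, 1f, red),
            new CustomVertex.TransformedColored(150f, 200f, 0f, 1f, red),
        };

        device.VertexFormat = CustomVertex.TransformedColored.Format;
        device.DrawUserPrimitives(PrimitiveType.PointList, points.Length, points);
        device.DrawUserPrimitives(PrimitiveType.LineList, lines.Length / 2, lines);
    }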

Triangle Fans

A triangle fan is similar to a triangle strip, except that all of the triangles share one vertex, as shown in the following illustration. The system uses vertices v2, v3, and v1 to draw the first triangle; v3, v4, and v1 to draw the second triangle; v4, v5, and v1 to draw the third triangle; and so on. When flat shading is enabled, the system shades each triangle with the color of its first vertex. The following illustration depicts a rendered triangle fan.

Triangle Lists

A triangle list is a list of isolated triangles; they might or might not be near each other. A triangle list must have at least three vertices, and the total number of vertices must be divisible by three.

Use triangle lists to create an object that is composed of disjoint pieces. For instance, one way to create a force-field wall in a 3-D game is to specify a large list of small, unconnected triangles, and then apply a material and texture that appears to emit light to the triangle list. Each triangle in the wall appears to glow, and the scene behind the wall becomes partially visible through the gaps between the triangles, as a player might expect when looking at a force field.

Triangle lists are also useful for creating primitives that have sharp edges and are shaded with Gouraud shading. See Face and Vertex Normal Vectors. The following illustration depicts a rendered triangle list.
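As a hedged illustration (not part of the original topic), the sketch below builds a filled disc as a triangle fan: the shared center vertex comes first, followed by the vertices around the circle, so N outer segments produce N triangles from only N + 2 vertices. The helper name DrawDisc is hypothetical, and culling is disabled so the winding order does not matter for the sketch.

    using System;
    using Microsoft.DirectX.Direct3D;

    // Sketch: draw a filled disc as a triangle fan around a shared center vertex.
    static void DrawDisc(Device device, float cx, float cy, float radius, int segments)
    {
        // Center vertex, one vertex per segment, plus one closing vertex.
        CustomVertex.TransformedColored[] fan =
            new CustomVertex.TransformedColored[segments + 2];

        int color = System.Drawing.Color.CornflowerBlue.ToArgb();
        fan[0] = new CustomVertex.TransformedColored(cx, cy, 0f, 1f, color);

        for (int i = 0; i <= segments; i++)
        {
            double angle = 2.0 * Math.PI * i / segments;
            fan[i + 1] = new CustomVertex.TransformedColored(
                cx + radius * (float)Math.Cos(angle),
                cy + radius * (float)Math.Sin(angle),
                0f, 1f, color);
        }

        device.RenderState.CullMode = Cull.None;   // ignore winding for this sketch
        device.VertexFormat = CustomVertex.TransformedColored.Format;
        // A fan with segments + 2 vertices yields segments triangles.
        device.DrawUserPrimitives(PrimitiveType.TriangleFan, segments, fan);
    }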

Triangle Strips

A triangle strip is a series of connected triangles. Because the triangles are connected, the application does not need to repeatedly specify all three vertices for each triangle. For example, only seven vertices are needed to define the following triangle strip. The system uses vertices v1, v2, and v3 to draw the first triangle; v2, v4, and v3 to draw the second; v3, v4, and v5 to draw the third; v4, v6, and v5 to draw the fourth; and so on. Notice that the vertices of the second and fourth triangles are out of order; this is required to make sure that all of the triangles are drawn in a clockwise orientation.

Most objects in 3-D scenes are composed of triangle strips, because triangle strips can specify complex objects in a way that makes efficient use of memory and processing time. The following illustration depicts a rendered triangle strip.
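To make the memory saving concrete, a strip of N triangles needs only N + 2 vertices instead of the 3N required by an equivalent triangle list. The hedged sketch below (not from the original topic) draws such a strip with DrawUserPrimitives; the helper name DrawStrip and the zig-zag positions are arbitrary, and an active BeginScene/EndScene pair is assumed.

    using Microsoft.DirectX.Direct3D;

    // Sketch: a zig-zag triangle strip. With vertexCount vertices the strip
    // contains vertexCount - 2 triangles (here 7 vertices -> 5 triangles).
    static void DrawStrip(Device device)
    {
        int color = System.Drawing.Color.Orange.ToArgb();
        CustomVertex.TransformedColored[] strip = new CustomVertex.TransformedColored[7];

        for (int i = 0; i < strip.Length; i++)
        {
            float x = 60f + 40f * i;                 // march to the right
            float y = (i % 2 == 0) ? 220f : 160f;    // alternate between two rows
            strip[i] = new CustomVertex.TransformedColored(x, y, 0f, 1f, color);
        }

        device.RenderState.CullMode = Cull.None;     // ignore winding for this sketch
        device.VertexFormat = CustomVertex.TransformedColored.Format;
        device.DrawUserPrimitives(PrimitiveType.TriangleStrip, strip.Length - 2, strip);
    }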