Com S 336 Final Project Ideas

Deadlines

These projects are to be done in groups of two. I strongly encourage everyone to start as soon as possible. Presentations begin four weeks from now (Tuesday, December 8).

Thursday, November 5: With whom are you working, and what are some ideas you are considering? You can tell me by email.

Tuesday, November 10: a concrete description of what you plan to do. Please turn in a paragraph of three or so sentences describing what you propose to do. Do enough reading that you can be somewhat specific about what you want to try to implement, not just "I want to investigate how skeletal animation works."

Your project can be an application that you would like to try to create, a focused investigation into a particular technique, or maybe something else that I haven't thought of. All proposals need to be approved by the instructor. The idea is that you do something original, and do it in the context of what we have done in this course.

For reference, here is a summary of the main topics we plan to cover in class in the next few weeks:

- Texture mapping
- Cube maps
- Basics of reflection/refraction
- Basics of transparency
- Some techniques for procedural textures (fragment shader)
- Perlin noise
- Bump mapping
- Frame buffer objects and render-to-texture
- Shadow mapping

An excellent source of ideas to explore is the series of books GPU Gems (available online; scroll to the bottom for the table of contents):
http://http.developer.nvidia.com/gpugems/gpugems_copyrightpg.html
http://http.developer.nvidia.com/gpugems2/gpugems2_frontmatter.html
http://http.developer.nvidia.com/gpugems3/gpugems3_pref01.html
Another great source of ideas and resources is the web page associated with the book Real-Time Rendering, http://www.realtimerendering.com/. I have a copy of the book itself in my office if you want to borrow it.

To get you started, here are a few examples of things that I think would probably lead to reasonable projects. Please do not limit yourself to this list, however! There are many other things you could investigate that you might be more interested in (light baking, particle systems, ray tracing, radiosity, HDR (high-dynamic-range) lighting, motion blur techniques, ...).

Perlin Fire
http://developer.download.nvidia.com/sdk/10/direct3d/source/perlinfire/doc/perlinfire.pdf

Volumetric shadows
http://nuclear.mutantstargoat.com/articles/volume_shadows_tutorial_nuclear.pdf
http://http.developer.nvidia.com/gpugems/gpugems_ch09.html

Parallax occlusion mapping
http://developer.amd.com/media/gpu_assets/tatarchuk-pom-si3d06.pdf
This one could be helpful too; it describes a simpler but related technique:
http://cowboyprogramming.com/2007/01/05/parallax-mapped-bullet-holes/
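To give a feel for the simpler technique behind parallax occlusion mapping, here is a minimal sketch of the per-fragment texture-coordinate shift it is built on, written as plain JavaScript rather than GLSL so you can experiment with the numbers. The function name, the object shapes, and the "offset limiting" choice are all illustrative assumptions, not taken from the papers above.

```javascript
// Simple parallax mapping: shift the texture coordinate along the
// tangent-space view direction, in proportion to the sampled height.
// viewTS: tangent-space view vector {x, y, z} from surface toward the eye.
// uv: {u, v} texture coordinate; height: height-map sample in [0, 1];
// scale: artist-chosen strength of the effect (e.g. 0.04).
function parallaxOffset(uv, viewTS, height, scale) {
  // Normalize so the offset depends only on the view direction.
  const len = Math.hypot(viewTS.x, viewTS.y, viewTS.z);
  const vx = viewTS.x / len, vy = viewTS.y / len;
  // "Offset limiting" variant: no division by the z component, which
  // avoids very large offsets at grazing angles.
  return {
    u: uv.u + height * scale * vx,
    v: uv.v + height * scale * vy,
  };
}
```

Looking straight down the normal (view = (0, 0, 1)) leaves the coordinate unchanged; tilting the view shifts the lookup toward the eye, which is what creates the illusion of depth.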
Tessellation
You'll need OpenGL 4 for this...
http://prideout.net/blog/?p=48
http://codeflow.org/entries/2010/nov/07/opengl-4-tessellation/

Water
There's a lot that can be done here, so define the scope of your project carefully.
http://habib.wikidot.com/
http://vterrain.org/water/

Generating fur, hair, cloth, etc.
http://developer.download.nvidia.com/sdk/10/direct3d/source/fur/doc/furshellsandfins.pdf
Skeletal animation
http://content.gpwiki.org/index.php/opengl:tutorials:basic_bones_system

Deferred rendering
In your typical "forward" rendering pipeline, we draw one object at a time, computing all the shading calculations for every visible point on that object before moving on. If we then draw a second object in front of the first, we won't even see the pixels we spent time shading! Deferred rendering is a popular technique that improves performance by postponing lighting to the very end, after all objects have been drawn. Lighting is then done in screen space, and each light can be applied to only the portion of the screen it affects. This makes it cheap to have many lights in the scene.
http://http.developer.nvidia.com/gpugems2/gpugems2_chapter09.html
This is a presentation on how deferred rendering was used in Killzone 2:
http://www.guerrilla-games.com/publications/dr_kz2_rsx_dev07.pdf

Geometry instancing
(Not supported in WebGL, so you'd have to do the code in C++.) Geometry instancing is a method of rendering many copies of the same mesh quickly using only one draw call. This is done by calling glDrawArraysInstanced() instead of glDrawArrays(); in the vertex shader you are provided with a built-in variable, gl_InstanceID, that tells you which instance you are handling.
http://sol.gfxile.net/instancing.html
And here is an example of using instancing to make grass:
http://software.intel.com/en-us/articles/rendering-grass-with-instancing-in-directx-10/
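The screen-space lighting pass from the deferred-rendering idea above can be sketched in plain JavaScript. On the GPU the G-buffer would be render-to-texture FBO attachments and the pass a fragment shader; here arrays stand in for both, and every name (applyLight, the object layouts, the 4x4 resolution) is made up for the example.

```javascript
// Deferred shading sketch: shade from a G-buffer in screen space,
// touching only the pixels each light can actually reach.
const W = 4, H = 4;

// G-buffer: per pixel, a world-space position, a unit normal, and an albedo.
const gbuf = [];
for (let y = 0; y < H; y++)
  for (let x = 0; x < W; x++)
    gbuf.push({ pos: [x, y, 0], normal: [0, 0, 1], albedo: 1.0 });

const framebuffer = new Array(W * H).fill(0);

// One point light with a finite radius. Only pixels inside its bounding
// box are processed -- this restriction is the source of the speedup.
function applyLight(light) {
  const x0 = Math.max(0, Math.floor(light.x - light.radius));
  const x1 = Math.min(W - 1, Math.ceil(light.x + light.radius));
  const y0 = Math.max(0, Math.floor(light.y - light.radius));
  const y1 = Math.min(H - 1, Math.ceil(light.y + light.radius));
  for (let y = y0; y <= y1; y++) {
    for (let x = x0; x <= x1; x++) {
      const g = gbuf[y * W + x];
      const dx = light.x - g.pos[0], dy = light.y - g.pos[1], dz = light.z - g.pos[2];
      const dist = Math.hypot(dx, dy, dz);
      if (dist > light.radius) continue; // outside the light's range
      // Simple Lambert term against the stored normal.
      const ndotl = Math.max(0,
        (dx * g.normal[0] + dy * g.normal[1] + dz * g.normal[2]) / dist);
      framebuffer[y * W + x] += g.albedo * light.intensity * ndotl;
    }
  }
}

applyLight({ x: 1, y: 1, z: 2, radius: 2, intensity: 1.0 });
```

Note that geometry never appears in applyLight: however many objects were drawn into the G-buffer, each light's cost depends only on the screen area it covers, which is why scenes with hundreds of small lights become practical.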
Volumetric rendering (marching cubes, metaballs, isosurfaces)
All we can really render in the OpenGL pipeline is a polygonal mesh. What if all you have is a set of measurements or points in 3D space, such as from medical imaging, or an implicit function? Some mechanism is needed to derive a mesh from that data. One important technique is the marching cubes algorithm (though other techniques are possible too).
http://users.polytech.unice.fr/~lingrand/marchingcubes/accueil.html
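To make the marching cubes idea concrete, here is a sketch of its first step: sample an implicit function at a cell's eight corners and build the 8-bit case index that the full algorithm would use to look up triangles in its 256-entry table (omitted here). The field is a single metaball-style function, and the function names and corner ordering are invented for this example; real implementations use a fixed conventional ordering that must match their tables.

```javascript
// Metaball-style implicit function: large near the origin, decaying
// with squared distance. The surface is where field == isolevel.
function field(x, y, z) {
  const d2 = x * x + y * y + z * z;
  return d2 === 0 ? Infinity : 1 / d2;
}

// Corner offsets of a unit cube (ordering chosen for this example).
const corners = [
  [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
  [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],
];

// Classify one cell whose minimum corner is (cx, cy, cz): bit i is set
// when corner i is "inside" the surface. An index of 0 or 255 means the
// surface does not cross this cell; anything else selects one of the
// precomputed triangulation cases.
function cubeIndex(cx, cy, cz, isolevel) {
  let index = 0;
  corners.forEach(([dx, dy, dz], i) => {
    if (field(cx + dx, cy + dy, cz + dz) > isolevel) index |= 1 << i;
  });
  return index;
}
```

The full algorithm "marches" this classification over every cell of a 3D grid, then places triangle vertices on the cell edges where the field crosses the isolevel, interpolated between the corner values.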