PCP: Let’s just jump right into the issue at hand. What is your take on the current ray tracing arguments floating around, such as those featured in a couple of different articles here at PC Perspective? Have you been doing any work on ray tracing yourself?
John Carmack: I have my own personal hobby horse in this race and have some fairly firm opinions on the way things are going right now. I’m not really bullish on ray tracing in the classical sense, analytically intersecting rays with conventionally defined geometry, whether triangle meshes or higher-order primitives, taking over for primary rendering tasks, which is essentially what Intel is pushing. There are large advantages to rasterization from a performance standpoint, and many of the arguments they make, such as using efficient culling technologies to avoid referencing a lot of geometry, are really bogus, because you can do similar things with occlusion queries and conditional renders in rasterization. Head to head, rasterization is just a vastly more efficient use of whatever transistors you have available.
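The occlusion-query-plus-conditional-render idea Carmack mentions can be sketched in software: render coarse occluder geometry into a depth buffer, then "query" an object's bounding rectangle to count how many samples would survive the depth test, skipping the object entirely when the count is zero. This is a minimal illustrative sketch, not GPU or engine code; all names and the rectangle-only rasterizer are assumptions made for brevity.

```python
# Illustrative sketch: emulating a GPU occlusion query in software.
# A coarse depth buffer is filled from occluder geometry; a bounding
# rectangle "query" counts how many of its samples would pass the
# depth test. Zero passing samples means a conditional render can
# skip the object without ever referencing its geometry.
import numpy as np

W, H = 64, 64
depth = np.full((H, W), np.inf)  # software depth buffer (smaller z = nearer)

def draw_rect(x0, y0, x1, y1, z):
    """Rasterize an axis-aligned screen rectangle at depth z."""
    region = depth[y0:y1, x0:x1]
    np.minimum(region, z, out=region)  # keep the nearest depth per pixel

def occlusion_query(x0, y0, x1, y1, z):
    """Count samples where a rectangle at depth z would be visible."""
    return int(np.count_nonzero(depth[y0:y1, x0:x1] > z))

# A large near occluder covers the left half of the screen.
draw_rect(0, 0, 32, 64, z=1.0)

# Object A sits entirely behind the occluder: zero samples pass, so the
# conditional render skips it.
print(occlusion_query(0, 0, 32, 64, z=5.0))       # -> 0
# Object B is in the uncovered right half: samples pass, so it is drawn.
print(occlusion_query(32, 0, 64, 64, z=5.0) > 0)  # -> True
```

A real renderer would issue the query against the bounding volume on the GPU (e.g. `GL_SAMPLES_PASSED` with a conditional render), but the culling logic is the same: the cheap proxy test gates the expensive draw.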
But I do think there is a very strong possibility, as we move towards next-generation technologies, of a ray tracing architecture that uses a specific data structure, rather than just taking triangles like everybody uses and tracing rays against them, which is really, really expensive. There is a specific format I have done some research on that I am starting to ramp back up on for some proof-of-concept work for next-generation technologies. It involves ray tracing into a sparse voxel octree, which is essentially a geometric evolution of the mega-texture technologies that we’re doing today for uniquely texturing entire worlds. It’s clear that what we want to do in the following generation is have unique geometry down to the equivalent of the texel across everything. There are different approaches you could try to get that done that would involve tessellation and different levels of triangle meshes, and you could conceivably make something like that work, but rasterization architecture really does start falling apart when your typical triangle size is less than one pixel. At that point you have lost much of the benefit of rasterization. Not necessarily all of it, because linearly walking through a list of primitives can still be much faster than randomly accessing them for tracing, but the wins are diminishing there.
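The structure Carmack is describing can be sketched as a toy: an octree that stores only occupied children, with a ray traversal that descends front-to-back and returns the first filled leaf it hits. This is a simplified illustration of the general technique, not id Software's format; the recursive slab-test traversal, class names, and payload are all assumptions (production SVO tracers use far more compact node encodings and iterative traversal).

```python
# Illustrative sketch: tracing a ray into a sparse voxel octree (SVO).
# Only occupied octants are stored, so empty space costs no memory and
# is skipped during traversal.
EPS = 1e-9

def ray_aabb(orig, dirv, lo, hi):
    """Slab test: entry distance t if the ray hits the box, else None."""
    t0, t1 = 0.0, float("inf")
    for o, d, l, h in zip(orig, dirv, lo, hi):
        if abs(d) < EPS:
            if o < l or o > h:
                return None
        else:
            ta, tb = (l - o) / d, (h - o) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:
                return None
    return t0

class SVONode:
    def __init__(self, voxel=None):
        self.children = {}   # sparse: octant index (0-7) -> SVONode
        self.voxel = voxel   # leaf payload (e.g. color / material)

def child_bounds(lo, hi, octant):
    """AABB of one octant; bit i of `octant` selects axis i's upper half."""
    mid = [(l + h) / 2 for l, h in zip(lo, hi)]
    clo = [mid[i] if octant >> i & 1 else lo[i] for i in range(3)]
    chi = [hi[i] if octant >> i & 1 else mid[i] for i in range(3)]
    return clo, chi

def trace(node, orig, dirv, lo, hi):
    """Return (t, voxel) of the first filled leaf the ray hits, or None."""
    if ray_aabb(orig, dirv, lo, hi) is None:
        return None
    if node.voxel is not None:
        return ray_aabb(orig, dirv, lo, hi), node.voxel
    # Visit only the children that exist, front-to-back by entry distance.
    hits = []
    for octant, child in node.children.items():
        clo, chi = child_bounds(lo, hi, octant)
        tc = ray_aabb(orig, dirv, clo, chi)
        if tc is not None:
            hits.append((tc, child, clo, chi))
    for tc, child, clo, chi in sorted(hits, key=lambda h: h[0]):
        hit = trace(child, orig, dirv, clo, chi)
        if hit is not None:
            return hit
    return None

# Tiny tree: one filled leaf in octant 0 (the [-1,0]^3 corner of the root).
root = SVONode()
root.children[0] = SVONode(voxel="stone")
print(trace(root, (-0.5, -0.5, -5.0), (0.0, 0.0, 1.0),
            (-1, -1, -1), (1, 1, 1)))  # -> (4.0, 'stone')
```

The sparseness is the point of the analogy to mega-texture: just as unique texel data is paged in only where surfaces exist, octree nodes exist only where geometry does, and level-of-detail falls out naturally by stopping the descent early.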
( More on sparse voxel octrees. Also, how about hybrid raytrace/raster approaches? Any thoughts on multi-SLI PC setups? )