id Tech 6
The id Tech 6 game engine, also referred to as id Tech 666 by id Software developers, is the technological successor to the id Tech 5 engine, which powered id Software's Rage and several games from ZeniMax Media's internal studios. It powers the 2016 game initially titled Doom 4 and finally known simply as Doom. It represents a major advancement over the previous iteration, with support for physically based rendering (PBR), extensive caching and precomputation of effects, a streamlined shader model for geometry, use of asynchronous compute shaders, and the restoration of dynamic, unified-pipeline lighting and shadowing effects on par with those that previously appeared in id Tech 4. Work on support for the then-new Vulkan graphics API began three months before Doom's launch, and it was made available to end users in an update shortly after release.
- 1 Transition of technical leadership
- 2 Technical details
- 3 Rendering
- 4 Other games
- 5 Sources
Transition of technical leadership
Though John Carmack had planned to use the name "id Tech 6" for a conceptual voxel engine, this concept is not known to have ever advanced into an implementation phase. Carmack left id Software in late 2013, leaving Robert Duffy as Chief Technical Officer, with Tiago Sousa hired shortly afterward to take over Carmack's role in graphics research and development. Sousa took the id Tech 5 engine in a different direction, using approaches to PBR similar to those he had previously employed in CryEngine. Alpha versions of the engine that had powered the abandoned Doom 4 concept, demonstrated at QuakeCon 2016, were much less advanced; most appeared to be only a small iteration on the id Tech 5 technology compared to the final product.
The game engine was designed with scalability in mind, with the target of rendering at least 60 frames per second at 1080p resolution on all of its supported platforms. Emphasis was also placed on speeding up the art asset workflow for the engine, enabling texture and level designers to have fine control over detailing and scene complexity parameters. The id Tech 6 rendering process consists of a number of distinct phases.
Opaque forward passes
As in id Tech 5, texturing is accomplished through virtual textures: 16000×8000 atlases of 128×128 tiles that are cached based on visibility. Textures are moved in and out of the atlas as necessary, though this is still done in a mostly reactive, rather than predictive, manner, resulting in occasionally visible texture pop-in. id Software's programmers cite this as room for future engineering improvement. The use of megatextures means that each area looks unique, at the cost of increased file size.
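The visibility-driven caching described above behaves much like a least-recently-used cache of atlas slots. The following is a minimal sketch of that idea; the class name, slot bookkeeping, and capacity handling are illustrative assumptions, not id Tech 6's actual API.

```python
from collections import OrderedDict

class VirtualTextureCache:
    """LRU-style cache of 128x128 tile slots in a physical atlas (a sketch,
    not the engine's real implementation)."""

    def __init__(self, capacity):
        self.free = list(range(capacity))  # unused atlas slots
        self.tiles = OrderedDict()         # tile id -> atlas slot, in LRU order

    def request(self, tile):
        """Return (slot, needs_upload) when a tile becomes visible."""
        if tile in self.tiles:
            self.tiles.move_to_end(tile)   # mark as most recently used
            return self.tiles[tile], False  # cache hit: no streaming needed
        if not self.free:
            _, slot = self.tiles.popitem(last=False)  # evict least recently used
            self.free.append(slot)
        slot = self.free.pop()
        self.tiles[tile] = slot
        return slot, True  # miss: tile must be streamed in (pop-in risk)
```

Because the cache reacts only to requests it has already seen, a newly visible tile always misses on its first request, which mirrors the reactive pop-in the engine exhibits.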
Dynamic shadows are aggressively cached by the engine, using results computed in previous frames and retaining them in a megatexture atlas. If the lighting of an area has not changed when it needs to be rendered again (for example, no dynamic elements such as actors have moved into or out of it), then the previous shadow map is retained. Static elements can also be independently retained while shadows cast by moving dynamic geometry are recomputed over them. The shadow cache is an 8000×8000 32-bit texture on PC, and an 8000×4000 16-bit texture on consoles. The resolution and update time slice of individual shadow maps are based on level-of-detail (LOD) calculations, such that areas closer to the player have high-detail maps computed frequently, and more distant areas have smaller maps which update less aggressively.
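The distance-based trade-off between shadow map resolution and update frequency can be sketched as follows. The base resolution, distance thresholds, and halving scheme here are illustrative assumptions; the engine's actual LOD heuristics are not public in this form.

```python
def shadow_lod(distance, base_res=1024, min_res=64):
    """Pick a shadow map resolution and re-render interval (in frames)
    from the light's distance to the player. Illustrative heuristic:
    each LOD step halves resolution and doubles the update interval."""
    res = base_res
    interval = 1  # close lights: full resolution, updated every frame
    d = distance
    while d > 10.0 and res > min_res:
        res //= 2       # smaller map for distant lights
        interval *= 2   # and less frequent updates
        d /= 2.0
    return res, interval
```

A nearby light keeps a frequently refreshed high-resolution map, while a distant one is both smaller and refreshed on a longer time slice, matching the behavior described above.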
Opaque meshes, including static and dynamic scene geometry, as well as the player's weapon, are rendered to the depth buffer to obtain z information. At this time, the engine also computes a velocity map using the difference in vertex positions of dynamic objects from the previous frame, the result being a colored texture representing the motion of objects in the scene relative to the player in horizontal and vertical directions. This is later used to accomplish temporal screen-space antialiasing (TSSAA) and motion blur effects.
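The per-pixel motion vectors in the velocity map come from comparing a vertex's screen position this frame with its position last frame. A minimal sketch of that difference, assuming simplified clip-space positions of the form (x, y, w):

```python
def velocity(prev_clip, curr_clip):
    """Screen-space motion vector from previous- and current-frame
    clip-space positions (x, y, w). A sketch of the velocity-buffer
    idea, not the engine's exact math."""
    px, py, pw = prev_clip
    cx, cy, cw = curr_clip
    # perspective divide to normalized device coordinates
    prev_ndc = (px / pw, py / pw)
    curr_ndc = (cx / cw, cy / cw)
    # the stored "color" is this 2D displacement per pixel
    return (curr_ndc[0] - prev_ndc[0], curr_ndc[1] - prev_ndc[1])
```

Static geometry with an unmoving camera yields a zero vector, while objects (or the camera) in motion produce the horizontal and vertical displacements that TSSAA and motion blur consume later.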
Before scene geometry is submitted, meshes that are not visible are culled, using both the Umbra middleware and additional GPU occlusion queries. Since query results are not immediately available, conservative decisions are made based on data computed in previous frames. This avoids having objects visibly pop in or out unexpectedly.
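The conservative fallback can be stated very compactly: when this frame's query has not yet completed, reuse last frame's answer, and when nothing is known, draw anyway. This is an illustrative sketch of the decision rule, not engine code.

```python
def should_draw(obj_id, visible_last_frame, query_ready, query_visible):
    """Conservative occlusion decision: trust a completed GPU query;
    otherwise fall back to the previous frame's visibility, and draw
    objects we know nothing about. (Illustrative sketch.)"""
    if query_ready:
        return query_visible
    return visible_last_frame.get(obj_id, True)  # unknown: err on drawing
```

Erring on the side of drawing costs a little GPU time but prevents the visible popping that dropping a actually-visible object would cause.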
Opaque geometry and decals are rendered using clustered forward rendering, a technique that subdivides the view frustum into a logarithmically scaled, layered three-dimensional grid of cells. Light sources and environment probes (for cubemaps) are voxelized and tested for intersection with these cells, creating a multi-level lookup that the primary forward-pass pixel shader can use to quickly fetch the lighting and decal parameters for each pixel on screen.
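The core of the clustered lookup is the logarithmic depth slicing and the binning of lights into the cells they overlap. The sketch below uses a common clustered-shading formulation along the depth axis only (id Tech 6's exact scheme, and its x/y tiling, may differ):

```python
import math

def depth_slice(z, near, far, num_slices):
    """Logarithmic depth slice index for a view-space depth z.
    Standard clustered-shading formulation; illustrative constants."""
    z = max(near, min(z, far))
    s = int(num_slices * math.log(z / near) / math.log(far / near))
    return min(s, num_slices - 1)

def bin_lights(lights, near, far, num_slices):
    """Assign each light, given as a (z_min, z_max) depth extent, to
    every depth slice it spans. The shader later reads only its own
    slice's list instead of iterating all lights."""
    clusters = [[] for _ in range(num_slices)]
    for i, (z0, z1) in enumerate(lights):
        for s in range(depth_slice(z0, near, far, num_slices),
                       depth_slice(z1, near, far, num_slices) + 1):
            clusters[s].append(i)
    return clusters
```

The logarithmic spacing keeps near-camera cells small (where lighting detail matters) and lets far cells grow, which is why the grid is described as logarithmically scaled.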
Since depth information was computed earlier, opaque geometry is drawn front to back with zero overdraw, with the depth test function set to "equal" to avoid unnecessary GPU computations during this phase.
To add scene detail, decals are applied, also using megatexture stamping with an 8000×8000 atlas. A limit of 4000 decals in view is enforced, with LOD parameters controlled by the artists.
At the end of this pass, the scene has been generated as an HDR floating-point buffer, with additional buffers holding the normal and specular maps; smoothness information is stored in the alpha channel of the specular map. These buffers are all rendered simultaneously through use of multiple render targets (MRT).
An asynchronous compute shader is dispatched at this point to run the particle simulation. The information on each particle is buffered for later use.
In the deferred pass, various screen-space effects are computed. The first is screen-space ambient occlusion (SSAO), a technique that darkens creases, corners, and other areas where ambient light is partially blocked by nearby geometry. Screen-space reflections (SSR) are then computed using a combination of the depth buffer, normals, specular map, and the previously rendered frame; a form of ray tracing against the depth buffer is employed to generate the SSR map from these inputs. A static reflection map is also generated using the precomputed cubemaps inserted into the clustered frustum lookups earlier: the depth, normal, and specular buffers are combined with data from the cubemaps, with the influence of each map determined by its distance from the pixel being rendered. This is a form of image-based lighting.
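Weighting each cubemap's influence by its distance from the shaded point can be sketched with a simple normalized inverse-distance scheme. This is an illustrative weighting, not the engine's documented falloff function:

```python
def probe_weights(pixel_pos, probe_positions):
    """Normalized inverse-distance blend weights for static reflection
    probes: nearer cubemaps influence the pixel more. (Illustrative
    image-based-lighting sketch; the engine's falloff may differ.)"""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    inv = [1.0 / (dist(pixel_pos, p) + 1e-6) for p in probe_positions]
    total = sum(inv)
    return [w / total for w in inv]  # weights sum to 1
```

Normalizing the weights ensures the blended reflection never over- or under-brightens as probes move in and out of range.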
After these effects have been computed individually, a compositing compute shader is used to blend the forward-pass lighting, SSAO, SSR, and static reflection map data. Fog effects are also computed in this shader. The result is essentially the completely rendered scene, minus any transparent elements and post-process effects.
In the transparency pass, particle lighting is computed, and various visual effects and glass surfaces are rendered.
Particle lighting is decoupled from other elements due to the large number of possible particle effects per scene. Lighting for particles is computed independently of screen resolution, with adaptation to LOD. Computed particle lighting textures are cached in a 4000×4000 atlas, and each lighting texture is applied to its particle geometry using bicubic scaling.
Glass effects are accomplished using a combination of decals and computation of four mip levels of blur applied to the scene, computed using a Gaussian approximation. For the smaller mip levels, horizontal and vertical blur steps are performed separately to improve quality. The two closest blur level textures, based on the local smoothness of the glass, are blended together with linear interpolation.
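Selecting and blending the two blur mip levels closest to the glass's smoothness reduces to a linear interpolation along the mip chain. A minimal sketch, assuming mips ordered sharpest to blurriest and smoothness in [0, 1] with 1.0 meaning perfectly smooth (illustrative conventions):

```python
def glass_blur(mips, smoothness):
    """Blend the two blur mip levels nearest to the glass smoothness.
    `mips` holds per-level pixel lists, sharpest first. Rough glass
    (low smoothness) selects blurrier levels. (Illustrative sketch.)"""
    # map smoothness to a fractional mip level: smooth glass -> level 0
    level = (1.0 - smoothness) * (len(mips) - 1)
    lo = int(level)
    hi = min(lo + 1, len(mips) - 1)
    t = level - lo
    # linear interpolation between the two nearest blur levels
    return [(1.0 - t) * a + t * b for a, b in zip(mips[lo], mips[hi])]
```

The fractional blend avoids visible steps between the four discrete blur levels as smoothness varies across a pane of glass.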
A distortion map is created for areas which are "hot," allowing for apparent refraction of light. It is not applied until the post-process stage.
The user interface is rendered to its own buffer at this point with pre-multiplied alpha.
Post-process effects are computed asynchronously in compute shaders, and can therefore overlap with elements of the opaque pass, which makes little use of the GPU's compute units.
If a depth-of-field effect is active, it is computed first: a near-field and a far-field image are created, using disk blur at half resolution to accomplish a proper bokeh effect. Temporal anti-aliasing and motion blur are computed from the velocity map generated in the depth pre-pass in combination with previous frames, with a small per-frame jitter applied to position information so that sub-pixel detail can be resolved over time. Average scene luminance is computed as an input to the tone mapper. Bloom is computed via a bright-pass filter and a set of Gaussian blur mip textures similar to those used for glass effects.
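The standard way to compute an average scene luminance for auto-exposure is the log-average (geometric mean) of per-pixel luminance. The sketch below uses Rec. 709 luminance weights on linear RGB; treating this as the engine's exact metering is an assumption:

```python
import math

def average_luminance(pixels):
    """Log-average (geometric mean) luminance of linear RGB pixels,
    the usual auto-exposure input for a tone mapper. (Sketch; the
    engine's exact metering is an assumption here.)"""
    eps = 1e-4  # avoid log(0) on pure black pixels
    total = 0.0
    for r, g, b in pixels:
        lum = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights
        total += math.log(eps + lum)
    return math.exp(total / len(pixels))
```

The geometric mean is preferred over an arithmetic mean because a few very bright pixels (muzzle flashes, skies) would otherwise dominate the exposure.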
A final post-processing pass combines all of these inputs using a single shader, with additional effects such as vignetting, tonemapping, and color grading added. Finally, the user interface elements and a subtle film grain effect are composited with the scene, and rendering is complete.
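Two of the effects in that final combine can be sketched per pixel. The Reinhard operator below is one common tonemapping curve, not necessarily the one id Tech 6 ships, and the vignette falloff and strength are illustrative:

```python
def tonemap_reinhard(c, exposure=1.0):
    """Map an HDR channel value into [0, 1) with the Reinhard curve
    x / (1 + x). (One common choice; the engine's curve may differ.)"""
    x = c * exposure
    return x / (1.0 + x)

def vignette(x, y, width, height, strength=0.5):
    """Darkening factor that falls off toward the screen corners.
    Illustrative quadratic falloff; 1.0 at the center."""
    nx = 2.0 * x / width - 1.0   # -1..1 across the screen
    ny = 2.0 * y / height - 1.0
    d2 = nx * nx + ny * ny       # squared distance from center
    return 1.0 - strength * d2 / 2.0
```

In a real shader both factors are applied to each pixel of the composited HDR buffer before color grading and the film grain are added.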
- Courrèges, Adrian (9 September 2016). "DOOM (2016) - Graphics Study." Retrieved 2 October 2016.
- Sousa, Tiago, and Jean Geffroy (25 July 2016). "The Devil is in the Details: idTech 666." SIGGRAPH 2016: Advances in Real-Time Rendering. Retrieved 2 October 2016.
| Based on | Name | Base for |
| id Tech 5 | id Tech 6 | id Tech 7 |