Thoughts about a zero-configuration PBR renderer.

Background

Production renderers are finicky. They offer a lot of settings and parameters to let users switch from one algorithm to another and tune performance, and they generally require an unwarranted level of familiarity with the renderer's design to be fully exploited. This has led many users to develop their own specific workflows, their secret sauce, to deal with this unnecessary level of complexity. In return, it forces software vendors to support those secret recipes by leaving all options open at all times, and prevents them from deprecating old sections of their code.

Why so complex?

For a long time, image synthesis was based on very rough approximations so that it could be handled efficiently by computers. Nowadays, the typical workstation's power allows for much more sophisticated approaches based on real-world light transport. Over the last 3 years, a number of researchers have made the breakthroughs necessary to design what I would call a "universal renderer": a renderer that would truly "Render Everything You Ever Saw", from hard surfaces to volumes, within a single integration framework.

What makes a great renderer?

  1. Image quality (client)
  2. Ease of use (artist)
  3. Reliability (tech)

General design

  • Minimal user configuration
  • Dynamic resource management through instrumentation
  • Black boxed integration
  • No lights, just emissive surfaces (with the exception of the environment light).
  • Polygons, Sub-Ds, curves, particles, volumes
  • Advanced Camera Model
  • Shading Language
  • AOVs

Minimal user configuration

The only user-facing settings: frame resolution, number of threads, and the amount of memory to use.

Internal sub-systems will self-configure based on a preliminary scene analysis, and then potentially re-balance during the rendering process based on performance statistics. A render should always finish: it should never run out of memory, nor under-utilise available resources.
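As a minimal sketch of what such self-configuration could look like, here is a hypothetical memory planner that splits a fixed user budget between sub-systems according to a preliminary scene analysis. All names and per-subsystem heuristics are assumptions for illustration, not from an actual renderer:

```cpp
#include <cassert>
#include <cstdint>

// Results of a (hypothetical) preliminary scene analysis pass.
struct SceneStats {
    uint64_t triangle_count;
    uint64_t texture_bytes;
};

// Memory budget per sub-system, in bytes.
struct MemoryPlan {
    uint64_t bvh_bytes;      // acceleration structure
    uint64_t texture_bytes;  // texture cache
    uint64_t sample_bytes;   // framebuffer + sample storage
};

// Split a fixed user-supplied budget between sub-systems, scaling each
// share by what the scene actually contains. The constants (64 bytes per
// triangle, a 10% sampling reserve) are illustrative placeholders.
MemoryPlan plan_memory(const SceneStats& s, uint64_t budget_bytes) {
    uint64_t bvh_need = s.triangle_count * 64;
    uint64_t tex_need = s.texture_bytes;
    uint64_t demand   = bvh_need + tex_need;
    MemoryPlan p{};
    // Reserve a fixed slice for sampling, then split the rest pro rata.
    p.sample_bytes = budget_bytes / 10;
    uint64_t rest  = budget_bytes - p.sample_bytes;
    p.bvh_bytes     = demand ? rest * bvh_need / demand : rest / 2;
    p.texture_bytes = rest - p.bvh_bytes;
    return p;
}
```

The same plan can be recomputed mid-render from live statistics to implement the re-balancing described above.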

When the renderer runs out of memory, it should automatically consider a number of fallback strategies: stealing memory from another sub-system, spilling to virtual memory, pruning the BVH, automatically tiling the render, etc.
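One way to structure that cascade is an ordered list of strategies, tried from cheapest to most drastic. This is a sketch under assumed names; the strategy callbacks stand in for real sub-system hooks:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Illustrative out-of-memory fallback cascade. The strategy names follow
// the text above; the mechanism itself is a sketch, not an implementation.
struct Fallback {
    std::string name;
    std::function<bool(uint64_t)> try_free;  // true if it freed enough bytes
};

// Walk the strategies in order of increasing cost until one succeeds.
// Returns the name of the strategy that resolved the shortage, or "" if
// none did (a real renderer would then tile the frame rather than abort).
std::string resolve_oom(const std::vector<Fallback>& strategies,
                        uint64_t needed_bytes) {
    for (const auto& f : strategies)
        if (f.try_free(needed_bytes)) return f.name;
    return "";
}
```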

Clients will occasionally tolerate a slow render, but never an unrenderable frame.

Integration

  • State of the art
    • Based on VCM with some MCMC extensions to deal with outdoor scenes.
    • It should "just work" and be presented as a black box.
    • Variance-driven for simplicity and adaptiveness.
    • Light Path Expressions to tweak light transport and define AOVs.
  • Progressive rendering with checkpointing.
  • Continuous statistics gathering to automate resources management.
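"Variance-driven" can be read as per-pixel adaptive sampling steered by an online variance estimate. A minimal sketch using Welford's online algorithm; the tolerance and minimum sample count are assumed values:

```cpp
#include <cassert>
#include <cmath>

// Variance-driven stopping test for one pixel, using Welford's online
// algorithm to track mean and variance incrementally.
struct PixelEstimate {
    int n = 0;
    double mean = 0.0, m2 = 0.0;

    void add(double sample) {
        ++n;
        double d = sample - mean;
        mean += d / n;
        m2 += d * (sample - mean);
    }
    // Standard error of the mean; this is what drives adaptiveness.
    double error() const {
        return n > 1 ? std::sqrt(m2 / (n - 1) / n) : 1e30;
    }
    // Stop sampling this pixel once the error falls below a tolerance.
    bool converged(double tol, int min_samples = 16) const {
        return n >= min_samples && error() < tol;
    }
};
```

The same running statistics double as input to the resource-management loop: pixels (or tiles) with high error get more of the sample budget.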

Lights

  • Just light emitting surfaces with an optional angular distribution.
  • Environment lighting
  • Hosek-Wilkie Sky model
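A sketch of the "emissive surface with an optional angular distribution" idea, using a cosine-power lobe around the surface normal. The lobe shape and lack of normalization are my assumptions, not part of the post:

```cpp
#include <cassert>
#include <cmath>

// Illustrative emission model: a light-emitting surface whose radiance
// falls off with a cosine-power lobe around the surface normal.
struct EmissiveSurface {
    double radiance;  // emitted radiance along the normal
    double exponent;  // 0 = diffuse emitter; larger = narrower lobe

    // Emitted radiance at angle theta from the normal.
    double emit(double cos_theta) const {
        if (cos_theta <= 0.0) return 0.0;  // no emission below the surface
        return radiance * std::pow(cos_theta, exponent);
    }
};
```

With exponent 0 this degenerates to a plain diffuse emitter, so area lights, spot-like lights, and everything in between fall out of one primitive.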

Geometry

  • Everything is a poly.
  • Surface refinement (higher-order surfaces and displacement)
    • Adaptive displaced SubD tessellation.
    • No render-time tessellation to avoid the usual bound expansion conundrum.
  • Instancing
  • Efficient multi-threaded BVH building and traversal.
    • Geometric data compression / quantization
    • Dynamic pruning/rebuilding based on statistics.
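To make the "geometric data compression / quantization" bullet concrete, here is one common form of it sketched out: quantizing a child bounding box to 8 bits per axis relative to its parent. The rounding must be conservative (floor for the min, ceil for the max) so the decoded box always encloses the original. The layout is an assumption for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Child AABB stored as 8-bit offsets within the parent AABB: 6 bytes
// instead of 24 floats' worth.
struct QuantizedBounds {
    uint8_t lo[3], hi[3];
};

QuantizedBounds quantize(const float pmin[3], const float pmax[3],
                         const float cmin[3], const float cmax[3]) {
    QuantizedBounds q{};
    for (int a = 0; a < 3; ++a) {
        float extent = pmax[a] - pmin[a];
        float scale  = extent > 0 ? 255.0f / extent : 0.0f;
        // Round outward so the quantized box never shrinks the child.
        q.lo[a] = (uint8_t)std::max(0.0f,   std::floor((cmin[a] - pmin[a]) * scale));
        q.hi[a] = (uint8_t)std::min(255.0f, std::ceil ((cmax[a] - pmin[a]) * scale));
    }
    return q;
}

// Decode back to world space; the result conservatively bounds the child.
void dequantize(const QuantizedBounds& q, const float pmin[3],
                const float pmax[3], float out_min[3], float out_max[3]) {
    for (int a = 0; a < 3; ++a) {
        float step = (pmax[a] - pmin[a]) / 255.0f;
        out_min[a] = pmin[a] + q.lo[a] * step;
        out_max[a] = pmin[a] + q.hi[a] * step;
    }
}
```

The slight bound inflation costs a few spurious ray-box hits; the memory savings usually more than pay for them on large scenes.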

APIs

  • OSL
    • Allows for creative shading while protecting the renderer
    • OSL graph support.
  • C APIs
    • Display Driver
    • Direct Rendering
    • Procedural Primitives
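As a sketch of what the display-driver C API might look like, here is a hypothetical callback interface through which the renderer pushes finished buckets to the host application. Every name here is invented for illustration; in a real header the types would be declared extern "C":

```cpp
#include <cassert>

// Hypothetical display-driver API: one finished bucket of pixels.
typedef struct {
    int x, y, width, height;  // bucket placement in the frame
    const float* pixels;      // RGBA, width * height * 4 floats
} rdr_bucket_t;

// User-supplied callback invoked for each finished bucket.
typedef void (*rdr_display_fn)(const rdr_bucket_t* bucket, void* user_data);

// Renderer-side helper delivering one bucket to the registered driver.
void deliver_bucket(rdr_display_fn fn, const rdr_bucket_t* b, void* user) {
    if (fn) fn(b, user);
}
```

A plain-C surface like this keeps the library embeddable from any language, which matters for both the command-line front end and DCC integrations.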

Engineering

  • CPU rendering only.
  • Use multi-threading and SSE with an emphasis on scalability across a large number of threads.
  • Memory efficiency is a primary concern.
  • Implement statistics gathering as an integral part of the rendering system.
    • Use stats to identify implementation and rendering bottlenecks.
    • Implement resource allocation re-balancing.
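Making statistics integral to the system mostly means counters cheap enough to leave on in production. A minimal sketch with relaxed atomics; the specific counters and the miss-ratio signal are illustrative assumptions:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Lock-free counters that worker threads bump during rendering and the
// resource manager reads to drive re-balancing.
struct RenderStats {
    std::atomic<uint64_t> rays_traced{0};
    std::atomic<uint64_t> texture_cache_misses{0};

    // Relaxed ordering: counters need no synchronization with other data.
    void count_ray()  { rays_traced.fetch_add(1, std::memory_order_relaxed); }
    void count_miss() { texture_cache_misses.fetch_add(1, std::memory_order_relaxed); }

    // One signal the resource manager could act on: a high miss ratio
    // suggests growing the texture cache at another sub-system's expense.
    double miss_ratio() const {
        uint64_t r = rays_traced.load(std::memory_order_relaxed);
        uint64_t m = texture_cache_misses.load(std::memory_order_relaxed);
        return r ? (double)m / (double)r : 0.0;
    }
};
```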

Deployment

  • The renderer should be available both as a library and a command line executable.
  • Katana integration is a priority.
