Grey 18 workflow


In the latest release of YafaRay, developer David Bluecame has added several useful features to the clay render pass, along with new texturing options. Clay render is a feature intended for previsualisation work, but it is also useful for setting up correct lighting power in a 3D scene. I use a lighting workflow that takes elements from existing lighting techniques but puts a stronger emphasis on certain steps and procedures in order to achieve a more credible result. The six basic elements of this workflow are:

  1. References, references, references!
  2. Scene basic understanding.
  3. Improved diffuse components.
  4. Lighting setup with Clay render.
  5. Tone mapping.
  6. Post processing work.

1. References, references, references!

In any production pipeline, every department gathers references during pre-production to set solid foundations for its work. Lighting TDs should likewise gather references and a basic understanding of the particular lighting conditions, especially anything concerning lighting ratios and exposure values in the scenes we are trying to reproduce. All digital cameras nowadays have a manual mode that allows for focused scene metering, which can be useful for understanding the tonal proportions and equivalences we can later apply to our renders.

Reference image for an exterior render, 1/250 F11 ISO 200 Partial Metering.

2. Scene basic understanding.

A basic frame of mind when using a raytracer is to always think of it as a sum of components instead of one global calculation: diffuse + specular shading, indirect + direct lighting, or light source A + light source B. A raytracer produces different kinds of rays depending on scene properties, and a render job always consists of a sum of different components and ray types. A good render engine operator is able to foresee and mentally decouple, at least in a basic way, the kind and amount of components a given scene will contain before actually rendering it, particularly the contributions of the indirect lighting component and of caustic effects to the final result. This mindset is also useful for any kind of workflow based on render passes.
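The sum-of-components mindset can be sketched in a few lines of Python. This is purely illustrative and not YafaRay code; the pass names and values below are made up:

```python
# Illustrative only: a final pixel as a sum of decoupled components.
def combine_passes(direct_diffuse, indirect_diffuse, specular, caustics):
    """Each argument is a linear-RGB tuple for one pixel."""
    return tuple(d + i + s + c for d, i, s, c in
                 zip(direct_diffuse, indirect_diffuse, specular, caustics))

pixel = combine_passes(
    direct_diffuse=(0.30, 0.25, 0.20),    # key light hitting the surface
    indirect_diffuse=(0.08, 0.07, 0.06),  # bounced (GI) contribution
    specular=(0.05, 0.05, 0.05),          # highlights
    caustics=(0.00, 0.00, 0.00),          # none in this made-up scene
)
```

Being able to estimate each of those terms before rendering (for instance, "this interior will be dominated by the indirect term") is exactly what lets you choose sensible lighting and sampling settings up front.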

3. Improved diffuse components.

One of the most difficult tasks for developers and users of render engines is achieving good diffuse shading, which is a basic component of any credible result, even in scenes with predominant specular components. One of the main issues is the lack of observer-dependent effects such as masking, shadowing and interreflections at any level in bump mapping and normal mapping relief models. This means that light sampled on a fine irregular surface produced with bump or normal mapping does not change its look depending on the observer's viewpoint, which is not a realistic assumption. Take a look at the test below:

We are rendering a cube where the three visible faces receive the same amount of incident lighting from a light type with no attenuation, in this case a sun lamp. However, the camera is positioned so that the three rendered faces form different angles with it (see image above).

Comparison between a Lambert surface using fine bump mapping (left) and an Oren-Nayar surface without bump mapping (right). Notice how bump mapping produces flat shading regardless of camera position.

Other issues are scale and variation. The Oren-Nayar reflectance model works well at the very micro-scale level, while for middle- or macro-level detail users have to resort to true displacement to get realistic gradients, which is often not practical or even possible. Oren-Nayar also produces a very uniform effect.

Oren-Nayar Sigma mapping

To partially solve these issues, a feature has been added so that the Oren-Nayar sigma value can be mapped with a scalar texture. The aim of this feature is to reinforce bump, normal and displacement mapping effects and to add variation.
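For reference, the widely published Oren-Nayar approximation looks like this in Python. The only change the new feature implies is that sigma comes from a per-texel texture lookup instead of being a single material constant; the function, arguments and test values here are my own illustration, not YafaRay's internal API:

```python
import math

def oren_nayar_diffuse(rho, sigma, theta_i, theta_r, phi_diff):
    """Classic Oren-Nayar A/B approximation of diffuse reflection.

    rho      -- diffuse albedo (0..1)
    sigma    -- roughness; with the new feature this value would come
                from a scalar texture lookup at the shading point
                instead of being one constant per material
    theta_i  -- angle between surface normal and incoming light
    theta_r  -- angle between surface normal and view direction
    phi_diff -- azimuth difference between light and view directions
    """
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha, beta = max(theta_i, theta_r), min(theta_i, theta_r)
    return (rho / math.pi) * math.cos(theta_i) * (
        a + b * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta))

# sigma = 0 collapses to plain Lambert shading; a non-zero sigma changes
# the result for the very same incident light, depending on the view:
lambert = oren_nayar_diffuse(0.5, 0.0, math.radians(45), math.radians(10), 0.0)
rough   = oren_nayar_diffuse(0.5, 0.6, math.radians(45), math.radians(10), 0.0)
```

Because the view angles theta_r and phi_diff enter the formula, Oren-Nayar is observer-dependent in exactly the way plain bump-mapped Lambert shading is not.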

Fine bump mapping with uniform Oren-Nayar sigma. Fine bump mapping reinforced with a coincident map for Oren-Nayar sigma.

Diffuse reflection strength mapping

Another new feature is mapping the overall diffuse reflection strength value with a scalar texture. Render engines internally work out surface sampling decoupled into chrominance (color) and luminance (brightness) coordinates, to accommodate digital color maths, which is in fact modelled after human perception. The general aim is to allow more information to be processed and encoded in the luminance coordinates than in the chrominance ones, just as the human retina works. By mapping the reflection strength value with a scalar texture, users can introduce variation in the luminance channel independently of the chroma values. Again, the aim of this feature is to reinforce bump, normal and displacement mapping effects and to add variation. One advantage of these features is that they are physically correct, so they behave consistently across different lighting setups, camera views and animation frames.
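The decoupling is easy to verify: multiplying a linear RGB color by a scalar moves its luminance while leaving its chromaticity (the channel proportions) untouched. A minimal sketch with made-up values:

```python
def apply_diffuse_strength(rgb, strength):
    """Scale a linear-RGB diffuse color by a scalar strength factor
    (the per-texel value a scalar texture would supply)."""
    return tuple(c * strength for c in rgb)

def chromaticity(rgb):
    """Channel proportions -- a brightness-independent color measure."""
    total = sum(rgb)
    return tuple(c / total for c in rgb) if total else rgb

base = (0.60, 0.30, 0.10)                   # made-up linear RGB texel
dimmed = apply_diffuse_strength(base, 0.5)  # half the luminance
# chromaticity(base) and chromaticity(dimmed) come out identical:
# only the luminance moved, not the chroma.
```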

Fine bump mapping with uniform Oren-Nayar sigma. Fine bump mapping reinforced with a coincident map for diffuse reflection strength.
Bump mapping + uniform ON sigma. Bump mapping + mapped ON sigma + mapped DRS.

Diffuse detail and rich relief mapping, seen with a long focal distance, produce tonal variations at the pixel level, which means more realism and more light absorption than plain surfaces provide.


4. Lighting setup with Clay render.

Users often ask how much lighting power is needed for a given scene, or how lighting power settings translate into illuminance units (lux) for realistic setups. Lighting power is a relative concept in YafaRay, since we are just simulating a scene's exposure range; other concepts, like realistic material reflectances and proper lighting ratios, are more important. With realistic reflectances, materials in the scene behave consistently across different camera views, lighting setups or tone mapping ranges. Another important concept is producing realistic lighting ratios, meaning the amount of fill light relative to the key light power.
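Lighting ratios are usually easiest to reason about in EV stops. A trivial helper, my own illustration rather than any YafaRay tool:

```python
import math

def lighting_ratio_stops(key_power, fill_power):
    """Key-to-fill lighting ratio expressed in EV stops."""
    return math.log2(key_power / fill_power)

# A key light four times as strong as the fill: a 4:1 ratio,
# i.e. two stops between key-lit and fill-only areas.
ratio = lighting_ratio_stops(4.0, 1.0)  # 2.0
```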

One result of this mindset is that rendering and lighting setups should be linked to camera views rather than to scenes. Just as photographers change exposure settings with changing lighting conditions for different views of the same subject, the same 3D scene seen from another view could need different "exposure" settings, and thus different lighting power and sampling requirements.

To achieve correct lighting power for any view, I use a clay render pass in which all materials use a grey 18 color, which is just middle grey (#808080). This color sits at approximately 50% in the value component of HSV coordinates. With this grey, the average surface reflects only about 1/5th of the incoming radiance. This is also the average reflectance assumed by digital camera light meters (other sources say it is grey 12%). Using this approach has several consequences: surfaces are more stable to global illumination effects in GI algorithms. For instance, they will likely emit less indirect lighting, which also improves key-to-fill ratios and scene contrast.
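The 1/5th figure comes from decoding the sRGB value #808080 to linear reflectance with the standard sRGB formula; the same maths gives roughly 12% for #616161, the grey 12 alternative mentioned above:

```python
def srgb_to_linear(v):
    """Standard sRGB decoding of one 0..1 component to linear reflectance."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

grey18 = srgb_to_linear(0x80 / 255)  # #808080 -> ~0.216, roughly 1/5th
grey12 = srgb_to_linear(0x61 / 255)  # #616161 -> ~0.12
```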

Comparison of the Cornell scene, both renders using the same lighting and GI parameters:
  • The one on the left uses 100% diffuse strength for the white material, and the other colors use 100% diffuse strength as well.
  • On the right, white uses only 70% diffuse strength and the other colors use between 20% and 40% diffuse strength.

Which one is more realistic?

Another consideration when producing a clay render pass for lighting tuning is to always render it with relief mapping enabled, whether bump mapping, normal mapping or real displacement. It is also good to keep sigma and diffuse reflection strength mapping enabled in the clay pass. All these components will likely affect the amount of surface reflectance, which is why I need them to get a precise lighting power value for the scene. For this purpose, a feature has been added to the clay render section which keeps the original relief mapping of each material in the scene. Indeed, working out good relief mapping (bump, normal, displacement) is the next thing I do in my scenes right after modelling is finished.

Once the grey 18 material is applied to the whole scene, I adjust lighting power. Although the sky in the background will likely distort the relative perception of the average grey 18 reflectance, this is my take on the great scene shared by Olivier Boscournu and posted in our Demo Files repository. All materials are rendered with grey 18 (#808080), and bump & sigma mapping is applied as well in the main materials. I added light power from an outside portal light until all materials more or less looked middle grey to me. The next step is exposing for highlights, which means finding a reflectance value for the material I want at the upper limit of the histogram, without actually overexposing it. In this scene, the white walls are the brightest material. This step could require further adjustments of lighting power. User perception is king!

White is one of the most difficult colors to get right in renders, because it is not always easy to land it just on the boundary of the exposure range. I miss a tool in render engines that exists in digital cameras when you review your pictures, consisting of an intermittent blackout of overexposed areas. In the render above, the white material is in fact a light grey (#B8B8B8) which reflects only about half of the incoming radiance. In the Zone System, this means approximately two EV stops between grey 18 (#808080) and the light grey (#B8B8B8) used for the white walls, which would place it in the upper half of Zone VII. This zone is described as "very light skin; shadows in snow with acute side lighting", but Zone VII is actually used in cinematography for textured white. This setup also matches how my DSLR camera sees a scene like this: two full EV stops between a middle-grey floor and a white wall under similar lighting conditions and with similar surface properties.

This step will likely set the right amount of power needed for the light sources. The next step consists of iterating material tuning on the clay render pass until all the scene materials are eventually configured. For this purpose, a tool has been added so that a material can be rendered with its original settings inside the clay render pass.


The most important idea in this workflow is that we set a neutral middle reference against which every material in the scene is tested, using our perception of tones and ratios. In fact, this neutral reference could be a grey other than grey 18. The result of using another grey is that we accommodate lighting power to place whites and middle tones in the area of the histogram we want, but it also changes the reference's ability to cast indirect lighting and its contribution to the final sum: with darker greys there is more contrast. Choosing one grey or another will probably depend on which part of the histogram we want more range to work with. Also, rendering against a neutral reference makes your renders more coherent for post-editing work.

Grey 18 clay pass (#808080)
Grey 12 clay pass (#616161)

Texture normalisation

One of the consequences of using grey 18 or darker greys instead of white as a middle reference is that we will often need to take light out of RGB textures sourced from the internet or taken with our cameras, before lighting them up again in the render engine. This is a process I call 'texture normalisation'. It basically consists of darkening RGB textures using the "reflection strength" slider, as in the picture below. Reflection strength is a multiplying factor that works on the material's chromaticity and luminance output.
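Texture normalisation can be expressed as computing the reflection-strength factor that brings a texture's mean linear reflectance down to grey 18. This is a sketch of the idea only, not how any particular slider is implemented; the texel values and the 0.216 target are assumptions taken from the discussion above:

```python
def normalise_texture(texels, target_mean=0.216):
    """Compute the reflection-strength factor that darkens a linear-RGB
    texture so its mean reflectance lands near grey 18 (~0.216 linear),
    and return the factor together with the scaled texels."""
    mean = sum(sum(t) / 3 for t in texels) / len(texels)
    strength = min(1.0, target_mean / mean) if mean > 0 else 1.0
    return strength, [tuple(c * strength for c in t) for t in texels]

# A too-bright internet texture averaging ~0.5 linear reflectance:
strength, scaled = normalise_texture([(0.5, 0.5, 0.5), (0.6, 0.4, 0.5)])
# strength comes out at 0.432, pulling the average down to grey 18.
```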

5. Tone mapping.

If you have made it this far, know that this section should in fact be at the beginning. By default, YafaRay uses linear tone mapping for 8-bit encoding, which is used for render display and for 8-bit saving (JPG and PNG image formats). The image below is an overexposed YafaRay render of a greyscale texture mapped on a plane, together with its histogram:

The histogram is flat because there is the same number of pixels for every shade of grey, and the linear tone mapping treats them all equally. Besides, all the overexposed shades of grey are piled into that final column of 255 white. This is not how digital cameras work, since it would be difficult to tone map in a linear way, without heavy clipping, many common scenes photographers encounter in real life, particularly skies, highlights and dark areas. Digital cameras use a so-called Film Standard s-type curve that compresses some extra exposure into the top and bottom ends of the standard low dynamic range. This way cameras can encode more exposure stops and lighting information into the available range, although this kind of tone mapping has disadvantages, such as less contrast in highlights and dark areas. It is also worth noting that digital cameras probably use variants of the film curve depending on the specifics of the view, but always with the same goal: encoding more exposure into the rather limited 8-bit dynamic range. A good source of information on this topic is Norman Koren's page.

Typical response curve used in film and digital cameras.

The spike at the end of the histogram above is not natural; it shows the typical range compression produced by a Film Standard curve 8-bit tone mapping in a Canon EOS system. The six stops of exposure that 8-bit encoding can theoretically display on our monitors and store in JPG and PNG formats correspond to the range mapped by a typical Film Standard curve. Therefore, by using the default linear tone mapping we are encoding a bit less range than a typical digital camera (about 1-2 stops), and in a different way, with more contrast in highlights and dark areas than in real photographs. So if we want our renders to have photographic qualities, we should use a similar tone mapping curve. Conversely, it is important to realise that with this tone mapping, if we want to reveal a very subtle contrast, we had better not place it at either end of the histogram.
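The difference between the two approaches can be sketched with a linear clip and a generic film-like sigmoid pivoting on middle grey. This is an illustrative curve shape only, not the actual Canon or Film Standard response:

```python
def linear_tonemap(x):
    """Default linear mapping: everything above 1.0 is clipped to white."""
    return min(1.0, max(0.0, x))

def film_tonemap(x, contrast=1.6, pivot=0.18):
    """Generic film-like s-curve pivoting on middle grey: highlights and
    shadows are compressed instead of clipped. Illustrative shape only."""
    if x <= 0.0:
        return 0.0
    xk, pk = x ** contrast, pivot ** contrast
    return xk / (xk + pk)

# A strongly overexposed value: the linear map has already clipped it
# to pure white, while the s-curve still leaves highlight headroom.
lin_hi = linear_tonemap(2.0)   # 1.0 (clipped)
film_hi = film_tonemap(2.0)    # below 1.0 (compressed highlight)
```

Middle grey maps to 0.5 under the s-curve, while the extra exposure above and below it is squeezed into the shoulder and toe instead of being thrown away.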

In any case, whether you use a filter, levels or a curve method (using the Blender nodes editor, for instance), it is always better to work with the final tone mapping in the render viewport than to post-process renders in an HDR editor afterwards, even for preliminary render and lighting tests. The reason is that tone mapping an HDR output can reveal or hide Monte Carlo noise and high-contrast aliasing artifacts that should inform decisions about the current render and lighting parameters. This is why render engines are nowadays implemented inside compositing applications as well.

6. Post processing work.

I am not a big fan of using denoise filters on indoor scenes. Following human perception, color spaces and digital color maths allow for more information to be processed and encoded in the low range of the histogram. This means that shadowed areas receiving only indirect lighting, and any surface receiving low soft lighting, will show more diffuse detail and richer gradients than areas with a lot of direct lighting. The problem is that those darker areas are also the ones most likely to show Monte Carlo noise, and therefore to get the denoise treatment. Denoise filters have a much deeper impact on renders than on real photographs: they destroy the otherwise hard-to-achieve diffuse detail we put into our work, and can kill a lot of realism. Besides, I believe good "biased" algorithms are a much better solution than any denoise filter, at least for still images.