In the latest release of YafaRay, developer David Bluecame has added several useful features to the clay render pass and new texturing options. Clay render is a feature intended for previsualisation work, but it is also a useful tool for setting up the lighting power of a 3D scene. I use a lighting workflow that takes elements from existing lighting techniques but puts a stronger emphasis on certain steps and procedures in order to achieve a more credible result. The basic elements of this workflow are described below:
In any production pipeline, every department gathers references during pre-production to lay solid foundations for its work. Lighting TDs should likewise gather references and a basic understanding of a particular lighting condition, particularly anything concerning lighting ratios and exposure values in the scenes we are trying to reproduce. All digital cameras nowadays have a manual mode that allows for focused scene metering, which can be useful for understanding the tonal proportions and equivalences we can use afterwards in our renders.
Reference image for an exterior render, 1/250 F11 ISO 200 Partial Metering.
A basic frame of mind when using a raytracer is to always think of it as a sum of components rather than one global calculation: diffuse + specular, indirect lighting + direct lighting, or light source A + light source B. A raytracer is always producing different kinds of rays depending on the scene properties, and a render always consists of a sum of different components and ray types. A good render engine operator is able to foresee, and mentally decouple in a basic way, the kind and amount of components a given scene will have before actually rendering it, particularly the contribution of the indirect lighting component to the final result and caustic effects. This mindset is also good for any kind of workflow based on render passes.
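As a toy illustration of this "sum of components" mindset, here is a minimal Python sketch. The pass names and values are hypothetical, not YafaRay's actual pass list:

```python
# Hypothetical decomposition of one pixel of a render into components.
# These pass names and values are illustrative only.
passes = {
    "direct_diffuse":   0.42,   # light source A + light source B, first bounce
    "indirect_diffuse": 0.18,   # GI contribution
    "specular":         0.07,
    "caustics":         0.02,
}

# The beauty result is simply the sum of the decoupled components,
# which is what makes a pass-based compositing workflow possible.
beauty = sum(passes.values())
print(round(beauty, 2))  # → 0.69
```

Being able to estimate beforehand how large each of these terms will be, especially the indirect one, is what the paragraph above calls "mentally decoupling" the scene.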
One of the most difficult tasks for developers and users of render engines is achieving good diffuse shading, which is a basic component of any credible result. One of the main issues is the lack of observer-dependent effects, such as masking, shadowing and interreflections at any level, in bump mapping and normal mapping relief models. This means that light sampled on a fine irregular surface produced with bump or normal mapping does not change its look depending on the observer's viewpoint, which is not a realistic assumption.
|We are rendering a cube where the three visible faces get the same amount of incident lighting from a light type with no attenuation, in this case a sun lamp. However, the camera is positioned so that the three rendered faces make different angles with it (see image above).|
Comparison between a Lambert surface using a fine bump mapping (left) and an Oren-Nayar surface without bump mapping (right). Notice how bump mapping produces a flat shading regardless of camera position.
Other issues are scale and variation. The Oren-Nayar reflectance model works well at the very micro scale, while for middle or macro level detail users have to resort to true displacement to get realistic gradients, which is often not practical or possible. Oren-Nayar also produces a very uniform effect.
To partially solve these issues, a feature has been added so that the Oren-Nayar sigma value can be mapped with a scalar texture. The aim of this feature is to reinforce bump, normal and displacement mapping effects and to add variation.
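For reference, here is a minimal Python sketch of the qualitative Oren-Nayar model, showing where a per-point sigma would plug in. The function and its arguments are illustrative, not YafaRay's internal API:

```python
import math

def oren_nayar(n_dot_l, n_dot_v, phi_diff_cos, sigma):
    """Qualitative Oren-Nayar diffuse term.
    sigma is the surface roughness; with the new feature it would come
    from a scalar texture lookup at the shading point instead of being
    one constant for the whole material."""
    s2 = sigma * sigma
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    theta_i = math.acos(max(-1.0, min(1.0, n_dot_l)))
    theta_r = math.acos(max(-1.0, min(1.0, n_dot_v)))
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return max(0.0, n_dot_l) * (A + B * max(0.0, phi_diff_cos)
                                * math.sin(alpha) * math.tan(beta))

# With sigma = 0 the model degenerates to plain Lambert (cosine only):
print(oren_nayar(0.7, 0.9, 0.5, 0.0))  # → 0.7
```

Note that the result depends on both the light direction and the view direction, which is exactly the observer-dependent behaviour that plain bump-mapped Lambert shading lacks.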
|Fine bump mapping with uniform Oren-Nayar sigma.||Fine bump mapping reinforced with a coincident map for Oren-Nayar sigma.|
Another new feature is the mapping of the overall diffuse reflection strength value with a scalar texture. Again, render engines internally work out surface sampling decoupled into chrominance (color) and luminance (brightness) coordinates, to accommodate digital color maths, which is in fact modelled after human perception. The general aim is to allow more information to be processed and encoded into the luminance coordinates than into chrominance, just as the human retina works. By mapping the reflection strength value with a scalar texture, users can add variance in the luminance channel independently of the chroma values. The aim of this feature is to reinforce bump, normal and displacement mapping effects and to add variation. One advantage of these features is that they are physically correct, so they will behave consistently across different lighting setups, camera views and animation frames.
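A tiny sketch of the idea, assuming a hypothetical shading function (not YafaRay's actual code): multiplying the diffuse colour by a scalar strength scales luminance while leaving the chromaticity of the base colour untouched.

```python
# Hypothetical illustration of mapping diffuse reflection strength:
# a scalar factor sampled from a texture scales the luminance output
# without altering the ratios between R, G and B (the chroma).
def shade_diffuse(base_rgb, strength):
    """strength in [0, 1], sampled from a scalar texture at this point."""
    return tuple(c * strength for c in base_rgb)

base = (0.8, 0.4, 0.2)           # chromaticity stays the same...
print(shade_diffuse(base, 0.5))  # ...only the brightness halves
```

This is why the feature adds variation in the luminance channel independently of the chroma values, as described above.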
|Fine bump mapping with uniform Oren-Nayar sigma.||Fine bump mapping reinforced with a coincident map for diffuse reflection strength.|
|Bump mapping + uniform Oren-Nayar sigma||Bump mapping + mapped Oren-Nayar sigma + mapped diffuse reflection strength|
Users often ask how much lighting power is needed for a given scene, or how lighting power settings translate into illuminance units (lux) for realistic setups. Lighting power is a relative concept in YafaRay, since with it we are just simulating a scene exposure range, while other concepts like realistic material reflectances and proper lighting ratios are much more important. With realistic reflectances, materials in the scene behave consistently across different camera views or lighting setups. Another important concept is producing realistic lighting ratios, a lighting ratio being the amount of fill light relative to the key light power.
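Lighting ratios are easiest to reason about in stops. A small sketch with hypothetical power values:

```python
import math

# Hypothetical key and fill light powers; the units do not matter,
# only their ratio does, which is why lighting power is "relative".
def ratio_in_stops(key_power, fill_power):
    """A lighting ratio of 4:1 equals log2(4) = 2 stops of difference."""
    return math.log2(key_power / fill_power)

print(ratio_in_stops(400.0, 100.0))  # → 2.0
```

Doubling every light in the scene leaves this ratio, and therefore the look of the lighting, unchanged; only the exposure shifts.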
To achieve correct lighting power for any scene, I use a clay render pass in which all materials use a grey 18 color, which is a middle grey (#808080). This color has approx. 50% in the value component of the HSV coordinates. With this grey, the average surface reflects only about 1/5th of the incoming radiance. This is also the average reflectance assumed by digital camera light meters (other sources say it is 12%). There are several consequences of using this approach: surfaces are more stable to global illumination effects in GI algorithms. For instance, they will emit less indirect lighting, which also improves key-to-fill ratios and scene contrast.
|Comparison of the Cornell scene, both versions using the same lighting and GI parameters.|
Which one is more realistic?
Another consideration when producing a clay render pass for lighting tuning is to always render it with relief mapping enabled, whether bump mapping, normal mapping or real displacement. It is also good to keep sigma and diffuse reflection strength mapping enabled in the clay pass. All these components affect the amount of surface reflectance, which is why I need them in order to get a precise lighting power value for the scene. For this purpose, a feature has been added to the clay render section which keeps the original relief mapping from each material in the scene. Indeed, working out a good relief mapping is the next thing I do in my scenes right after modelling is finished.
Although the sky in the background will likely distort the relative perception of the average grey 18 reflectance, this is my take on the great scene shared by Olivier Boscournu and posted in our Demo Files repository. All materials are rendered with grey 18 (#808080), and bump & sigma mapping is applied as well in the main materials. I have added light power from an outside portal light until all materials more or less look middle grey to me. The next step is exposing for highlights, which means finding the material in the scene whose reflectance value I want at the upper limit of the histogram, without actually overexposing it. In this scene, the white walls are the brightest material. This step may require further adjustments of lighting power. User perception is king.
White is one of the most difficult colors to get right in renders, because it is not always easy to land it just on the boundary of the exposure range. I miss a tool in render engines that exists in digital cameras when you review your pictures, which consists of an intermittent black-out on overexposed areas. In the render above, the white material is in fact a light grey (#B8B8B8), which reflects only about 1/2 of the incoming radiance. In the Zone System, this means approx. two EV stops between grey 18 (#808080) and the light grey (#B8B8B8) used for the white walls, which would fall in the lower half of Zone VII. This zone is described as "very light skin; shadows in snow with acute side lighting", and Zone VII is actually used in cinematography for textured white. This setup also matches how my DSLR camera sees a scene like this: two EV stops between a middle grey floor and a white wall with similar lighting conditions and surface properties.
This step will likely set the right amount of power needed for the light sources, and the next step consists of iterating material tuning in the clay render pass until all the scene materials are eventually configured. For this purpose, a tool has been added so that a material can be rendered with its original settings inside the clay render pass.
The most important idea in this workflow is that we set a neutral middle reference that every material in the scene is tested against. In fact, this neutral reference could be a grey other than grey 18. The result of using another grey is that we will accommodate lighting power to get whites and middle tones in the area of the histogram we want, but it also changes how much indirect lighting the clay base casts and its contribution to the final sum. With darker greys there is more contrast. Using one grey or another will probably depend on which part of the histogram we want more range to work with.
|Grey 18 clay pass (#808080)|
|Grey 12 clay pass (#616161)|
One of the consequences of using grey 18, or darker greys, instead of white as a middle reference is that many times we will need to take light out of textures before lighting them up again in the render engine, a process I call "texture normalisation". It basically consists of darkening textures with the "reflection strength" slider, as in the picture below. Reflection strength is a multiplying factor that works on the material's chromaticity and luminance output.
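A hypothetical sketch of how such a normalisation factor could be chosen (the function and the target value are assumptions for illustration, not a YafaRay tool): pick a reflection strength that brings the texture's average linear luminance down to the grey-18 reference.

```python
# Hypothetical "texture normalisation": choose a reflection strength
# factor so the texture's mean linear luminance lands near grey 18.
def normalisation_strength(texture_luma, target=0.18):
    """texture_luma: linear-space luminance samples of the texture."""
    mean = sum(texture_luma) / len(texture_luma)
    return min(1.0, target / mean)  # only darken, never brighten

# A bright texture averaging 0.45 linear luminance needs strength 0.4:
print(round(normalisation_strength([0.5, 0.4, 0.45]), 2))  # → 0.4
```

The point is that the texture is darkened once, and the renderer's lighting then brings it back up in a controlled, physically consistent way.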
By default, YafaRay uses linear tone mapping for 8-bit encoding, which is used for the render display and for 8-bit saving (JPG, PNG). The image below is an overexposed YafaRay render of a greyscale texture on a plane, with its corresponding histogram:
The histogram is flat because there is the same number of pixels for every shade of grey and the linear tone mapping treats them all equally. Besides, all overexposed shades of grey are crammed into that final column of 255 white. This is not how digital cameras work, since it would be difficult to tonemap in a linear way without clipping many common scenes photographers encounter in real life, particularly skies, highlights and dark areas. Cameras use a so-called "Film Standard" s-type curve that compresses some extra exposure into the top and bottom ends of the standard dynamic range. This way cameras can encode more exposure stops and lighting into the low dynamic range, although this kind of tone mapping has disadvantages, like less contrast in highlights and dark areas.
Typical Film response curve used in digital cameras.
The spike at the end of the histogram above shows the typical compression of range produced by a Film Standard curve tonemapping in a Canon EOS system. The 6 stops of exposure that 8-bit encoding can theoretically display on our monitors and store in JPG and PNG formats would in fact correspond to the range mapped by a typical Film Standard curve. Therefore, by using YafaRay's linear tonemapping we are reproducing less range than a typical digital camera (about 1-1.3 stops less) and in a different way, with more contrast in highlights and dark areas.
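The difference between the two strategies can be sketched in a few lines. Here a simple Reinhard-style operator stands in for a real Film Standard curve (the actual camera curves are proprietary, so this is only an illustration of the shape):

```python
# Linear mapping clips: every overexposed value collapses into white.
def linear_tonemap(x):
    return min(1.0, x)

# An s-shaped curve compresses highlights smoothly instead of clipping.
# Reinhard's simple operator x/(1+x) is used here as a stand-in.
def film_tonemap(x):
    return x / (1.0 + x)

for exposure in (0.5, 1.0, 2.0, 4.0):
    print(linear_tonemap(exposure), round(film_tonemap(exposure), 2))
```

With the linear map, the inputs 2.0 and 4.0 both become the same pure white, the spike at the end of the histogram; on the curve they remain distinguishable highlight tones, which is how cameras squeeze extra stops into the low dynamic range.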
So if we want our renders to have photographic qualities, we should use a similar tonemapping curve. For all purposes, whether you use a filter, levels or a curve method, with the Blender nodes editor for instance, it is always better to work with the final tone mapping in our render viewport than to post-process our renders in an HDR editor afterwards, even in render tests. The reason is that editing an HDR output can reveal or hide montecarlo noise and high-contrast aliasing artifacts that we need to see in order to make decisions about our current render parameters.
I am not a big fan of using denoise filters in indoor scenes. Following human perception, color spaces and digital color maths allow for more information to be processed and encoded in the low range of the histogram. This means that shadowed areas that get only indirect lighting, and any surface getting low soft lighting, will show more diffuse detail and richer gradients than areas with a lot of direct lighting. The problem is that those darker areas are also the ones most likely to show montecarlo noise, and therefore get the denoise treatment. Denoise filters have a much deeper impact on renders than on real photographs: they destroy the otherwise difficult-to-achieve fine diffuse detail we put into our work, and can kill a lot of realism. Besides, I believe good "biased" algorithms are a much better solution than any denoise filter, at least for still images.