Render Settings

Table of Contents:

General settings.

General Settings

Ray Depth.

Ray depth is the number of times that camera rays can be reflected by specular surfaces until a diffuse component is found. Specular surfaces in YafaRay are:

  • Shinydiffuse Mirror.
  • Glossy material.

It also defines the number of times that camera rays can pass through transparent surfaces, again until a diffuse component is found. Transparent surfaces are:

  • Shinydiffuse with Transparency.
  • Glass material.

This setting makes it possible to render inter-reflections or consecutive transparent surfaces down to a certain depth. A typical symptom of insufficient ray depth is a transparent surface or a reflection being rendered black.
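The role of the ray depth limit can be sketched with a toy tracer (illustrative Python, not YafaRay code; the "scene" here is just a list of consecutive specular surfaces in front of a diffuse one):

```python
# Toy sketch: a ray-depth limit cuts off chains of specular bounces.

def trace(surfaces, depth=0, max_ray_depth=2):
    """Follow a camera ray through consecutive specular surfaces."""
    if not surfaces:
        return "background"
    kind = surfaces[0]
    if kind == "mirror":
        if depth >= max_ray_depth:
            return "black"          # ray depth exhausted: rendered black
        return trace(surfaces[1:], depth + 1, max_ray_depth)
    return "diffuse"                # diffuse component found, recursion stops

# Two facing mirrors need ray depth >= 2 to reach the diffuse wall behind them:
print(trace(["mirror", "mirror", "diffuse"], max_ray_depth=2))  # diffuse
print(trace(["mirror", "mirror", "diffuse"], max_ray_depth=1))  # black
```

With `max_ray_depth=1`, the second mirror is never resolved, which is exactly the black-reflection artifact described above.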

Shadow Depth and Transparent shadows.

The Shadow Depth and Transparent shadows settings work together. Transparent shadows are shadows cast by 'fake' transparent surfaces, i.e. surfaces without an index of refraction (IOR). Enabling Transparent shadows makes it possible to produce lighting filtered by a transparent surface, as an alternative to the caustic effects produced by GI algorithms. In YafaRay there are two kinds of 'fake' transparent materials without IOR:

  • Shinydiffuse with a Transparency value > 0.
  • Glass material with Fake glass option enabled.

Shadow Depth controls the number of 'fake' transparent surfaces that shadow rays can pass through while looking for light sources.
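The interaction between the two settings can be sketched like this (illustrative Python, not YafaRay code): a shadow ray walks through the 'fake' transparent surfaces between the shaded point and the light, filtering the light by each surface's transparency, until it either reaches the light or runs out of depth.

```python
def shadow_ray(transparencies, shadow_depth=2):
    """Fraction of light reaching the shaded point (0.0 = fully occluded)."""
    light_filter = 1.0
    for i, transparency in enumerate(transparencies):
        if i >= shadow_depth:
            return 0.0                  # too many surfaces: opaque shadow
        light_filter *= transparency    # light filtered by this surface
    return light_filter

# Two surfaces with 50% transparency between the point and the light:
print(shadow_ray([0.5, 0.5], shadow_depth=2))   # 0.25: filtered light
print(shadow_ray([0.5, 0.5], shadow_depth=1))   # 0.0: Shadow Depth too low
```

When Shadow Depth is too low for the number of surfaces in the way, the shadow ray reports full occlusion and the transparent shadow turns opaque.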

Below there is a comparison between different ways of handling the transparent shadows issue, and where the Transparent shadows and Shadow depth options might be useful. Direct lighting is used in all cases, although the results can be extrapolated to the other lighting methods, which have caustic and raytracing components:

Case 1

No method has been enabled for either light rays (caustics) or raytracing shadow rays to get through the transparent mesh. The mesh refraction is correctly rendered by raytracing primary rays, but shadow rays (blue) are occluded by the meshes as they look for light sources.

Case 2

Caustic photons, emitted from an area light, produce lighting after passing through the glass objects with IOR=1.55. Suzanne's round shape concentrates the flow of photons, so a caustic map with a fairly uniform density is produced.

However, the prismatic glass produces a caustic map with density issues, since there is no concentration of the photon flow. The other area light types (sphere, mesh) and HDR backgrounds (IBL, Sunsky) are prone to this kind of issue too.

Case 3

An alternative could be enabling the Fake glass option, as well as Transparent Shadows with a sufficient Shadow depth. Caustic photons aren't needed here, since the transparent shadows are rendered with raytracing shadow rays.

The transparent shadow produced by the prismatic glass is conveniently uniform, whereas the transparent shadow produced by Suzanne's head becomes completely unreal.

Case 4

Another alternative is using a spot light instead of the area types to shoot caustic photons; the narrower the spot beam, the better. It works well for caustics because the spot light beam helps to focus and concentrate the flow of caustic photons. Notice the superior quality of the caustic effects on both meshes (Fake glass and Transparent shadows are disabled here, so it is 'real' glass).

Case 5

In this case, objects use a Shinydiffuse material with Transparency. Caustic photons aren't needed here since transparent shadows are rendered with raytracing shadow rays.

However, Transparent shadows must be enabled, and a sufficient Shadow depth should be set for shadow rays to get through all transparent surfaces when looking for light sources.

Output Gamma.

Gamma correction performed on the render output to match the gamma of the color space the image will be displayed in, usually 2.2 for sRGB or 1.0 (linear) for HDR. By default this setting performs no gamma correction (output gamma = 1), in order to pass renders in linear space to the Blender Compositor. More information here.

Input Gamma.

Inverse gamma correction performed on render input (textures, shader colors and light colors), so that the render engine can work internally in linear space. The input gamma value should match that of the color space the input was created in, usually 2.2 for the sRGB color space.
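The whole gamma pipeline can be sketched as follows (illustrative Python using simple power-law gamma; the actual sRGB transfer curve is piecewise, but 2.2 is the usual approximation):

```python
def degamma(c, input_gamma=2.2):
    """Inverse gamma correction: gamma-encoded input -> linear space."""
    return c ** input_gamma

def encode(c, output_gamma=2.2):
    """Gamma correction on render output; output gamma 1.0 keeps it linear."""
    return c ** (1.0 / output_gamma)

mid_grey = 0.5                       # an sRGB-encoded texture value
linear = degamma(mid_grey)           # ~0.218: the value the engine works with
back = encode(linear)                # round-trips to 0.5 for sRGB display
print(round(linear, 3), round(back, 3))
```

Rendering with the correct input gamma but output gamma = 1 yields the linear image the Blender Compositor expects; applying output gamma 2.2 instead produces a display-ready sRGB image.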

Clamp RGB.

When Clamp RGB is enabled, the color values are clamped to a low dynamic range, for better anti-aliasing in high-contrast areas, for instance around visible area light sources and their reflections on specular surfaces. The examples below were made by sevontheweb. The upper image has aliasing issues in areas with strong contrast, but its colors are crisp. In the lower image Clamp RGB has been enabled: the anti-aliasing is better, but the colors are duller.
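The effect can be sketched in a few lines (illustrative Python, not YafaRay internals): samples above 1.0 are clipped before filtering, so one very bright sample no longer overwhelms the average of an edge pixel.

```python
def clamp_rgb(sample):
    """Clip an HDR sample to the [0, 1] low dynamic range."""
    return tuple(min(c, 1.0) for c in sample)

# A visible area light can return samples far above 1.0:
hot = (35.0, 30.0, 28.0)
dark = (0.05, 0.05, 0.05)

# An edge pixel averaging one light sample and one background sample:
unclamped = [(a + b) / 2 for a, b in zip(hot, dark)]            # still blown out
clamped = [(a + b) / 2 for a, b in zip(clamp_rgb(hot), dark)]   # usable grey edge
print(unclamped)
print(clamped)   # [0.525, 0.525, 0.525]
```

Without clamping, the averaged edge pixel is still far above 1.0 and displays as a hard jagged edge; with clamping it becomes an intermediate grey, which is the smoother but duller result seen in the example images.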

Other ways of solving this issue are:

  • Increasing the render resolution and then scaling back to the desired resolution using a good interpolation algorithm.
  • Multipass rendering with render post processing.
  • Using the Gaussian filter.

Clay Render.

Produces a clay render overriding all materials.

Related article: Clay Render tutorial, by Bupla.


Threads.

Ray tracing is a highly parallelizable process. Most data structures in a typical ray tracer can be shared by all available threads. In fact, raytracing is among the few software fields that take full advantage of multi-core CPUs: the more cores, the better. With this setting, users can fork the rendering calculation into several simultaneously running tasks, depending on their CPU specs.
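The idea can be sketched like this (illustrative Python, not YafaRay internals): the image is split into tiles, and a pool of worker threads renders them concurrently while sharing the same read-only scene data.

```python
from concurrent.futures import ThreadPoolExecutor

def render_tile(tile):
    """Stand-in for real per-pixel work inside one image tile."""
    x0, y0, x1, y1 = tile
    return [(x, y) for y in range(y0, y1) for x in range(x0, x1)]

def render(width, height, tile_size=32, threads=4):
    # Split the image into independent tiles...
    tiles = [(x, y, min(x + tile_size, width), min(y + tile_size, height))
             for y in range(0, height, tile_size)
             for x in range(0, width, tile_size)]
    # ...and let the thread pool render them in any order.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        results = list(pool.map(render_tile, tiles))
    return sum(len(r) for r in results)

print(render(64, 48, threads=4))   # 3072 pixels, regardless of thread count
```

Because each tile is independent and the scene is only read, no locking is needed on the hot path, which is why ray tracing scales so well with core count.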

Result to Blender.

When this setting is enabled, apart from the render output, the image is also saved in the Blender UV/Image editor [1]. This makes it possible to open the image in the Blender Compositor nodes as well [2]. Use Compositor nodes > Add > Input > Image [3] and the browse existing choices button [4] to add the YafaRay render to the Blender compositor.

More information about Blender composite nodes in the link below:

Related Tutorial: Compositing render passes.


Autosave.

The render is automatically saved in PNG format, with a time stamp in the file name so that previous renders aren't overwritten. The render file is saved in the same folder as the Blender scene file.

Alpha on autosave/animations.

Images automatically saved with Autosave and with Render anim are saved in PNG format, with RGB and alpha data, i.e. with an alpha channel instead of the rendered background.

Related article: Render Windows Settings.

Draw render parameters.

The most important render parameters are written in a badge in the rendered image. Use this feature to compare renders and to ask for advice in the YafaRay forums.

Output to XML.

The scene and render parameters are written to a YafaRay XML file. The XML file is located in the same folder as the Blender scene file.



Anti-aliasing Settings

Aliasing is an approximation error in the discrete sampling of a 3D scene. Its most common manifestation is jagged edges. Sources of aliasing are:

  • Geometry.
  • Very small details.
  • Textures.
  • Visible light sources.
  • Highlights on specular objects.
  • Contrast between bright and dark objects.
  • Sharp shadows.

Anti-aliasing is a set of sampling and reconstruction techniques used to mask or minimize aliasing artifacts.

First anti-aliasing pass.

The scene is sampled in a first pass, shooting as many rays per camera pixel as specified in AA samples. These rays, also called primary rays, are shot from the camera and intersect with the scene's diffuse components. Each intersection point is the origin of several types of secondary rays: for instance, the shadow rays used to sample area lights and backgrounds, pathtracing random paths, ambient occlusion rays and final gather directional samples.

Anti-aliasing samples in the first pass are therefore connected to the sampling values used in every secondary-ray algorithm. The more AA samples, the more intersection points acting as origins of secondary rays. The first anti-aliasing pass does not only reduce aliasing artifacts; it is in fact a multiplication factor for area light, background, Path tracing, Ambient Occlusion and Final Gather samples. Look at the comparisons below:

The image on the left uses 32 area light samples and 1 AA sample (1 AA pass, 1 AA sample, time 9s). The image on the right uses 1 area light sample and 32 AA samples (1 AA pass, 32 AA samples, time 29s). Notice how the area light soft shadows are almost identical in both images. The number of shadow rays sampling the area light is in fact the same in both images (32), but the method used to get there is different, as explained below:

On the left, 3 area light samples and 1 image sample are used; on the right, 1 area light sample and 3 image samples. The number of shadow rays sampling the area light is the same in both cases (3), and the noise reduction will be the same. However, the method on the left will always be more efficient, since there are fewer intersection points to compute: it is a bit less noisy and much faster in terms of render times. More examples below, with Photons & Final Gather and with Pathtracing:
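The trade-off can be put in rough numbers. The cost weights below are made-up illustrative constants, not measured YafaRay costs; the point is only that each AA sample pays for a full primary-ray intersection on top of its share of shadow rays.

```python
def per_pixel_cost(aa_samples, light_samples, intersect_cost=5, shadow_cost=1):
    """Shadow rays per pixel and a rough total cost (assumed unit weights)."""
    shadow_rays = aa_samples * light_samples
    total = aa_samples * intersect_cost + shadow_rays * shadow_cost
    return shadow_rays, total

print(per_pixel_cost(1, 32))   # (32, 37): 32 shadow rays, cheap
print(per_pixel_cost(32, 1))   # (32, 192): same 32 shadow rays, far more work
```

Both configurations fire the same 32 shadow rays at the area light, which is why the soft shadows look the same, but the second one also computes 32 primary-ray intersections per pixel, which is where the extra render time goes.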

64 FG samples, 1 AA pass, 1 AA sample, 74 s.
1 FG sample, 1 AA pass, 64 AA samples, 370 s.
128 Pathtracing samples, 1 AA pass, 1 AA sample, 269 s.
1 Pathtracing sample, 1 AA pass, 128 AA samples, 996 s.

The conclusion from the images above is that it is more efficient to use enough light samples to reduce noise and a minimum of AA samples for anti-aliasing, rather than just using more AA samples to reduce noise. A great deal of the anti-aliasing work as well as much of the noise removal could be performed in subsequent AA passes with adaptive sampling, as explained in the next section.

Adaptive sampling.

The Anti-aliasing Threshold value is used to compare the color of adjacent pixels. If their difference in luminance is higher than the limit defined by AA Threshold, additional samples are taken in the second and subsequent AA passes until the discrepancy falls within limits. Sampling happens only in those problematic off-limit areas, without resampling the whole image. This technique is called adaptive sampling. The number of additional samples taken in problematic areas is defined by AA inc. samples.
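The adaptive test can be sketched as follows (illustrative Python; the Rec. 709 luminance weights are an assumption for the example, and YafaRay's exact comparison may differ):

```python
def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 weights (assumed)

def needs_resampling(pixel, neighbour, aa_threshold=0.05):
    """Flag a pixel for extra samples in the next AA pass."""
    return abs(luminance(pixel) - luminance(neighbour)) > aa_threshold

print(needs_resampling((0.5, 0.5, 0.5), (0.52, 0.52, 0.52)))  # False: smooth area
print(needs_resampling((0.5, 0.5, 0.5), (0.9, 0.9, 0.9)))     # True: edge/noise
```

With a threshold of 0, any non-zero difference triggers the test, so every pixel is flagged and the whole image is resampled in every pass, matching the behaviour described below.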

Samples in additional passes will reduce aliasing and noise.

If AA Threshold decreases, the number of areas needing additional sampling will likely increase, as well as the render time. Areas to be resampled decrease as new passes are performed and as more areas get within the threshold. A good AA Threshold will focus anti-aliasing and noise reduction work only on problematic areas, without resampling the whole image in every pass.

If AA Threshold is 0, the whole image is resampled in every pass.

This render is using default 0.05 AA Threshold. The white points tell us what areas are being resampled in every pass.

This render is using 0.01 AA Threshold. Not only more geometry is being resampled, but most of the arealight soft shadows as well, and some of the indirect lighting noise on the ceiling.

Filter type.

There are three reconstruction filters in YafaRay. These filters determine how multiple samples near a pixel are blended together. These filters are:

  • Box. Weights all samples equally. It is fast, but isn't efficient at dealing with certain types of noise and produces post-aliasing.
  • Gaussian. Gives good results. Tends to cause a slight blurring of the final image, but this blurring can help to mask remaining aliasing in the image.
  • Mitchell-Netravali. Good results as well. Improves sharpness of the edges.
Box filter; notice the post-aliasing artifacts.
Gaussian filter.
Mitchell-Netravali filter.
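The three filters can be sketched as 1-D weighting functions (an illustration of the general filter shapes, not YafaRay's exact implementation; the Mitchell-Netravali parameters B = C = 1/3 are the commonly used values, assumed here):

```python
import math

def box(x, width=1.0):
    """Equal weight for every sample inside the pixel."""
    return 1.0 if abs(x) <= width / 2 else 0.0

def gaussian(x, alpha=2.0, width=2.0):
    """Gaussian falloff, shifted so the weight reaches 0 at the filter edge."""
    return max(0.0, math.exp(-alpha * x * x) - math.exp(-alpha * (width / 2) ** 2))

def mitchell(x, B=1/3, C=1/3):
    """Mitchell-Netravali cubic; the negative lobe is what sharpens edges."""
    x = abs(x)
    if x < 1:
        return ((12 - 9*B - 6*C) * x**3 + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6
    if x < 2:
        return ((-B - 6*C) * x**3 + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x + (8*B + 24*C)) / 6
    return 0.0

# Near the centre all filters weight heavily; Mitchell dips slightly below
# zero around |x| ~ 1.2, which is the edge-sharpening negative lobe:
print(box(0.0), round(gaussian(0.0), 4), round(mitchell(0.0), 4))
print(mitchell(1.2))   # a small negative value
```

The box filter's hard cutoff is what causes post-aliasing, the Gaussian's smooth falloff is what causes its slight blur, and Mitchell's negative lobe is what improves edge sharpness.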

Filter Size.

Users can change the size of the reconstruction filter. Image samples are averaged inside a box around each pixel. How these samples contribute to the final pixel value depends on the filter used (see the previous article).

Lowering AA Pixelwidth means that fewer samples are averaged, which makes renders sharper. If AA Pixelwidth increases, more image samples are averaged, and the result will be blurrier.

However, reducing pixel width means that the reconstruction filter is less effective against aliasing and high-frequency noise could leak into the image. These issues can be solved by increasing the amount of anti-aliasing samples.

2.5 AA Pixelwidth, 49s.
Default 1.5 AA Pixelwidth, 54s.
1.1 AA Pixelwidth, 82s.

Anti-aliasing strategy.

Based on the concepts explained in the previous anti-aliasing sections, a good sampling strategy is built on the following points:

  • The bulk of the noise removal, particularly low-contrast noise in dark areas, is performed in the first AA pass, accumulating samples in light sources (area lights, background) and in indirect lighting algorithms (path tracing, final gathering), while using only a few AA samples (1-3) so that a basic anti-aliasing is performed.
  • The bulk of the anti-aliasing is performed with adaptive sampling, using AA Threshold to determine the areas to sample. Adaptive sampling can also help to remove high-contrast noise in area light shadows and areas with indirect lighting. Adaptive sampling is more efficient spread over several AA passes: as some of the resampled areas fall within the AA threshold, each subsequent AA pass is usually faster.
  • The idea is to trade the total number of samples in the first pass (light sampling x AA samples) for adaptive sampling.
  • If the AA Threshold is low (less than 0.005), the best technique is using one AA inc. sample and lots of AA passes. If a pixel needs 6 samples to look good and we use 5 AA inc. samples per pass, we will spend at least 10 samples over two passes on that pixel. However, if we use only 1 AA inc. sample, that pixel will need just 6 samples, completed by the sixth AA pass.
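The arithmetic in the last point can be sketched as:

```python
import math

def samples_spent(needed, inc_samples):
    """Passes and total samples spent on a pixel that needs `needed` samples,
    when each adaptive pass adds `inc_samples` more."""
    passes = math.ceil(needed / inc_samples)
    return passes, passes * inc_samples

print(samples_spent(6, 5))   # (2, 10): two passes, 10 samples for a 6-sample pixel
print(samples_spent(6, 1))   # (6, 6): six passes, exactly 6 samples
```

With a coarse increment, the last pass always overshoots the pixel's real needs; with an increment of 1, each pixel stops exactly when it falls within the threshold, at the cost of more passes.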

First image:

  • 32 Final Gather samples.
  • 3 AA passes.
  • 5 AA samples.
  • 5 AA inc. samples.
  • 0.05 AA Threshold.
  • Time 178 s.

Second image:

  • 16 Final Gather samples.
  • 5 AA passes.
  • 1 AA sample.
  • 2 AA inc. samples.
  • 0.007 AA Threshold.
  • Time 125 s.