Lighting methods

Illumination Types.

Objects are lit by several types of light: light cast from lamps or natural sources (sun, skylight), and scattered light bouncing off nearby objects. In addition, refractive materials such as water and glass can bend the path of light. Other lighting effects are color bleeding between adjacent surfaces and light scattering in participating media such as fog, skin or wax. The interaction and sum of all these effects produces a global result called global illumination.

Raytracing algorithms try to reproduce global illumination (GI) with techniques that follow the natural behaviour of light, such as casting samples of rays that bounce around the scene according to realistic rules. Those samples are computed so that a continuous result can be extrapolated from a limited number of samples. There is more than one way to achieve global illumination effects, each with its own share of advantages and shortcomings. There is no such thing as a perfect GI algorithm, and new methods are created to solve the drawbacks of existing techniques.

YafaRay has a traditional raytracing method, called Direct Lighting, which only calculates light cast from light sources plus recursive raytracing. Additionally, YafaRay has four different Global Illumination models: Path tracing, Photon Mapping, Stochastic Progressive Photon Mapping (SPPM) and Bidirectional Path tracing. Below you have a comparison between Direct Lighting (left) and a Global Illumination model, in this case Photon Mapping (right). Notice the differences between them:

A certain amount of borrowing is possible between different lighting methods, since they are more or less based on the same 'ray shooting' concept. It is therefore possible to implement similar averaging techniques in different GI methods, or to mix features from different GI models. This is the case of the caustic photon map option in Direct Lighting and Path tracing, for instance.

Lighting Methods Overview.

Direct Lighting

Path Tracing

How it works:

Direct Lighting only performs recursive raytracing. Primary rays (antialiasing samples) are shot from the camera [1] and intersect with the scene's diffuse surfaces.

Rays are then generated towards light sources to calculate shadows. They're called shadow rays [2].

Primary rays can be transmitted (ray depth) through reflective and refractive surfaces [3].
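The steps above can be sketched in Python. This is a minimal illustration only; `scene`, `lights`, `hit` and all of their members are hypothetical stand-ins, not YafaRay's actual internals:

```python
def shade_direct(scene, lights, hit, depth=0, max_depth=2):
    """Sketch of Direct Lighting: shadow rays towards each light [2],
    plus recursive raytracing through specular surfaces only [3]."""
    color = 0.0
    for light in lights:                                  # [2] shadow rays
        if not scene.occluded(hit.point, light.position):
            color += light.intensity * hit.diffuse
    if hit.specular and depth < max_depth:                # [3] ray depth
        next_hit = scene.reflect(hit)                     # mirror/refraction
        if next_hit is not None:
            color += hit.specular * shade_direct(scene, lights, next_hit,
                                                 depth + 1, max_depth)
    return color
```

Note that no rays are ever continued from diffuse surfaces, which is why the indirect lighting component is missing in this method.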


Pros:
  • Very fast in scenes where the indirect lighting component is small or not needed.
  • Can render independent caustic photon maps and ambient occlusion.
  • HDR backgrounds (IBL, Sunsky) can work as light sources, simulating indirect lighting soft shadows.


Cons:
  • No indirect lighting.
  • Can't render caustics (you'll need to enable the caustic photon map).

How it works:

Primary rays are shot from the camera (antialiasing samples) and they intersect with the scene [1]. These rays compute direct lighting and recursive raytracing.

Each intersection also averages path samples [2], which describe a random path bouncing off diffuse components a number of times (path depth). At each bounce, shadow rays are calculated [3].


Pros:
  • It performs Global Illumination.
  • Fast in outdoor scenes and with Image Based Lighting, because paths find the background easily.
  • Unbiased, delivers correct results.
  • Soft indirect lighting with enough sampling.


Cons:
  • Variance shows up as noise.
  • Lots of rays are needed for caustics.
  • Inefficient in indoor scenes, where light sources are hidden, very bright or too small.
  • Doesn't work well with omni lights (spot, point) and mirror surfaces; better to use area light types and glossy surfaces.

Photon mapping

Bidirectional path tracing

How it works:

Rays (photons) are cast from light sources [1] and bounce around, regardless of the camera position. A photon map is created, based on photon hits.

Then a standard raytracing pass is performed [2] which, besides direct lighting and recursive raytracing, also averages secondary final gather samples [3]. These FG samples average photon hits within a circular area [4].


Pros:
  • It performs Global Illumination.
  • Fast, efficient GI estimation in indoor scenes.
  • Best quality/speed ratio.
  • Fast caustics.


Cons:
  • Not well suited for outdoor scenes.
  • Requires photon map tweaking.
  • Artifacts in animations (shadow flickering).

How it works:

Ray paths are constructed from the camera ray intersections [1] and from light sources [2]. Bounces are connected to each other with visibility rays [3].


Pros:
  • It performs Global Illumination.
  • Combines advantages of path tracing and photon mapping.
  • More efficient than path tracing for indoor scenes and for caustic effects.
  • Unbiased, delivers correct results.
  • Good for scenes with a lot of indirect lighting.


Cons:
  • Variance shows up as noise.
  • Not well suited for outdoor scenes.
  • Inefficient in indoor scenes when light sources are too small, very bright or hidden.

Use Cases.

GI algorithms are born with shortcomings and compromises in order to solve specific raytracing problems. There is no such thing as a universal lighting algorithm for all cases. In fact, the only method that can be used in all cases is Direct Lighting. In the next paragraphs we are going to explain use cases in YafaRay and which lighting methods are best for them.

Open scenes:

Open scenes means that the scene is not enclosed by a mesh and the background can act as a main light source. Open scenes can also be used to simulate studio lighting and indoor scenes, by using a suitable scene composition and/or an HDR indoor image as a background. The recommended methods for open scenes are:

  • Direct lighting. It delivers fast results and you can use area light types (area, sphere, mesh) to achieve soft shadows.
  • Direct lighting + AO + caustic photons. Fast. If you have refractive and/or reflective surfaces in the scene, you can enable caustic photons, which adds realism. Use this 'combo' for fluid animations as well.
  • Direct lighting + IBL with an HDRI texture. HDR images can work as light sources in Direct Lighting. Fast for outdoors and simulated indoors; soft indirect lighting can be simulated with background lighting. HDRI backgrounds can shoot caustic photons as well.
  • Direct lighting + Sunsky. Components of the Sunsky model (Sun and Skylight) work as light sources. Fast for outdoors; soft shadows like those of indirect lighting can be simulated with background lighting. Sunsky can shoot caustic photons as well.
  • Path tracing + background (HDRI, Sunsky, Gradient, Single Color). Fast, because many path tracing rays can find a light source (the background) after the first bounces. Use it if you need color bleeding or a precise simulation of the indirect lighting component. Adaptive sampling helps reduce path tracing Monte Carlo noise.

Comparison between Direct Lighting and Path tracing used in an outdoor scene. Notice the indirect light calculations and soft color bleeding effects in Path tracing.

Direct lighting + Ambient Occlusion, render time 42 s.
Direct lighting + Sunsky Skylight, render time 518 s.
Path tracing + Sunsky Skylight, render time 1205 s.

Enclosed scenes:

Enclosed scenes means that the scene is inside a mesh (thickness for walls is recommended), which helps with ray bouncing. This enclosing mesh can have windows, to simulate a room in a house. The recommended methods for enclosed scenes are:

  • Direct lighting. Use it for traditional lighting setups and studio lighting, if you don't need the indirect lighting component. You can use area light types (area, sphere, mesh) to achieve soft shadows. To simulate indirect lighting, many times photon mapping will be faster than a complex rig of fill lights.
  • Direct lighting + AO + caustic photons. Fast. If you have refractive and/or reflective surfaces in the scene, enable caustic photons, which adds realism. Use this 'combo' for fluid animations as well.
  • Photon mapping + FG. A good fast GI algorithm for all indoor cases, it isn't afraid of complex lighting cases.
  • Bidirectional path tracing. It works well if there is an optimal situation of light sources. For instance, if light sources can be seen from the camera or from the first bounce of camera rays. Use it in scenes with lot of indirect lighting and/or if you are looking for a correct simulation of the Global Illumination.
  • Path tracing. The least efficient method for indoors. Use it if there is an optimal situation of light sources (see Bidirectional) and an even distribution of exposure, for instance scenes with big windows or big area light sources. It is better to use area lights (area, sphere, meshlight) rather than omni lights (spot, point) in path tracing.
Comparison between different lighting methods, rendered on a Pentium IV. Notice the lack of color bleeding in Direct Lighting. Photon mapping is the fastest GI method, and Bidirectional the slowest one. Notice how the two unbiased methods (Path tracing and Bidirectional) struggle in indirect lighting areas (noise). Bidirectional produces the brightest indirect lighting result and more color bleeding than the other GI methods.
Direct lighting + AO + Cphotons. 0:1:35
Photon mapping. 0:9:10
Path tracing + Cphotons. 0:15:20
Bidirectional. 1:32:18


Animations:

  • Direct lighting + caustic photons. Fast for indoor scenes, even faster with omni lights (point, spot). Caustic photon maps are compatible with frame rendering (no caustics flickering). HDR backgrounds (IBL, Sunsky) are optional, with caustic photons.
  • Path tracing + background lighting. With adaptive sampling, it can produce fast results in open scenes.
  • Photon mapping + FG. If indirect lighting is needed and the scene is enclosed, PM+FG is the fastest Global Illumination algorithm for animations. It's compatible with both frame-based animation and portion-based distributed rendering. You will need a good, well-balanced photon map to avoid shadow flickering, though.


Ambient Occlusion

Ambient Occlusion is a shading method that takes into account the attenuation of light due to object occlusion. It produces soft shadows, though it cannot be considered a global illumination algorithm. Ambient occlusion is most often calculated by casting rays in every direction from a point on a surface and averaging the results. Rays that reach the background or travel a certain distance [1] increase the brightness of the surface, whereas rays that hit any nearby object [2] contribute little or no illumination. As a result, points surrounded by a large amount of geometry are rendered darker, whereas points with little geometry on the visible hemisphere appear brighter.

Ambient occlusion is often used as a fast approximation of the soft indirect lighting shadows produced by Global Illumination models. It is also used as an independent pass to add contrast in render post-processing work, usually with Clay render enabled.

AO settings are:
  • AO Samples: The number of rays used to detect whether an object is occluded. Higher numbers of samples give smoother and more accurate results, at the expense of slower render times.
  • AO Distance: The length of the occlusion rays. The longer this distance, the greater impact that far away geometry will have on the occlusion effect. A high Dist value also means that the renderer has to search a greater area for geometry that occludes, so render time can be optimized by making this distance as short as possible, for the visual effect that you want.
  • AO Color: Color for ambient occlusion rays. Use this setting to control AO power.
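The occlusion estimate described above can be sketched as follows. `occluded` is a hypothetical visibility test supplied by the caller, and the hemisphere is built around the +Z axis for brevity; this is illustrative Python, not YafaRay code:

```python
import math
import random

def ambient_occlusion(point, occluded, samples=64, distance=1.0):
    """Monte Carlo ambient occlusion: average hemisphere rays from a
    surface point. occluded(origin, direction, max_dist) is a
    user-supplied visibility test."""
    lit = 0
    for _ in range(samples):            # AO Samples: more rays, smoother result
        z = random.random()             # random direction on the +Z hemisphere
        phi = 2.0 * math.pi * random.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        direction = (r * math.cos(phi), r * math.sin(phi), z)
        # AO Distance: only geometry closer than `distance` can occlude
        if not occluded(point, direction, distance):
            lit += 1                    # ray escaped, so it contributes light
    return lit / samples
```

The returned value lies between 0 (fully occluded) and 1 (fully open), and would then be modulated by the AO Color setting.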
Related articles: Use cases

Caustic photon map

Caustics are concentrations of light produced by refractive media (glass, water) and by specular components (mirror, glossy and fresnel) as well. It is possible to produce independent caustic photon maps in YafaRay. This option is available in lighting methods that either can't render caustics (Direct Lighting) or aren't efficient at this kind of task (Path tracing). Caustics add realism and are relatively cheap to compute. The caustic photon map is visualized and processed directly by the camera rays, which is why the number of photons in the caustic photon map must be high: a high resolution map is needed.

Mix and Radius are two limit parameters used to blur the caustic map, set in scene units. Photon hits [4] will be averaged within this circular area [2]. The center of each circular area is defined by camera ray hits [1][3]. If there isn't enough photon density within each circular area, low frequency noise will appear. Those circular areas will use whichever limit is reached first, either Mix or Radius. Many times incremental changes in one limit won't have any blur effect, since the other threshold has already been reached. The main rule of photon mapping is applicable here: the more photons in the photon map, the smaller the radius can be for more precision, while always keeping a good minimum of photons to average (Mix value).
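A sketch of how the two limits interact when blurring the caustic map. The names and the linear nearest-photon search are illustrative simplifications (a real renderer uses a kd-tree), not YafaRay's implementation:

```python
import math

def dist2(a, b):
    # Squared distance between two points of any dimension.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def gather_caustic(photon_map, center, radius, mix):
    """Average the nearest photon hits [4] around a camera-ray hit
    [1][3], stopping at whichever limit is reached first: the search
    Radius or the Mix photon count. photon_map is a list of
    (position, flux) pairs."""
    # Candidate photons sorted by distance to the estimation center [2].
    nearby = sorted(photon_map, key=lambda ph: dist2(ph[0], center))
    used, flux = [], 0.0
    for position, f in nearby:
        if len(used) >= mix or dist2(position, center) > radius ** 2:
            break                      # first threshold reached wins
        used.append(position)
        flux += f
    # Density estimate: total flux over the disc that was actually searched.
    r2 = max((dist2(p, center) for p in used), default=radius ** 2)
    return flux / (math.pi * max(r2, 1e-12))
```

With a dense map, the Mix count is usually reached well inside the Radius, which is why enlarging Radius alone often has no visible effect.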

Below are two examples of caustics:

Example of refractive caustics, using as photons source a spot light. Scene by MarcoA
Example of reflective caustics, using a caustic photon map in pathtracing. The HDRI background is the caustic photons source. Scene by Sevontheweb.

Caustic photon maps work better and more efficiently with concentrated light beams directed towards the 'caustic' surface, usually using a spot light or a photon lamp. Caustic settings are:

  • Photons: Number of photons to shoot. Increases photon map density and render times.
  • Caustic Depth: number of reflective or refractive events for caustic photons.
  • Caustic Mix: number of photons to mix (blur). The more photons to search for, the longer the render time.
  • Caustic Radius: area within which photons are mixed (blurred). The larger the area to search, the longer the render time.

Path tracing.

Path tracing is an unbiased GI method in which each ray is recursively traced from the camera ray hit [1] along a path [2], until it reaches either a number of bounces set by the user or a light source, usually a background. At each bounce, shadow rays are shot to sample light sources [3]. The light contribution along the path is calculated back to the camera ray hit, taking into account diffuse surface properties. Many samples need to be taken and interpolated for each camera ray to get a smooth result. A light source can be either a lamp, the scene background or both. Scenes with relatively small light sources and a high contrast between light sources and their surrounding areas will need more samples to reduce noise. The smaller and less accessible the light sources are, the more noise will appear. Path tracing is a GI solution better suited for outdoor scenes and for indoor scenes with big light sources (area lights, big windows, interior HDRI) and an even distribution of light.
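The recursive process just described can be sketched as follows, with hypothetical `scene.sample_light` and `scene.bounce` helpers standing in for the real sampling machinery (illustrative Python, not YafaRay code):

```python
def trace_path(scene, point, depth, max_depth):
    """One path sample: a shadow ray to sample the lights at each
    bounce [3], then a random diffuse bounce along the path [2]."""
    if depth > max_depth:
        return 0.0
    radiance = scene.sample_light(point)          # [3] shadow-ray contribution
    next_point, albedo = scene.bounce(point)      # random diffuse bounce
    if next_point is None:                        # path escaped the scene
        return radiance + scene.background        # ...and found the background
    return radiance + albedo * trace_path(scene, next_point, depth + 1, max_depth)

def render_sample(scene, point, samples=64, max_depth=3):
    # Many path samples are averaged per camera-ray hit [1] to tame variance.
    return sum(trace_path(scene, point, 1, max_depth)
               for _ in range(samples)) / samples
```

Paths that fail to reach any light contribute nothing, which is the source of the noise discussed below.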

Path tracing caustic component

Path tracing caustic paths tend to be very noisy, and a very large number of samples is needed to get a smooth result. In YafaRay there are four alternative methods to render the caustic component when path tracing is used:

  • Path+Photon: a mix of a caustic photon map and path traced caustic rays is used.
  • Photon: a fast photon map is used to render caustics. Path traced caustic rays are not rendered.
  • Path: path traced caustic rays are rendered.
  • None: the caustic component is not rendered.

This is an example of how the methods for the caustic component work in path tracing. In the first image (upper left), Path is used to get caustics, which are very noisy when a low number of samples is used (16). In the second render, 512 path tracing samples are used to improve the path traced caustics, but it takes much more render time (38 minutes). In the third example, Photon is used to produce the caustic component and the render time is the lowest of them all (Cm stands for Caustic method):

Area light types (sphere and area) with the Make Light visible option enabled produce caustics in the Path caustic mode. More information about the caustic photon map settings can be found in the previous section (they are the same).

Path tracing settings

The other components of the global illumination model are rendered as usual. Other pathtracing settings are:

  • Depth: defines the number of ray bounces used to find the light sources. A higher ray depth produces brighter renders. In scenes with big light sources, like open scenes using background lighting, a high ray depth may not be necessary. This setting controls the depth of path tracing caustic rays as well.
  • Samples: This setting defines the number of samples to take per eye ray hit. The more samples, the better the render quality and the less noise, but the longer the render time as well. The relation between noise reduction and sampling isn't linear: to halve the noise, it is necessary to use four times as many samples. The total amount of Path tracing sampling depends on anti-aliasing settings as well, as explained in this section.
  • No recursion: only path tracing is performed without recursive raytracing.
It is good practice to increase and decrease your 'samples' setting in base-2 steps (2-4-8-16-32-64-128... etc.).
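The 'four times the samples to halve the noise' rule can be checked with a toy Monte Carlo estimator, since the standard deviation of an average falls as one over the square root of the sample count (illustrative Python, unrelated to YafaRay's code):

```python
import random
import statistics

def pixel_estimate(samples):
    # Toy Monte Carlo estimator standing in for one path-traced pixel.
    return statistics.mean(random.random() for _ in range(samples))

def noise(samples, trials=2000):
    """Standard deviation of the estimate across many trials: this is
    the 'noise' seen in the render, and it falls as 1/sqrt(samples)."""
    random.seed(1)                      # fixed seed, repeatable measurement
    return statistics.stdev(pixel_estimate(samples) for _ in range(trials))

# Quadrupling the samples roughly halves the noise (ratio close to 2):
ratio = noise(16) / noise(64)
```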

Photon mapping

In this link you can find an exhaustive article about photon mapping.

Photon mapping was developed as an efficient alternative to path tracing, since certain effects are more efficiently simulated from lights (caustics, indirect lighting) while some others effects are more efficiently sampled from the camera (mirror reflection, direct lighting).

Photon mapping is a two pass technique:
  • The first pass is photon tracing, which consists of building the photon map, stored in RAM, by tracing photons from lights. It calculates indirect lighting and caustics effects.
  • The second pass consists of raytracing the scene (camera rays) using the information in the photon map.
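The first pass can be sketched as follows; `light.emit`, `scene.hit` and `hit.scatter` are hypothetical stand-ins for the real tracing machinery, not YafaRay's internals:

```python
import random

def trace_photons(lights, scene, count, max_depth):
    """First pass sketch: shoot photons from the lights and store every
    diffuse hit in the map (one emitted photon can be stored several
    times along its path)."""
    photon_map = []
    for light in lights:
        for _ in range(count):
            photon = light.emit()                 # carries a flux value
            for _ in range(max_depth):
                hit = scene.hit(photon)
                if hit is None:                   # photon left the scene
                    break
                photon_map.append((hit.position, photon.flux))
                if random.random() > hit.albedo:  # Russian-roulette absorption
                    break
                photon = hit.scatter(photon)      # continue with a diffuse bounce
    return photon_map                             # used by the second pass
```

The stored (position, flux) pairs are what the second pass interpolates around each camera-ray hit.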

Photon mapping is biased, since the averaged values might not be correct within each radius, but it is consistent: with more photons and a smaller radius, it converges to a correct solution. The photon map produces low frequency noise (big patches), in contrast with path tracing, which produces high frequency noise (pixel-level).


Photons propagate flux. They are emitted by light sources and stored in the photon map when they hit a surface; in fact, each emitted photon can be stored several times. The photon map represents the incoming illumination, also called irradiance (incoming radiance). Two photon maps are actually produced:

  • A low density photon map for diffuse surfaces called Global photon map.
  • An independent high density map for caustic effects.

In general, the more photons are used, the more accurate the lighting estimate is, but increasing the number of photons also increases the time needed to build the photon map. If the number of photons is too low, the irradiance estimate becomes blurry at the edges of sharp features in the illumination, for instance the fast transition from light to shadow in corners.

However, in simple scenes with no caustic effects, it is possible to produce relatively good results with very low density maps of 1000-2000 photons. In this case the photon map is rendered very quickly. Below is a comparison between two photon mapping cases with different photon counts; notice the bad estimation below the horizontal prism and below the sphere in the render on the left:

2000 photons, Depth=2, 213 seconds. 1.000.000 photons, Depth=10, 290 seconds.


Depth controls the number of consecutive bounces for both caustic and diffuse photons. However, it has a different meaning depending on the photon type:

  • Depth for Caustic photons: number of consecutive refractive and/or reflective events for caustic photons, until they reach a diffuse surface.
  • Depth for Diffuse photons: number of bounces on diffuse surfaces. More depth will produce more color bleeding and a denser photon map. Beyond a certain point, increasing depth will have a limited effect on map density or color bleeding, since most photons will already have been absorbed by diffuse surfaces.

Comparison between different values of Depth, with the same number of photons shot. Photon hits means the number of photons recorded in the photon map, a value shown in the console. Notice the relatively small difference in hits between depth=10 and depth=50, with almost equal render times in all cases. Scene by Kronos:

Photon Depth=3, photon hits= 410.000
Photon depth=10, photon hits= 610.000
Photon depth=50, photon hits=650.000


The second pass is a traditional ray tracing pass performed by shooting rays from the camera. Based on a single photon, we cannot say how much light a region receives. This information is instead provided by the photon density. When a ray hits a point P on a surface, the illumination information of the neighboring photons is collected and interpolated at P. We don't need a photon for every polygon, but instead a few photons to estimate the incoming flux in the region around P. The photon density is higher in areas with strong incoming illumination.
A well chosen radius allows for good pruning of the search. Two limits are used to perform the search for neighboring photons, whichever is reached first:

  • Diff. Radius. The sphere of fixed radius improves the estimation slightly, but fails in scenes with high variation of density of photons. It gives bad estimates if there are too few photons and blurry estimates when there is a high density of photons.
  • Search. The user can specify a desired number of photons; it guarantees that there are K photons in the estimate. The more photons used, the smoother the estimation. If too many are used, the estimation tends to be blurry, while too few give a splotchy appearance (low frequency noise). Interpolating 50 to 150 photons is often a good choice.
  • Caustic Mix: Number of photons to mix, produces blur in the caustic photon map.

In general, Radius should be inversely proportional to the number of photons shot: the more photons, the smaller the diff. radius. However, a radius that is too small without an adequate photon density introduces noise (small patches). Radius settings are one of the main factors in render times: if the number of photons to look up and average increases, so does the render time.

Note: When using photon mapping in enclosed scenes such as house rooms, it is important that objects follow realistic modelling techniques, such as closed meshes with real thickness. Follow this advice for walls, floors, ceilings and furniture. In this way, estimation problems when searching for neighboring photons are avoided.

Final Gather

Final gather is a caching technique that improves and 'completes' photon mapping by gathering, after photon tracing, an approximation of the local irradiance using several illumination bounces. This information is used at render time for further interpolation, with the obvious advantage of requiring a less accurate, and therefore faster, but still physically correct photon map. Noise reduction in FG renders depends on FG samples and anti-aliasing settings.

  • FG bounces: in a precomputed phase, determines the number of bounces for Final Gather rays. The first bounce is the most important one; subsequent bounces have a more modest impact on the result.
  • FG samples: Final Gathering samples for interpolation, the more the better, but the longer it will take to render. The total amount of Final Gather sampling depends as well on anti-aliasing settings, as explained in this section.
  • Show map: Useful for tuning the photon map. When using it, you should look for results with a soft, uniform appearance of patches, without noticeable groups of darker patches regularly distributed across surfaces, which is a hint of low-frequency noise. When you have density problems there are two options: increasing the number of photons shot and the bounces, or increasing the radius settings. A good, well-balanced photon map is especially important for animations.

Bidirectional Pathtracing

Bidirectional path tracing constructs rays from the camera and from light sources, and connects their bounces with visibility rays to ensure they are mutually visible. It is more efficient than path tracing for caustics and indoor lighting, but variance shows up as noise here too.

Sampling of the bidirectional rays is driven by the anti-aliasing settings, which means that you'll need a large amount of anti-aliasing sampling. For instance, 16 AA passes x 16 AA samples means that 256 bidirectional samples are used. Another strategy is to use an extreme amount of AA passes x AA samples and stop the render process when the image is clean enough. The more AA sampling, the closer the convergence to the correct solution.

AA threshold must be 0, so all pixels are sampled in every pass. The proportion between AA passes and AA samples is irrelevant for noise removal in this case, since the whole image is resampled in every pass; only the multiplication result matters: the higher the number of samples, the less variance and the more noise reduction. However, having all samples in one pass will likely be faster than having many passes with one sample each. Rendering a small problematic portion of the image with border rendering (Shift+B) will give you an idea of how the result converges and the amount of sampling needed for the whole render.
