SPPM explained

Note: The whole tutorial is made with "Initial Radius Estimate" disabled.

Introduction to SPPM

SPPM (Stochastic Progressive Photon Mapping) is the latest development among algorithms based on photon mapping. The idea is to shoot photons over several camera passes instead of a single one. This way we can accumulate photon statistics inside averaging circumferences for a more precise GI calculation, without hitting RAM limits. However, there is a key difference between standard photon mapping and SPPM, at least regarding "diffuse" photon mapping: there is no interpolating algorithm such as Final Gather or Irradiance Caching to smooth out the big patches the diffuse photon mapping algorithm usually produces.

Diagram of how SPPM works; notice how the stochastic properties are obtained by means of several subsequent render passes. Hachisuka, Jensen et al.

This is probably the most important source of issues for users of YafaRay, who find SPPM unpredictable and hard to tune. The issue here is that we are not used to working without Final Gather's powerful interpolating capabilities, as FG can work perfectly well even with low-quality, low-coherence photon maps. It is worth remembering some general facts about SPPM:

  • SPPM is a derivative of the standard photon mapping algorithm, so many of the rules of the latter apply to SPPM.
  • Diffuse photon mapping accumulates indirect lighting and color bleeding on material diffuse components. It is a non-focused, low-resolution map compared with other photon mapping algorithms such as caustics and participating media (SSS). It tries to cover the whole scene with photon patches and produces low-frequency noise (big patches).
  • Photon mapping, like other raytracing algorithms, suffers from geometric dilution of rays and diminishing returns at patch level. After some initial passes, the number of photons travelling the 3D scene should be multiplied in order to reach every patch with enough lighting information, so that low-frequency noise is significantly reduced.
  • Radius is the key setting, and it is related to the scene units. The smaller the radius, the more accurate the result, but the more photon hits are needed in each patch.
  • Diffuse photon mapping works best in enclosed or semi-enclosed scenes.
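The radius/photon trade-off in the points above can be made concrete: halving the search radius quarters the patch area, so roughly four times as many photon hits are needed to keep the same statistical quality per patch. A minimal sketch, with illustrative numbers that are not taken from YafaRay internals:

```python
import math

def hits_in_patch(total_hits_on_surface, surface_area, radius):
    """Expected photon hits inside one averaging circle of the given
    radius, assuming hits are spread uniformly over the surface."""
    patch_area = math.pi * radius ** 2
    return total_hits_on_surface * patch_area / surface_area

# 1,000,000 photon hits landing on a 25 m^2 wall (illustrative numbers)
for r in (0.10, 0.05, 0.01):  # search radius in scene units (metres)
    print(f"radius {r:.2f} m -> ~{hits_in_patch(1_000_000, 25.0, r):.0f} hits/patch")
```

Dropping from a 10 cm to a 1 cm radius divides the hits available per patch by a hundred, which is why a small Search radius demands much denser photon maps.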

Looking for clues

The previous points should already form a picture of what is really going on with SPPM. Without an interpolating algorithm, SPPM needs very coherent and dense photon maps in each pass in order to converge to a good solution in a reasonable number of passes. To produce those coherent passes, we will use the same techniques as in the standard PM algorithm. Let's take a look:

This is the first pass of a scene with the same SPPM settings but a different lighting configuration, so the image on the right wastes fewer photons. SPPM will have a hard time refining the configuration on the left, even with thousands of passes.

In the case above, to reduce the amount of wasted photons, an area light was used to shoot photons in front of the windows while background photons were disabled. Also, the sun light was changed into a directional light. You can take a look at several techniques to reduce the amount of wasted photons here, under the "Optimising the flow of photons" section.

Diagram of the technique used to reduce the amount of wasted photons.

In the first pass we should already look for clues that will tell us whether there will be a good progression or not. One of those clues is, for instance, black incoherent patches, which are usually a no-go for an algorithm like SPPM. Also, high levels of low-frequency noise mean SPPM does not have enough photon hits to work with. In general, the bulk of the refinement will be done in the first 4-8 passes, so this is the time window we should use to judge whether convergence is going to be good or not.

SPPM refinement

Any raytracing algorithm with stochastic properties is affected by diminishing returns in a 3D computation. Geometric dilution is the main factor in these diminishing returns. Geometric dilution not only applies to photon emission but also locally to every photon bounce, so several million photons shot from one light source of our 3D scene can easily turn into just a few photons to work with in some averaging circumferences. Also, photon mapping uses a Monte Carlo technique called Russian roulette to decide whether a photon is killed or keeps bouncing, which also reduces signal and increases low-frequency noise.
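The Russian roulette step mentioned above can be sketched as follows. Tying the survival probability to surface reflectance is the textbook choice; whether YafaRay uses exactly this heuristic is an assumption:

```python
import random

def russian_roulette_bounces(reflectance, rng, max_bounces=100):
    """Count how many bounces a photon survives when, at each bounce,
    it is killed with probability (1 - reflectance) and otherwise
    continues (the estimator stays unbiased in expectation, but the
    photon population shrinks geometrically with bounce depth)."""
    bounces = 0
    while bounces < max_bounces and rng.random() < reflectance:
        bounces += 1
    return bounces

rng = random.Random(42)
samples = [russian_roulette_bounces(0.6, rng) for _ in range(100_000)]
# With survival probability p, the mean bounce count tends to p / (1 - p) = 1.5
print(sum(samples) / len(samples))
```

This is why deep indirect lighting receives far fewer photons than first-bounce lighting: the population of surviving photons thins out at every bounce, on top of the geometric dilution.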

Users notice how the first passes solve the bulk of the problem, but then refining the image to a noise-free result can take an insane amount of passes and render time. In an ideal situation, we would multiply photon emission by a fixed rate in each subsequent pass in order to get some kind of linear reduction of noise, but then we would soon hit our RAM limits, as the photon map is built in memory. So after the first passes are done, we are in a sort of all-in situation where, in order to keep up with the ideal sampling curve, we need very high photon settings along a high number of passes. Basically, I shoot as many photons as RAM allows in each pass.
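The diminishing returns can be quantified with the usual Monte Carlo rule of thumb: noise falls roughly as one over the square root of the accumulated samples, so each halving of the remaining noise costs four times the total photon budget. A rough sketch of that relation:

```python
import math

def passes_for_noise(target_ratio):
    """Number of equal-sized passes needed to cut Monte Carlo noise to
    target_ratio of the single-pass level, using noise ~ 1/sqrt(N)."""
    return math.ceil(1.0 / target_ratio ** 2)

for ratio in (0.5, 0.25, 0.1):
    print(f"noise down to {ratio:.0%} of pass 1: {passes_for_noise(ratio)} passes")
# 0.5 -> 4 passes, 0.25 -> 16 passes, 0.1 -> 100 passes
```

This is the "all-in" curve in numbers: getting from a quarter of the initial noise to a tenth multiplies the required work by more than six.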

The scene above started to show noise free results in about 128 passes, with the following settings:


The scene is modelled so that 1 meter = 1 Blender unit, so a Search radius = 0.01 means a patch precision of 1 cm in the scene, which is very fine. This is a key concept: Search radius is related to our scene units. The number of photons shot by all sources is quite high (20 M in each pass), and Search count is so high (10k) that a lot of photons can accumulate in each patch before this limit is reached. In 128 passes the result is a very clean image; I believe our path tracing algorithm would have a hard time reaching this level of noise in the same time. Also, the result is so consistent that it is probably much more compatible with animation work than standard photon mapping plus Final Gather. Quality-wise, SPPM is a very interesting solution.

I have not used Radius factor because Search radius is already very low. However, we could start with a bigger radius and use a factor to reach the desired precision in the estimated number of passes. For instance, with a Search radius = 0.15, a Radius factor = 0.98 and 128 passes we would have a final radius of 0.15 × 0.98^128 ≈ 0.011, which is our precision target. Be careful not to decrease Search radius at such a rate that it cripples the patches' capability to accumulate photon statistics, since that would reduce the capability to remove low-frequency noise as well.
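The radius schedule above is easy to check numerically, and the same formula tells us how many passes a given Radius factor needs to reach a target precision. This assumes the simple per-pass multiplicative reduction described here, not the alpha-based shrinking of the original SPPM paper:

```python
import math

def final_radius(start_radius, factor, passes):
    """Search radius after `passes` passes when each pass multiplies it by `factor`."""
    return start_radius * factor ** passes

def passes_to_reach(start_radius, factor, target_radius):
    """Smallest number of passes until the radius drops to the target."""
    return math.ceil(math.log(target_radius / start_radius) / math.log(factor))

print(final_radius(0.15, 0.98, 128))       # ~0.0113, close to the 0.011 target in the text
print(passes_to_reach(0.15, 0.98, 0.011))  # 130 passes: 0.98^128 lands just above 0.011
```

Working the formula backwards like this is handy when planning a render: pick the starting radius and the target, and the factor fixes the number of passes you will need.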

Once again, border rendering over a significantly difficult area is an excellent tool to establish target settings. Remember that SPPM shoots one ray per pixel per pass, and iteration over several passes is what gives SPPM its stochastic properties, which turn low-frequency noise (big patches) into more desirable high-frequency noise (dots), so it is always better to iterate over dozens of passes than to reach results using only a few.
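The per-pass accumulation described here amounts to a running average over passes. A minimal sketch of how one pixel converges, with random values standing in for the per-pass radiance estimates (the noise model is illustrative, not YafaRay's):

```python
import random

def progressive_average(per_pass_estimates):
    """Running mean over passes: each pass contributes one noisy radiance
    estimate per pixel, and averaging them turns low-frequency blotches
    into steadily shrinking high-frequency noise."""
    total = 0.0
    averages = []
    for i, estimate in enumerate(per_pass_estimates, start=1):
        total += estimate
        averages.append(total / i)
    return averages

rng = random.Random(1)
true_radiance = 0.5
estimates = [true_radiance + rng.uniform(-0.3, 0.3) for _ in range(64)]
print(progressive_average(estimates)[-1])  # drifts toward 0.5 as passes accumulate
```

Each individual pass is a noisy one-sample estimate; only the average over dozens of passes approaches the true value, which is exactly why a handful of passes cannot match the quality of a long iteration.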