(work in progress)
In YafaRay, sampling settings are scattered across different panels, but they all work together to configure the ray tracing tree. Primary rays, which start from the camera (green), are configured with antialiasing samples. These rays bounce around the scene until they find a diffuse component in a material, which can happen on the first hit or after several bounces on reflective and refractive components (the raydepth parameter). Imagine a 1000x1000 pixel render configured to shoot 10 antialiasing samples per pixel; this means at least:
1000 x 1000 x 10 = 10 million antialiasing (AA) samples
Secondary rays start from diffuse components and are used to sample light sources with shadow rays, such as mesh lights, area lights, sun lights and IBL backgrounds. Secondary rays are also used by global illumination algorithms like Final Gather and path tracing. Imagine that the previous example is also set to shoot 32 samples for an IBL background and 16 samples for the Final Gather algorithm. This means that:
(10 million AA samples x 32 IBL samples) + (10 million AA samples x 16 Final Gather samples) = 480 million samples a.k.a. rays.
This count results from a fairly simple example. With a high resolution, path tracing bounces or several light sources in the scene, the final count can soon reach several billion rays per pass. Any increase in antialiasing samples or light sources has a multiplication effect on the final ray count.
For instance, adding another area light with 32 samples to the previous scene will at least double the number of secondary rays in the direct lighting integrator. If path tracing were used instead of Final Gather, it would produce a multiplication effect on that integrator too:
(10 million AA samples x 32 IBL samples) + (10 million AA samples x 16 Final Gather samples) + (10 million AA samples x 32 area light samples) = 800 million samples
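The arithmetic above can be reproduced with a short script. The resolution and per-source sample counts are the illustrative values from the example, not YafaRay defaults:

```python
# Back-of-the-envelope ray counts for the example scene above.
width, height = 1000, 1000
aa_samples = 10  # antialiasing samples per pixel

primary = width * height * aa_samples  # primary (camera) rays
print(f"primary rays: {primary:,}")    # prints "primary rays: 10,000,000"

# Each secondary-ray source multiplies the primary ray count.
secondary_per_source = {"IBL background": 32, "Final Gather": 16, "area light": 32}
secondary = sum(primary * s for s in secondary_per_source.values())
print(f"secondary rays: {secondary:,}")  # prints "secondary rays: 800,000,000"
```

Adding or removing an entry in the secondary sources shows the multiplication effect directly: every new light or GI algorithm adds another full multiple of the primary ray count.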
This is why Monte Carlo ray tracers have a hard time with the "too many light sources" scenario, which is on the other hand a fairly common case in real life, particularly in interior scenes.
YafaRay is flexible enough that users can shape this tree like a dense shrub with lots of primary branches, or more like a tree with sampling concentrated on secondary rays, depending on the scene characteristics. The main factor driving render engine performance is the calculation of ray intersections and the rendering equation derived from them, which can account for up to 80% of the computing time. Rays mean intersections. A render with soft tonal transitions is an easy signal to sample, hence fewer rays. A render with high-contrast tonal transitions usually requires more samples. In fact, a render can mix both cases at the same time:
Variation in surfaces (modeling, relief mapping, texturing), high contrast in materials and DOF need primary ray sampling. Hasselblad by Aaron Solo.
Monte Carlo noise from light sampling (area lights, IBL) and global illumination algorithms needs secondary ray sampling. Interior by Olivier Boscournu.
Monte Carlo noise appears because each set of randomly generated sample directions produces a different average. It is like a vibrating signal whose averages fall below and above an ideal continuous result. In a YafaRay scene, several sources of Monte Carlo noise can operate at the same time:
Monte Carlo noise produced by an IBL background (left) and by GI Final Gather (right). Both noises are combined in the final render below.
Reducing Monte Carlo noise is not an easy task, since it deals with sampling density over the hemisphere. For the same reason that a point light decays with the inverse square of distance, as the lit area expands quadratically, sampling from a point requires steeply growing sample counts to fill the hemisphere around it with enough density, particularly when something breaks the monotony of the incoming result and requires more sampling in that direction.
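This geometric dilution can be sketched with a few lines. A fixed number of rays leaving a point spreads over a spherical area that grows with the square of the distance, so the density of samples per unit area falls off as 1/r² (the ray count below is illustrative):

```python
import math

rays = 1024  # illustrative number of rays leaving a point

def density(radius):
    """Samples per unit area on a sphere of the given radius."""
    return rays / (4.0 * math.pi * radius ** 2)

# Doubling the distance quarters the sample density, just like
# the inverse-square falloff of a point light.
for r in (1.0, 2.0, 4.0):
    print(f"r = {r}: {density(r):.2f} rays per unit area")
```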
Geometric dilution of point-source radiation into three-dimensional space. Image by Borb, CC BY-SA 3.0.
Notice how in the third stage the letter A is sampled by only one ray, versus all the rays in the first stage. Geometric dilution happens in antialiasing rays but also in the secondary rays used for area light sampling, IBL backgrounds, path tracing, Final Gather and photon mapping. Besides, this is only a partial analogy of what really happens, since ray tracing consists of sampling a 2D signal (the rendering equation) across a 3D space, so geometric dilution is coupled with other common issues of signal sampling and reconstruction.
As a general rule in adaptive strategies, four times the previous amount of samples is needed to halve the Monte Carlo noise, which is in fact a geometric progression: to reduce the noise to one quarter, sixteen times the previous amount of samples is needed, and so on. In other words, in a progressive rendering strategy, the remaining noise needs more computing resources than the noise solved so far. Monte Carlo algorithms are affected by diminishing returns.
This is also why adaptive sampling and biased strategies, such as importance sampling and irradiance interpolation, are so useful in Monte Carlo ray tracing, as calculating the last fractions of a global illumination result can take exorbitant resources.
There are other issues about Monte Carlo noise worth considering. With more sampling, the amount of Monte Carlo noise is reduced with respect to our focal parameters and tone mapping range; but since it is a vibrating signal that extends ad infinitum, with the same sampling parameters in the same scene, noise can appear or disappear under other focal parameters and tone mapping ranges. For instance, tone mapping a scene up or down can reveal noise that was otherwise hidden in dark or bright areas. A tone mapping shift can also reveal high-contrast artifacts that need more sampling work, and zooming into an area can reveal noise that otherwise looked like a continuous result. Sampling is indeed a view-dependent variable.
Adaptive sampling works by using a threshold value to compare the color of adjacent pixels. If the threshold condition is not met, additional samples are taken in subsequent AA passes until the discrepancy gets within limits. In this way, sampling happens only in the problematic off-limit areas, without needing to sample everywhere.
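A minimal sketch of the idea, on a one-dimensional row of grayscale pixel values; the threshold, pixel values and function name are illustrative, not YafaRay's actual implementation:

```python
def pixels_needing_samples(pixels, threshold=0.1):
    """Flag pixels whose contrast with a neighbour exceeds the threshold."""
    flagged = set()
    for i in range(len(pixels) - 1):
        if abs(pixels[i] - pixels[i + 1]) > threshold:
            flagged.update((i, i + 1))  # both sides of the jump get more samples
    return sorted(flagged)

row = [0.10, 0.11, 0.50, 0.52, 0.12, 0.12]
print(pixels_needing_samples(row))  # prints "[1, 2, 3, 4]"
```

Only the pixels around the two high-contrast jumps are flagged for further passes; the smooth regions at both ends are already within limits and receive no additional samples.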
Standard adaptive sampling algorithm working on off-limit areas.
In the standard adaptive sampling strategy above, two samples are added in every pass until eight passes are completed, for a total of 16 AA samples.
In the YafaRay project we have substantially changed our view on the standard adaptive sampling algorithm we used for years. In the past, using an arithmetic progression to increase samples in subsequent render passes, we assumed that the diminishing number of pixels solved per pass was a sign of the algorithm's efficiency, as more pixels were getting within limits. In fact, it was the opposite.
The curve described by the arithmetic progression used in the standard adaptive sampling model drifts away from the ideal geometric progression needed to solve Monte Carlo noise, as described in the previous section. As a result, the algorithm was losing efficiency, devoting an insufficient number of samples to the increasingly difficult Monte Carlo noise found in subsequent adaptive passes. Remember, the remaining noise is always harder to solve than the noise solved so far.
Ideal geometric progression of samples versus standard arithmetic progression in an adaptive sampling strategy.
In the latest versions of YafaRay, a panel has been added to multiply Monte Carlo samples by a fixed rate in every subsequent antialiasing pass, in order to get a roughly linear reduction of Monte Carlo noise. One could argue that increasing samples in a geometric progression would soon reach exorbitant numbers. Indeed, this kind of progression only makes sense if increasing samples at such a rate produces an equally strong rate of pixels solved. And that is what really happens!
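The gap between the two progressions shows up in a quick count. Assuming, for illustration, the fixed +2 increment from the standard strategy above and a x2 multiplier per pass:

```python
passes = 8

# Old strategy: a constant +2 samples every pass (arithmetic progression).
arithmetic = [2] * passes

# Multiplier panel: samples doubled every pass (geometric progression).
geometric = [2 * 2 ** p for p in range(passes)]

print("arithmetic total:", sum(arithmetic))  # prints "arithmetic total: 16"
print("geometric total:", sum(geometric))    # prints "geometric total: 510"
```

After eight passes the geometric strategy has invested over thirty times as many samples, but as noted above, those extra samples target the hardest remaining noise, so the rate of pixels solved keeps pace with the cost.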
(demonstration here)