In all previous sections, we considered cases of coherent multimode radiation, that is to say radiation composed of many different modes with well-defined phases. In fact, you should know from your classical optics course that most of the light that you receive, from the sun or from artificial lamps, is made of incoherent components. Actually, before the invention of the laser in 1960, the only available light sources emitted incoherent light. This did not prevent many optics luminaries, from Young and Fresnel to Zernike and Gabor, from observing interference, diffraction and other coherent phenomena using such sources. Such sources still play an important role in modern optics technology, and it is important to know how to describe the light they emit using the quantum optics formalism. We know that light emitted by an incandescent source, such as a bulb or the sun, can be conveniently described as composed of incoherent elementary waves. By analogy with classical statistical optics, we describe it, in quantum optics, with a tensor product of quasi-classical states, whose characteristic complex numbers alpha_ℓ have phases that must be considered mutually independent random variables. Each of these random phases is uniformly distributed over an interval of two pi, so the average of its complex exponential is null. The difference phi_ℓ minus phi_m is also a uniformly distributed random variable, so that the average of the complex exponential of phi_ℓ minus phi_m is zero, except of course if ℓ and m are identical. To be specific, we consider elementary modes that are monochromatic running plane waves, but other choices would lead to similar conclusions. The associated classical complex amplitude E-plus class is then a superposition of such waves, with a random phase in each term of the sum. For each calculated quantity, we must then average the result over these random phases. Let us for instance calculate the average rate of single detections.
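The phase-averaging argument can be checked numerically. The sketch below is my own illustration, not part of the lecture; the mode moduli are arbitrary values chosen for the example. It verifies that the ensemble average of exp(i(phi_ℓ - phi_m)) vanishes for two distinct modes and equals one for identical modes, and that the phase-averaged detection rate reduces to the quadratic sum of the moduli.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

# Independent phases, each uniformly distributed over an interval of 2*pi
phi_l = rng.uniform(0.0, 2.0 * np.pi, n_trials)
phi_m = rng.uniform(0.0, 2.0 * np.pi, n_trials)

# Ensemble average of exp(i(phi_l - phi_m)): zero for l != m, one for l == m
cross_term = np.mean(np.exp(1j * (phi_l - phi_m)))
same_term = np.mean(np.exp(1j * (phi_l - phi_l)))
print(abs(cross_term))   # close to 0
print(same_term.real)    # exactly 1

# Average single-detection rate, proportional to |sum_l alpha_l exp(i phi_l)|^2,
# averaged over the random phases: the cross terms disappear and only the
# quadratic sum of the moduli survives.
moduli = np.array([1.0, 0.5, 2.0])   # arbitrary moduli for three modes
phases = rng.uniform(0.0, 2.0 * np.pi, (n_trials, moduli.size))
field = (moduli * np.exp(1j * phases)).sum(axis=1)
print(np.mean(np.abs(field) ** 2))   # close to 1 + 0.25 + 4 = 5.25
```

With a few hundred thousand draws the residual cross terms are at the percent level, which is enough to see the mechanism at work.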
It can be expressed as a function of the classical field associated with the quasi-classical state. To proceed with the calculation, one must average a double sum, and all cross terms involving two different modes disappear because of the statistical average. You are left with a quadratic sum of terms associated with the various components. This is exactly what one obtains in the classical statistical optics formalism. Similar calculations allow one to describe interference or diffraction with light emitted by classical sources. This is for instance how you can describe Young fringes with light from an extended source, with many different frequencies and directions in the light arriving on the holes. Summing over these components, one can calculate the visibility capital V of the fringes, which can be significant if the spreads in frequency and direction are not too broad. Since the beginning of this section, I have several times mentioned averaging, indicated by an overbar. That overbar refers to ensemble averaging. The notion of ensemble average stems from classical statistics. It is in particular used in Gibbs statistical mechanics. The idea is to think of many hypothetical replicas of the system we are studying. The replicas are different from each other, since we are talking of a stochastic system, but they all fulfill the constraints imposed by the situation. The notion of statistical ensemble can be used for a random variable. Consider for instance the amplitude alpha_ℓ, a complex random variable with a constant modulus rho and a random phase. We think then of many realizations of alpha_ℓ with the same modulus, but phases uniformly distributed over two pi. A representation in the complex plane consists of a circle whose radius is the modulus rho, on which all the possible values of alpha_ℓ are located. The ensemble average of alpha_ℓ is null, but that does not mean that alpha_ℓ itself is null: indeed, the average of its squared modulus is obviously equal to rho squared.
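This statistical ensemble is easy to sample numerically. The sketch below is my own illustration, not from the lecture; the value of rho is arbitrary. It draws many realizations of alpha_ℓ on the circle of radius rho and checks the averages just discussed:

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 2.0                                   # fixed modulus (arbitrary value)
phi = rng.uniform(0.0, 2.0 * np.pi, 500_000)
alpha = rho * np.exp(1j * phi)              # realizations on a circle of radius rho

print(abs(alpha.mean()))            # ensemble average of alpha: close to 0
print(np.mean(np.abs(alpha) ** 2))  # average squared modulus: rho**2 = 4
print(np.mean(alpha.real ** 2))     # average of Re(alpha)^2: rho**2 / 2 = 2
```

The last line anticipates the result quoted next for the real and imaginary parts.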
The ensemble average of the square of its real or imaginary part is equal to rho squared over 2. Be sure to know how to demonstrate that result. The notion of ensemble average is also quite useful to describe random processes. "Random process" is a big name for what were called random functions when I was a student. We think of a statistical ensemble of similar functions obeying the same constraints. As an example of a time-dependent random process, I show here the velocity of a Brownian particle in a liquid. You can see three samples of equally probable replicas of the evolution of the velocity as a function of time. Just to satisfy your curiosity, it is composed of sudden jumps, when the Brownian particle is hit by a particle of the liquid, followed by an exponential decay of the velocity towards zero because of viscous damping. Averaging bears on all the fictitious replicas. This allows you to give a meaning to the notion of average at a given instant. In fact, for this particular example, the average at every instant is null, but the variance is not null, and it can be calculated as shown by Einstein in 1905. This celebrated formula, which is a particular case of the fluctuation-dissipation theorem, states that the average of the squared velocity, which is proportional to the temperature, is equal to the ratio of the diffusion coefficient capital D to the damping coefficient Gamma. Notice that the average and the variance are time independent, since this is a stationary random process. The notion of ensemble average is particularly useful if the stationary random process is ergodic. Ergodicity means that the ensemble average is equal to a time average taken over a long enough time interval. This is the case for the random process shown here, provided that you average over a time long compared to the time constant of the damped exponentials, and to the average interval between two collisions.
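The Brownian velocity process can be mimicked with a short simulation. The sketch below is my own illustration, not from the lecture, and it simplifies the discrete collisions into a continuous random force (an Ornstein-Uhlenbeck model); the values of Gamma and D are arbitrary. It checks Einstein's relation, average of v squared equal to D over Gamma, and uses ergodicity by taking a time average on a single long trajectory:

```python
import numpy as np

rng = np.random.default_rng(2)
Gamma = 1.0        # damping coefficient (arbitrary units)
D = 0.5            # diffusion coefficient (arbitrary units)
dt = 0.01
n_steps = 500_000

# Euler-Maruyama integration of dv = -Gamma * v * dt + sqrt(2 * D) * dW:
# random kicks followed by viscous damping of the velocity towards zero
kicks = rng.normal(0.0, np.sqrt(2.0 * D * dt), n_steps)
v = np.empty(n_steps)
v[0] = 0.0
for k in range(n_steps - 1):
    v[k + 1] = v[k] - Gamma * v[k] * dt + kicks[k]

# Ergodicity: a TIME average over one long trajectory (after discarding an
# initial transient) reproduces the ensemble-average results
v_stat = v[10_000:]
print(v_stat.mean())            # close to 0
print(np.mean(v_stat ** 2))     # close to D / Gamma = 0.5 (Einstein's relation)
```

Averaging over a time long compared to 1/Gamma is what makes the single-trajectory time average agree with the ensemble prediction.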
For ergodic processes, you can use the results obtained by ensemble averaging to find properties of the particular sample that you are considering. This is an extraordinarily useful tool. The notion of ensemble average is not very different from the notion of quantum average. Consider for instance the quantum average of the electric field observable in a single-mode quasi-classical state. It assumes a sinusoidal form, E classical, represented here by the red line. What is the meaning of the quantum average indicated here by brackets? In fact, it implies that if the measurement of the field were repeated many times on the same state, at the same instant and position, the results of the measurements would be each time different. They would be distributed around an average: that is the quantum average. The spread of the results around the average is characterized by a variance, which can also be calculated by quantum averaging, that is to say, averaging over many hypothetical repetitions of the experiment. The green lines correspond to the average plus or minus the square root of the variance. The band between the green lines is where most of the results would be found for a quasi-classical state with a mean photon number of 10. So quantum average and classical ensemble average have definitely some similarities, but they are nevertheless quite different in the following sense. For a classical random quantity, we consider that the system we are observing is a sample with a well-defined value of the quantity of interest. The reason why we use probabilities is that we do not know that particular value. In the quantum case, the standard interpretation is that the system cannot be thought of as having a defined value of the observable, for instance the electric field, until the moment when we do the measurement.
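These repeated measurements can be pictured numerically. The sketch below is my own illustration under simplifying assumptions, not from the lecture: for a quasi-classical state, each field measurement is modeled as a Gaussian draw centered on the classical sinusoid, with a spread equal to that of the vacuum, set here to 1 in units of the one-photon field. The choice of a mean photon number of 10 matches the figure described above.

```python
import numpy as np

rng = np.random.default_rng(3)
mean_photons = 10                       # mean photon number of the state
alpha = np.sqrt(mean_photons)
t = np.linspace(0.0, 2.0 * np.pi, 200)  # one optical period (omega = 1)

# Quantum average of the field, in units of the one-photon field:
# the classical sinusoid (the red line of the figure)
E_avg = 2.0 * alpha * np.sin(t)
dE = 1.0   # square root of the variance: that of the vacuum, whatever alpha

# Many hypothetical repetitions of the measurement at each instant
samples = rng.normal(E_avg, dE, size=(10_000, t.size))

print(np.max(np.abs(samples.mean(axis=0) - E_avg)))  # close to 0
print(np.mean(samples.std(axis=0)))                  # close to dE = 1
```

Most draws fall within one standard deviation of the sinusoid, which is exactly the band between the green lines.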
You may think that the distinction is artificial, and that it is only a matter of taste to believe that the electric field was determined before the measurement or not. In fact, this point of view, applied to the quantum state, leads to a contradiction with some experimental results in certain particular quantum situations. A celebrated example of such a situation was discovered by Einstein and collaborators in 1935, and is known as the Einstein-Podolsky-Rosen situation. We will encounter such situations later in this course, in the lesson about Bell's inequalities. So be careful: it is tempting to think of quantum averages as similar to classical ensemble averages, and that is often useful. But keep in mind that there are situations where such a point of view is proven wrong by experimental facts. Recognizing such situations is something you should be able to do at the end of this course. Remember what we did to calculate the average rate of single photodetections for incoherent light, described as a tensor product of quasi-classical states with random phases. First, we took the quantum average, assuming that all the phases phi_ℓ had given values. Secondly, we performed an ensemble average over the random variables phi_ℓ. So in fact, the two kinds of averaging, quantum and statistical, are involved in such calculations. There is a more advanced formalism of quantum mechanics, based on the density matrix, which allows one to perform the two kinds of averaging in a single calculation. When you encounter that formalism, you should remember that, efficient as it is, it involves in fact the two kinds of averages.
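The density-matrix remark can be made concrete with a small sketch. The example below is my own illustration, not part of the lecture, using a truncated number basis; the modulus of alpha and the truncation dimension are arbitrary choices. Ensemble-averaging the projector on a quasi-classical state over its random phase yields a diagonal density matrix, a Poissonian mixture of number states, with which a single trace then performs both kinds of averaging at once.

```python
import numpy as np
from math import factorial

alpha_mod = 1.5   # modulus of alpha; the phase will be averaged over
dim = 40          # truncation of the number basis (assumed large enough)
n = np.arange(dim)
sqrt_fact = np.sqrt(np.array([float(factorial(k)) for k in n]))

def coherent(phi):
    """Quasi-classical state |alpha e^{i phi}> in the number basis."""
    a = alpha_mod * np.exp(1j * phi)
    return np.exp(-alpha_mod ** 2 / 2.0) * a ** n / sqrt_fact

# Ensemble-average the projector |psi><psi| over the uniform random phase
phis = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
rho = np.zeros((dim, dim), dtype=complex)
for phi in phis:
    c = coherent(phi)
    rho += np.outer(c, c.conj()) / phis.size

# The coherences, which carried the phase, are washed out: rho is diagonal
print(np.max(np.abs(rho - np.diag(np.diag(rho)))))   # close to 0

# A single trace with rho now contains both kinds of averaging at once
a_op = np.diag(np.sqrt(np.arange(1.0, dim)), k=1)    # annihilation operator
print(abs(np.trace(rho @ a_op)))                     # field average: 0
print(np.trace(rho @ np.diag(n)).real)               # <N> = |alpha|^2 = 2.25
```

The vanishing field average and the surviving mean photon number are exactly the single-detection result obtained earlier by the two-step averaging.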