Welcome back. We spent a couple of weeks learning how to use finite differences. This week, we're going to learn an entirely different concept, the so-called pseudospectral method. To do that, let's look again at the acoustic wave equation. In the wave equation, we have this second derivative, which we approximated using the finite-difference approximation as you see here. We also improved the accuracy by extending the operator, in that case from a three-point to a five-point stencil, or even more points, and thereby improved the accuracy of the numerical solution of the wave equation. But can we do better? That question actually led to the so-called transform methods, because we'll make use of the Fourier transform; these were later also coined pseudospectral methods. By introducing these concepts, we're going to learn new strategies, for example for function interpolation: exact interpolation on regular, and partly also irregular, grids. That will play an important role in methods we encounter later, such as Galerkin-type methods, finite-element methods, and other methods. So, let's get started. The first concept we want to learn is function interpolation. Often in physics, we try to approximate a known function, say f(x), by something else. Why would we want to do that if we actually know the function? Well, sometimes in physics we encounter functions that have discontinuities, and a couple of examples are shown here. If we think of the Earth sciences, this could, for example, be a discontinuous Earth model, where suddenly the physical properties change abruptly. Now, most phenomena in physics are described by partial differential equations, and, as you probably know very well, at these discontinuities the derivatives are not defined. That is why we would be very much interested in replacing these step-like functions with something that is differentiable.
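The accuracy improvement from extending the operator can be illustrated numerically. This is a hedged sketch, not code from the lecture: the test function f(x) = sin(x), the evaluation point, and the grid spacing are my own assumptions; the stencil coefficients are the standard central-difference ones.

```python
import numpy as np

# Illustrative sketch (not from the lecture): compare 3-point and
# 5-point finite-difference approximations of the second derivative.
# Test function f(x) = sin(x), point x0, and spacing dx are assumptions.
f = np.sin
x0 = 1.0          # point where we evaluate the second derivative
dx = 0.1          # grid spacing

# 3-point central stencil: (f(x-dx) - 2 f(x) + f(x+dx)) / dx^2
d2_3pt = (f(x0 - dx) - 2.0 * f(x0) + f(x0 + dx)) / dx**2

# 5-point central stencil:
# (-f(x-2dx) + 16 f(x-dx) - 30 f(x) + 16 f(x+dx) - f(x+2dx)) / (12 dx^2)
d2_5pt = (-f(x0 - 2 * dx) + 16.0 * f(x0 - dx) - 30.0 * f(x0)
          + 16.0 * f(x0 + dx) - f(x0 + 2 * dx)) / (12.0 * dx**2)

exact = -np.sin(x0)               # exact second derivative of sin(x)
err3 = abs(d2_3pt - exact)
err5 = abs(d2_5pt - exact)
print(err3, err5)                 # the 5-point error is much smaller
```

The 3-point operator converges like dx squared, the 5-point operator like dx to the fourth power, which is exactly the accuracy gain mentioned above.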
Now, how do we do this? We seek to replace our known function f(x) by a function we call g_N(x), which is a sum over some basis functions phi_i(x) weighted by coefficients a_i. The problem now, of course, is how to find these coefficients, but we haven't yet decided what basis functions we'll use, and then we need to find out what these coefficients actually are. We will do this in an example later on. A very interesting aspect is that these basis functions are known analytically, and that means, as you see here, that the derivatives of the approximation g_N(x) are very easy to calculate. So, let's make an example and find out what are good basis functions phi_i and how to actually calculate the coefficients a_i. What is a good choice for the basis functions? Well, there are several possibilities: trigonometric functions (sine and cosine), Lagrange polynomials, Chebyshev polynomials, and the latter ones we'll actually encounter later with other methods. Now, let's start with sine and cosine functions. The basis functions would be defined as sin(nx), with n starting from one, two, three, four up to infinity, and cos(nx), with n starting from zero, one, up to infinity. Now, these are orthogonal basis functions. How can we see that? Well, orthogonality is defined by taking the integral over some interval of cos(ix) multiplied by cos(jx). If i is not equal to j, these integrals are always zero, and if i equals j, they are nonzero and well defined. Now, how can we understand the orthogonality of functions? It's easier with vectors: if we have two orthogonal vectors, it's actually quite clear that they span a finite-dimensional vector space. To some extent, orthogonal functions are the equivalent: they span an infinite-dimensional function space. So, let's have a look graphically at those cosine functions and how they look.
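The orthogonality relation described above can be checked numerically. This is a small sketch under the assumption that the integration interval is one full period, [-pi, pi]; the grid resolution is arbitrary.

```python
import numpy as np

# Sketch (assumed interval [-pi, pi]): verify numerically that the
# cosine basis functions cos(i x) and cos(j x) are orthogonal for i != j.
x = np.linspace(-np.pi, np.pi, 10001)

def inner(i, j):
    """Trapezoidal approximation of the integral of cos(i x) cos(j x)."""
    y = np.cos(i * x) * np.cos(j * x)
    return np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(x))

print(inner(2, 3))   # i != j: essentially zero
print(inner(2, 2))   # i == j: nonzero, equal to pi
```

For i = j the integral over a full period evaluates to pi, which is the well-defined nonzero value mentioned in the text.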
Here, you see them developing, starting with n equal to zero and one; as n grows, we basically just increase the frequency, and you can see how beautiful these functions are. Now, we will use those functions to interpolate an arbitrary function, and that's what we want to do in the next step.