So, spectral elements in one dimension, a quite complicated code; let's see how we can implement this in Python. We first recall the equations and the steps leading up to the final algorithm in the notebook, and you can go through this if you want. But let's start with the initialization of the setup. We have ne = 125 elements, and we start with Lagrange polynomials of order two, which is almost the lowest possible (order one would also work). We will calculate 600 time steps. Our physical domain is 10 kilometers, that is 10,000 meters, and our source wavelet has a dominant period of 0.1 second, which of course corresponds to 10 hertz. Now, we get the collocation points inside each element from the gll() routine, which we have seen before in the Lagrange interpolation, and knowing the physical domain and the number of elements, we can calculate the length of each element, which is constant here. So this setup is not h-adaptive, if you want; h traditionally denotes the element length, and we keep it constant, but this could easily be relaxed. A key point is the vector of global coordinates, initialized here as xg, which is also used later for plotting. Now, the Jacobian, remember, depends on the length of the element: it is le, the element length, divided by two, and we also need the inverse Jacobian. For the calculation of the stiffness matrix we need the derivative values of the Lagrange polynomials at the collocation points. We write these into an array l1d, which is calculated by calling a subroutine named lagrange1st that I encourage you to look at. Now, let's go through how we initialize the various matrices. The first one is very simple.
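The initialization described above can be sketched as follows. This is not the notebook's code verbatim: the gll() helper is reimplemented here as a minimal stand-in (interior GLL points are the roots of the derivative of the Legendre polynomial of degree N), and the variable names mirror those mentioned in the text.

```python
import numpy as np

def gll(N):
    """GLL collocation points and integration weights for order N.
    Minimal stand-in for the course's gll() helper (assumption)."""
    PN = np.polynomial.legendre.Legendre.basis(N)
    # Interior points are the roots of P_N'(x); the endpoints are -1 and 1
    xi = np.concatenate(([-1.0], PN.deriv().roots(), [1.0]))
    w = 2.0 / (N * (N + 1) * PN(xi) ** 2)   # GLL quadrature weights
    return xi, w

# Setup mirroring the parameters in the text
ne, N = 125, 2          # number of elements, order of the Lagrange polynomials
xmax = 10000.0          # physical domain length (m)
xi, w = gll(N)          # collocation points and weights on [-1, 1]
le = xmax / ne          # constant element length (not h-adaptive)
J = le / 2.0            # Jacobian of the element mapping
Jinv = 1.0 / J          # inverse Jacobian

# Global coordinates xg: map element-local GLL points to physical space,
# sharing the boundary point between neighbouring elements
ng = ne * N + 1
xg = np.zeros(ng)
for e in range(ne):
    xg[e * N : (e + 1) * N + 1] = e * le + 0.5 * (xi + 1.0) * le
```

Note how neighbouring elements overwrite each other's shared boundary point, so the global grid has ne * N + 1 points rather than ne * (N + 1).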
Remember, the mass matrix is diagonal, and each diagonal element is simply rho multiplying the integration weight at the corresponding collocation point, times the Jacobian J. These elemental contributions are then put together into the global mass matrix with the double for-loop here, and you can appreciate that at the element boundaries the contributions from neighbouring elements are added. Remember, we need the inverse of the mass matrix, which we can obtain trivially by inverting the values on the diagonal; we also write it into the matrix Minv. Again, it's important to note that this is not what we would do for a realistic simulation, but it allows us to write the algorithm exactly the way we developed the mathematical formulas.

Now, the stiffness matrix is calculated in this block here, and you can see that the final algorithm is implemented basically as we have written it in mathematical notation: we sum over the collocation points, multiplying mu, which is constant here but could be modified for each collocation point, by the integration weights w, the inverse of the Jacobian, and the derivative values of the Lagrange polynomials. We always first calculate everything at the elemental level and then add it all up to obtain the final global stiffness matrix, which we will look at in a second. So that's the mass and stiffness matrices, and then the solution algorithm is very, very simple: it's basically this one line extrapolating the displacement vector to unew, and otherwise it's exactly the equation we have written. Recent Python versions allow the @ sign to be used for matrix-vector operations. This might not be computationally efficient, but it is of course a very dense way of writing the formula. So, let's see how this works; we have now initialized everything.
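The assembly and the one-line extrapolation can be sketched like this. Again a hedged reconstruction, not the notebook verbatim: gll() and lagrange1st() are minimal stand-ins, and the values of rho, mu, and dt are assumptions chosen only for illustration.

```python
import numpy as np

def gll(N):
    # GLL points and weights; stand-in for the course helper (assumption)
    PN = np.polynomial.legendre.Legendre.basis(N)
    xi = np.concatenate(([-1.0], PN.deriv().roots(), [1.0]))
    return xi, 2.0 / (N * (N + 1) * PN(xi) ** 2)

def lagrange1st(N, xi):
    # l1d[i, j]: derivative of the i-th Lagrange polynomial at node xi[j]
    l1d = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        others = np.delete(xi, i)
        p = np.poly1d(others, r=True) * (1.0 / np.prod(xi[i] - others))
        l1d[i, :] = p.deriv()(xi)
    return l1d

# Homogeneous medium; rho, mu (= rho * c^2) and dt are assumed values
ne, N = 125, 4
rho, mu = 2500.0, 2500.0 * 3000.0 ** 2
le = 10000.0 / ne
J, Jinv = le / 2.0, 2.0 / le
xi, w = gll(N)
l1d = lagrange1st(N, xi)
ng = ne * N + 1

# Diagonal global mass matrix: elemental entries rho * w_i * J, with the
# shared boundary points of neighbouring elements added together
M = np.zeros(ng)
for e in range(ne):
    for i in range(N + 1):
        M[e * N + i] += rho * w[i] * J
Minv = np.diag(1.0 / M)   # trivial inverse of a diagonal matrix

# Elemental stiffness: K_ij = sum_k mu * w_k * Jinv * l_i'(x_k) * l_j'(x_k)
Ke = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        for k in range(N + 1):
            Ke[i, j] += mu * w[k] * Jinv * l1d[i, k] * l1d[j, k]

# Assemble the global stiffness matrix from overlapping elemental blocks
K = np.zeros((ng, ng))
for e in range(ne):
    K[e * N : (e + 1) * N + 1, e * N : (e + 1) * N + 1] += Ke

# One time-extrapolation step, written exactly like the formula using '@'
dt = 1e-4
u, uold = np.zeros(ng), np.zeros(ng)
f = np.zeros(ng); f[ng // 2] = 1.0   # point force at the domain centre
unew = dt ** 2 * (Minv @ (f - K @ u)) + 2 * u - uold
```

Storing the full dense Minv and K is exactly the "not for a realistic simulation" point made above: it matches the mathematics one-to-one, but a production code would keep the mass matrix as a vector and never form global dense matrices.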
Remember, we use an order of two here, so let's see what happens. We inject the source at the center of the domain, and of course this is intentional. We see the waves propagating away from the source in both directions, which is what we expect. But since we inject the first derivative of a Gaussian, we would expect a Gaussian-shaped pulse to propagate, and this is not what we see. So this setup does not lead to an accurate result. Now, this is where the spectral element method is very powerful, and I say this again and again: we just increase one value, the maximum order of the Lagrange polynomials inside our elements. But beware what this means. We suddenly increase the global number of points, and the smallest distance between the points decreases; if we assume a constant Courant criterion, that also means the time step decreases. So we pay a price, but hopefully the simulation becomes more accurate. Let's see what happens here. You can immediately appreciate the improvement: there is still a slightly wiggly signature in between the two pulses, but going from order two to order three has dramatically improved the accuracy of the simulation.

So let's increase the order of the Lagrange polynomials to four and see whether we can get rid of this very strong numerical dispersion. The source time function is injected at the center, and it looks much, much better than before. But at the center of the domain, in between the two pulses propagating away from each other, you can still see small ripples, and of course they don't go away; these effects will grow the more time steps you propagate. So let's further increase the polynomial order, up to six, and, as discussed before, the time step then also gets smaller.
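The point about the smallest point distance can be made concrete on the reference element. A minimal sketch (the gll_points() helper is an assumption, not the notebook's code): the minimum spacing between GLL collocation points on [-1, 1] shrinks as the order N grows, roughly like 1/N^2 near the element edges, which is what forces the smaller time step.

```python
import numpy as np

def gll_points(N):
    # GLL collocation points on [-1, 1]; stand-in for the course helper
    PN = np.polynomial.legendre.Legendre.basis(N)
    return np.concatenate(([-1.0], PN.deriv().roots(), [1.0]))

# Smallest node spacing on the reference element for increasing order
min_spacing = {N: np.min(np.diff(gll_points(N))) for N in (2, 3, 4, 6)}
```

For order two the nodes are simply {-1, 0, 1} with spacing 1; from order three on, the edge nodes cluster and the minimum spacing drops monotonically.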
Let's increase the number of time steps here and run again. Because we always show the snapshots after a certain number of time steps, this is now a little slower: we need more floating-point operations. Again, as we always say, there is no free lunch; if we want more accuracy, we usually have to pay with more floating-point operations. Now it really looks like we have a very clean and accurate simulation. There are no signs of numerical dispersion, at least in this initial stage of our simulation. Again, it's very nice to see that we changed only one parameter, the order of the Lagrange polynomials. This means that in each element we have more collocation points, so in every part of the physical domain the simulation is more accurate, at the price of a globally increased number of degrees of freedom, because the physical domain of course remains the same. The smallest space increment gets smaller, and remember, the Courant criterion was c times dt over dx smaller than some epsilon, which we have specified and keep constant; that means our time step gets smaller. Hidden in all this is the fact that, because the difference between the largest and smallest node distances inside the elements grows with the order, in actual real simulations the order of the Lagrange polynomials is often not taken beyond four. That means you simply have to adapt your parameter setup and your physical models so that you can obtain an accurate simulation with N = 4. So now you have seen how the spectral element method works, at least for the homogeneous case, using our Python code, and I just want to bring some of these points home with the results you can see in the background for the homogeneous case, compared with an analytical solution.
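The chain "higher order, smaller smallest spacing, smaller time step" can be written out directly from the Courant criterion eps = c * dt / dx_min, i.e. dt = eps * dx_min / c. A sketch under assumed values: the wave speed c and the Courant number eps below are illustrative choices, not the notebook's.

```python
import numpy as np

c, eps, ne, xmax = 3000.0, 0.5, 125, 10000.0   # assumed c and eps

def gll_points(N):
    # GLL collocation points on [-1, 1]; stand-in for the course helper
    PN = np.polynomial.legendre.Legendre.basis(N)
    return np.concatenate(([-1.0], PN.deriv().roots(), [1.0]))

dts = {}
for N in (2, 4, 6):
    le = xmax / ne                                      # element length (m)
    # Smallest physical spacing: reference-element spacing scaled by le/2
    dx_min = 0.5 * le * np.min(np.diff(gll_points(N)))
    dts[N] = eps * dx_min / c                           # stable time step (s)
```

Doubling or tripling the order shrinks dt correspondingly, which, together with the larger number of degrees of freedom, is exactly the "price" mentioned above.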
Remember, when we inject a source time function, in the one-dimensional case we actually record its time integral. So in the results here you see the first derivative of a Gaussian, while the second derivative of a Gaussian was actually injected into the wave equation. The title of each subplot tells you the order of the scheme, and you can clearly see that an order-two scheme does not provide the correct result; it's really wrong. Again, one of the really nice things about a spectral element code is that you change one parameter, the order of the scheme. This of course increases the number of degrees of freedom, the number of points in your domain, and because you keep the physical domain constant, the distance dx between the collocation points becomes smaller, so you have to employ a smaller time step. But it's really nice that you only change one parameter, say from order two to order three or order four, and you can play around with this in your Python code. As you can see here, if you go to higher and higher orders, the solution converges and you get a very accurate solution. So, we will now look at an example with a heterogeneous model in one dimension, and afterwards we will discuss the problem of how to find out, for a complicated model, whether a solution is accurate or not.
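The statement that the recorded displacement is the time integral of the injected wavelet can be checked numerically: integrating the second derivative of a Gaussian in time recovers its first derivative. A small sketch; the centre time t0 and the width parameter a are assumed values, not from the notebook.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 1.0, dt)
t0, a = 0.5, 100.0                                      # assumed centre, 1/width^2

gauss = np.exp(-a * (t - t0) ** 2)
d1 = -2.0 * a * (t - t0) * gauss                        # first derivative
d2 = (4.0 * a ** 2 * (t - t0) ** 2 - 2.0 * a) * gauss   # second derivative

# Crude running time integral of the injected source time function:
# this reproduces the first-derivative-of-Gaussian pulse up to O(dt)
recorded = np.cumsum(d2) * dt
```

So injecting the second derivative and seeing the first derivative in the seismogram, as in the background figure, is exactly the expected behaviour, not a bug.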