So, the purpose of this next finite element Jupyter notebook is to implement our finite element algorithm for the elastic wave equation in Python. Something I really want to stress: we compare it with a finite difference solution, also formulated as matrix-vector operations, and you can then appreciate how similar the linear finite element method actually is to the finite difference method in its simplest form. So, let's have a look at the code. Again, we start with the basic equations that we have of course gone through during the lecture, but they are also stated here in the notebook. Now, in terms of the setup, we are solving a fairly sizable problem, although in one dimension that is not so difficult. We take 1,000 grid points. Again, remember these are the element boundaries, so in fact we have 999 elements. We solve on a physical domain of 10,000 meters and assume a shear velocity of 3,000 meters per second. By the way, that is a reasonable velocity for shear waves propagating, for example, in the Earth's crust. As density value, we take 2,500 kilograms per cubic meter. Using that information, we can then calculate the shear modulus, mu. We initialize the source at the center of our domain in terms of grid points; this is at isx equals 500. We initialize the space coordinate x: we first calculate dx, which is the increment, and then initialize x from zero to xmax, which is 10,000 meters, 10 kilometers. Here is an interesting operation: we use the intrinsic np.diff function to calculate the sizes of the elements. The diff function simply takes the difference between adjacent entries. That code is general in the sense that you could in principle also have an irregular grid, meaning that the size of the elements changes with space. Finally, we initialize the parameters rho and mu.
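The setup described above might be sketched roughly as follows; the variable names (nx, xmax, vs, rho0, mu0, isx, h) are my assumptions and need not match the notebook exactly:

```python
import numpy as np

# Grid and material setup, with the values quoted in the text
nx = 1000                  # number of grid points (element boundaries)
xmax = 10000.0             # physical domain size in meters
vs = 3000.0                # shear velocity in m/s
rho0 = 2500.0              # density in kg/m^3
mu0 = rho0 * vs**2         # shear modulus, from vs = sqrt(mu/rho)

isx = nx // 2              # source at the center of the domain (grid point 500)

dx = xmax / (nx - 1)       # grid increment
x = np.linspace(0, xmax, nx)

# Element sizes via np.diff: general enough to allow irregular grids
h = np.diff(x)             # 999 element sizes

rho = np.full(nx, rho0)    # density at the grid points
mu = np.full(nx, mu0)      # shear modulus at the grid points
```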
Let's calculate a time step using the finite-difference-based CFL criterion, and then we can get started. So, as you know, we need a source time function. I'm not going to go into this; you can unfold the code here and look at how it is initialized. It is the first derivative of a Gaussian in time. Now, what is important for any finite element method: we have to initialize the system matrices. Here is the mass matrix. Again, the equations are given, and you see that the mass matrix here is not a diagonal matrix, it is a banded matrix, and we have to calculate its inverse. It is initialized with the code given here, and a very important aspect is that we actually have to compute that inverse. Now, this is an inversion of a matrix that is 1,000 by 1,000. That of course runs almost instantly on a computer today, but I would like to remind you that for two- and three-dimensional problems these matrices will be huge; you would not even be able to define them as dense matrix structures, and you would have to resort to different strategies. We are not going to go into this during this course, but it is important. So, we initialize the mass matrix, calculate its inverse, and store it in the matrix called Minv. The second system matrix we have to initialize is the stiffness matrix, well known even from the static case. Again, I remind you that it looks like the second-derivative operator in finite differences, and the code to initialize it, basically the same as in the static case, is given here. Now, what I would like to do here is also show these matrices graphically. So, on the left-hand side you see a small section of the finite element stiffness matrix, and on the right-hand side the mass matrix. Now, the mass matrix is also banded.
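The assembly of the two system matrices might look like the following sketch for linear basis functions. This is my element-by-element version, not necessarily the loop used in the notebook; it allows irregular element sizes via np.diff, as the text suggests:

```python
import numpy as np

nx = 1000
x = np.linspace(0, 10000.0, nx)
h = np.diff(x)                     # element sizes (999 elements)
rho = 2500.0
mu = 2500.0 * 3000.0**2

# Banded mass matrix M_ij = integral of rho * phi_i * phi_j over the domain;
# each element of size h[k] contributes a 2x2 block to nodes k and k+1
M = np.zeros((nx, nx))
for k in range(nx - 1):
    M[k, k]         += rho * h[k] / 3
    M[k + 1, k + 1] += rho * h[k] / 3
    M[k, k + 1]     += rho * h[k] / 6
    M[k + 1, k]     += rho * h[k] / 6

# Fine for a 1000x1000 system; prohibitive as a dense operation in 2D/3D
Minv = np.linalg.inv(M)

# Stiffness matrix K_ij = integral of mu * phi_i' * phi_j', assembled the same way
K = np.zeros((nx, nx))
for k in range(nx - 1):
    K[k, k]         += mu / h[k]
    K[k + 1, k + 1] += mu / h[k]
    K[k, k + 1]     -= mu / h[k]
    K[k + 1, k]     -= mu / h[k]
```

Both matrices come out symmetric and tridiagonal, which is the banded structure shown in the plots.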
In fact, its band is wider than that of the stiffness matrix. We will now compare this with the equivalent formulation of the finite difference method in terms of matrix-vector operations. So, we can define the finite difference operator as a matrix-vector product as well, and the matrix is given here analytically: its rows contain 1, -2, 1, divided by dx squared, and it is very simply initialized in this form here as the difference matrix D. We also need an equivalent mass matrix, which in this case is a diagonal matrix; we call it Mfd, and it is simply the inverse of the densities at the element boundaries, so 1 over rho_i. If we plot these, you can appreciate that the finite difference derivative matrix looks, here in color, exactly like the finite element stiffness matrix. The only difference is that the mass matrix for the finite difference case is now diagonal. We can now proceed to look at the extrapolation algorithms of both the finite element and finite difference approximations in this matrix-vector notation. The finite element solution is again given here; we have just derived it in the course. The interesting aspect is that the finite difference solution to the same problem looks exactly the same. We can see this here in the code; this is the solution scheme. First, the finite element method: it is simply an implementation of the above equation, written elegantly in Python with the @ sign, which denotes a matrix-vector product, as you might also know from Matlab. The beautiful aspect is that if we now use the finite difference method, and I am using the vector field p instead of u just to distinguish it from the finite element vectors, you can see it looks exactly the same, except for a minus sign here, but of course the matrix also has a different sign.
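A minimal, self-contained sketch of the two extrapolation loops side by side might look like this. I assume uniform elements, simple point-source injection, and illustrative parameter values; the boundary rows of the matrices are kept simple, since the wave does not reach the edges in this short run:

```python
import numpy as np

# Uniform grid, material, and a time step from the CFL criterion
nx, nt = 201, 210
dx = 10.0
vs, rho = 3000.0, 2500.0
mu = rho * vs**2
dt = 0.5 * dx / vs                  # Courant number 0.5

# Source time function: first derivative of a Gaussian
t = np.arange(nt) * dt
f0 = 20.0                           # dominant frequency in Hz (illustrative)
t0 = 2.0 / f0
src = -2.0 * (t - t0) * f0**2 * np.exp(-(f0 * (t - t0))**2)
isx = nx // 2                       # source at the center

# Finite element system matrices for linear elements (tridiagonal, banded)
off = np.ones(nx - 1)
M = rho * dx / 6 * (4 * np.eye(nx) + np.diag(off, 1) + np.diag(off, -1))
K = mu / dx * (2 * np.eye(nx) - np.diag(off, 1) - np.diag(off, -1))
Minv = np.linalg.inv(M)

# Finite difference matrices: 1 -2 1 / dx^2 operator and a diagonal mass matrix
D = (-2 * np.eye(nx) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
Mfd = np.eye(nx) / rho

u = np.zeros(nx); uold = np.zeros(nx)   # finite element displacement
p = np.zeros(nx); pold = np.zeros(nx)   # finite difference displacement
f = np.zeros(nx)

for it in range(nt):
    f[:] = 0.0
    f[isx] = src[it]
    # FE: u_new = dt^2 * Minv (f - K u) + 2 u - u_old
    unew = dt**2 * (Minv @ (f - K @ u)) + 2 * u - uold
    uold, u = u, unew
    # FD: structurally identical; the sign difference is absorbed by D
    pnew = dt**2 * (Mfd @ (mu * (D @ p) + f / dx)) + 2 * p - pold
    pold, p = p, pnew
```

Apart from the sign convention inside the parentheses, the two update lines are the same matrix-vector expression, which is exactly the point made above.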
But this serves to show that the finite element method in its basic, linear form and the finite difference algorithm in its simplest, linear form are almost identical. But now let's look at the simulation. So, we run the code, comparing the finite difference solution with the finite element solution. We are expecting the integral of the source time function, so a Gaussian propagating to the left and to the right. The black curve is the finite element solution, the red curve is the finite difference solution. Now, something very interesting happens. Both are dispersive, which is what we expect, but they are dispersive in different directions: the finite difference solution is dispersive in one direction, the finite element solution in the other. They have almost the same dispersive character, so in terms of quality the two solutions are basically identical. That is a very important point. To conclude: the linear finite element implementation and the linear finite difference solution are basically identical. The key difference is that the finite element method requires the inversion of a potentially huge system matrix, which finite differences do not. It is part of the reason why, at least in seismology and geophysics, the advantage of the finite element method over the finite difference method was not strong enough to lead to the finite element method being widely used in seismology and computational wave propagation.
We'll get to that next week, when we talk about the spectral element method, which is a modification of the linear finite element scheme with different basis functions. There, the problem with the matrix inversion can actually be removed, leading to a very attractive scheme that is widely in use today.