So we're getting towards the end of our finite element chapter. But before we end, I'd like to talk about a general way of deriving the basis functions, which are also called shape functions, because that will play an important role later if you're looking into finite element methods in higher dimensions, 2D or 3D. We're also trying to answer the question whether there is a similar way of extending the accuracy of our finite element solutions, just like in finite difference methods, where we used longer operators to become more accurate.

Now, recall that our aim is to replace our displacement field in the continuous description by a sum over basis functions weighted by coefficients. We often call these coefficients u_i because they multiply basis functions that are one at a certain element boundary, so they are the values of the solution there. We've seen these basis functions before in the static case and also in the dynamic case. But the question now is: is there a formal way of deriving these basis functions that might help us extend this to higher order, or better accuracy?

Now, the first thing is that we again have to change the coordinate system. We look at this at the element level, and we replace the global coordinate x by ξ = (x - x_i) / (x_{i+1} - x_i). That means the space inside the element is defined between 0 and 1, so ξ is an element of the interval [0, 1]. And we're going to use this definition to develop a system of equations that allows us to derive, in a formal way, the so-called shape or basis functions.

So let's look at this graphically first. We're seeking a linear function inside an element, defined now in the local coordinate system from 0 to 1. On the left boundary, we have a value u1; on the right boundary, we have a value u2. So formally we're seeking a linear function u(ξ) = c1 + c2·ξ, and we want to find the coefficients c1 and c2 given that we know the functional values u1 and u2 at the boundaries.

Well, that's very simple. Let's take ξ = 0: then u1 = c1, and we already know the coefficient c1. If ξ = 1, then u2 = c1 + c2. So as you can see, this is formally a system of two equations with two unknowns. We can also write this more elegantly as a matrix-vector system and solve it formally. That's given here: you have the vector u, in this case (u1, u2), and a system matrix, which we can call A, multiplying the vector of coefficients (c1, c2). And of course, we can now formally solve for the coefficients by matrix inversion. It's very simple, you can do it with pencil and paper, and you obtain the solution for the coefficients c1 and c2 as written here.

So with the solution c1 = u1 and c2 = -u1 + u2, we can go back into the original equation. If we rearrange the right-hand side a little, we obtain two functions: u1 multiplies 1 - ξ, and u2 multiplies ξ. We now define N1(ξ) = 1 - ξ as the first shape function and N2(ξ) = ξ as the second shape function. Now, this reminds us of the cardinal functions that we encountered previously, in particular in the week where we discussed the spectral method.
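To make this concrete, here is a minimal sketch, not taken from the lecture, of how you could reproduce this derivation symbolically in Python with SymPy: impose the two collocation conditions, solve for c1 and c2, and collect the terms multiplying u1 and u2 to recover the shape functions. The variable names simply mirror the symbols used above.

```python
# A minimal sketch (not from the lecture): derive the linear shape functions
# symbolically by imposing u(0) = u1 and u(1) = u2 and regrouping by u1, u2.
import sympy as sp

xi, u1, u2 = sp.symbols('xi u1 u2')
c1, c2 = sp.symbols('c1 c2')

# Linear trial function inside the element, in the local coordinate xi in [0, 1]
u = c1 + c2 * xi

# Collocation conditions at the two element boundaries
eqs = [sp.Eq(u.subs(xi, 0), u1), sp.Eq(u.subs(xi, 1), u2)]
sol = sp.solve(eqs, [c1, c2])   # -> c1 = u1, c2 = -u1 + u2

# Substitute back and collect the factors multiplying u1 and u2
u_xi = sp.expand(u.subs(sol))
N1 = u_xi.coeff(u1)             # 1 - xi
N2 = u_xi.coeff(u2)             # xi
print(N1, N2)
```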
So basically, we now have a formal way of developing these basis, shape, or cardinal functions with which we multiply the actual values of our solution. In this case, u1 is the displacement value at the left boundary, and u2 is the displacement value at the right boundary. So that's a very elegant way of deriving them. But the question now is: we've seen these linear basis functions before, but can we extend them to higher-order polynomials that are potentially more accurate than the linear basis functions?

Yes, it is possible to extend this concept to higher orders by adding one or more collocation points inside the element. Here we put one in the middle, so now we have three points: one at ξ = 0, one at ξ = 0.5, and one at ξ = 1, which is the right boundary of the element. Now let's proceed in exactly the same way, this time with a quadratic function u(ξ) = c1 + c2·ξ + c3·ξ², which gives us three equations for our three coefficients. At ξ = 0 we obtain u1 = c1, and so forth for u2 and u3. Again, this can be formulated as a matrix-vector system: the system matrix multiplying the vector of coefficients is equal to the vector u. Then we invert that system of equations for the coefficients c1, c2, and c3, and that's given here.

With the solutions for the coefficients, we go back to the original equation that we posed at the very beginning, and we proceed as we did before, this time obtaining three so-called shape functions, N1, N2, and N3. They're shown in this graphic here. Again, just like with the cardinal functions of the spectral method, we can see that inside the element each shape function is 1 at its own collocation point and 0 at all the other collocation points; for example, N1 is 1 at ξ = 0 and 0 at the other two points, and N2 and N3 behave in a similar way.

So this is an example of how to extend the classic linear finite element scheme to higher orders, making it more accurate. And actually, that's the road to making these Galerkin-type methods more accurate, which will eventually lead to the spectral element method that we'll discuss next week.
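As a follow-up, here is a minimal sketch, again not from the lecture, that repeats the same symbolic procedure for the quadratic element with collocation points at ξ = 0, 0.5, and 1, and then checks the cardinal property that each N_i is 1 at its own point and 0 at the others. The collocation points and value names are the ones assumed above.

```python
# A minimal sketch (not from the lecture): same procedure for the quadratic
# element, with an extra collocation point at xi = 0.5.
import sympy as sp

xi, u1, u2, u3 = sp.symbols('xi u1 u2 u3')
c1, c2, c3 = sp.symbols('c1 c2 c3')

# Quadratic trial function inside the element
u = c1 + c2 * xi + c3 * xi**2

# Collocation points: left boundary, midpoint, right boundary
points = [0, sp.Rational(1, 2), 1]
values = [u1, u2, u3]
eqs = [sp.Eq(u.subs(xi, p), v) for p, v in zip(points, values)]
sol = sp.solve(eqs, [c1, c2, c3])

# Regroup by u1, u2, u3 to read off the three shape functions
u_xi = sp.expand(u.subs(sol))
N = [u_xi.coeff(v) for v in values]
print(N)  # N1 = 1 - 3*xi + 2*xi**2, N2 = 4*xi - 4*xi**2, N3 = 2*xi**2 - xi

# Cardinal property: each N_i is 1 at its own collocation point, 0 at the others
for Ni in N:
    print([Ni.subs(xi, p) for p in points])
```

Running the final loop prints the rows of the identity matrix, which is exactly the property of the shape functions shown in the graphic.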