Okay, so our next topic is finding an inverse. Very quickly we will show you that, given a matrix, we can define what we mean by an inverse. The inverse is obviously important, and finding an inverse is pretty much the same as solving a linear system. So that's what this is about. A matrix A is invertible if there exists a matrix B such that the product of A and B, and also of B and A, equals the identity matrix. By the identity matrix I we mean the matrix whose diagonal entries are all one and whose other entries are all zero, okay? That's the identity matrix. So this is something we define for each matrix A: if such a matrix B exists, we call it the inverse of A. And we don't really keep calling it B, because B is just a generic name, right? We give it a special name, A inverse. So when you see A inverse, that means the inverse of A, okay? If that's the case, A inverse multiplied by A must be the identity matrix, and A multiplied by A inverse must be the identity matrix. There are several properties of inverses, but the most important one is that for any matrix A, its inverse is unique: there is only one inverse, if it exists. To see this we need a very short proof. Suppose you have a matrix B such that BA is I, and another matrix C such that AC is I. If that's the case, I will show you that B and C are actually the same thing. We know B should be equal to B times I, right? Because multiplying by the identity matrix leaves anything unchanged. So B equals BI, and I equals AC according to our assumption, so B equals B multiplied by AC. By the associativity of matrix multiplication, this equals BA multiplied by C, and BA is I, so it is IC, which is C. In short: B = BI = B(AC) = (BA)C = IC = C. This means that a matrix multiplying A from the left-hand side and a matrix multiplying A from the right-hand side must be identical.
So I understand that this is not really a formal proof that A inverse is unique, okay? But I hope it gives you some intuition to convince you that A inverse is unique if it exists. If you're interested in a formal proof, there are other, more formal linear algebra courses that cover it. What we want to show you here is how to find A inverse, okay? The process is called Gauss-Jordan elimination, and obviously it is based on Gaussian elimination. So let's see how to do this. The idea of Gauss-Jordan elimination is very simple. First, you give me a matrix A; obviously, this matrix must be a square matrix, n by n, okay? I put A in the left-hand columns, and in the right-hand columns I put an identity matrix, all right? Then I do row operations, row operations, row operations: forward elimination, backward substitution, to make A become the identity. While I do that, the right-hand side values are also modified, and whatever we get there at the end is A inverse. As long as you make A become the identity, the identity on the right becomes A inverse. This seems to be magic, so before we explain why, let's see a numeric example. Suppose your A is this one, with rows 1, 4, 2; then 1, 3, 2; then 0, 5, 1, okay? This is your matrix A, and you want to find A inverse from it. You first put A at the left-hand side of the linear system, at the right-hand side you put the identity matrix, and then you start doing row operations. You eliminate the entries below the diagonal in the first column, then the entry below the diagonal in the second column. While you do all these eliminations, when you multiply a row by something, you change the entire row together, and when you subtract one row from another, you do it for every element.
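The procedure just described can be sketched in code. This is a minimal NumPy sketch of Gauss-Jordan elimination on the lecture's example matrix; the function name `gauss_jordan_inverse` and the partial-pivoting detail are my additions, not from the lecture.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert a square matrix by reducing the augmented block [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])  # left block is A, right block is the identity

    for col in range(n):
        # Choose the largest available pivot in this column (partial pivoting).
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular; no inverse exists")
        aug[[col, pivot]] = aug[[pivot, col]]

        # Scale the pivot row so the pivot entry becomes 1.
        aug[col] /= aug[col, col]

        # Zero out this column everywhere else
        # (forward elimination below, backward substitution above, in one loop).
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]

    return aug[:, n:]  # the right block is now A^-1

A = np.array([[1., 4., 2.],
              [1., 3., 2.],
              [0., 5., 1.]])
A_inv = gauss_jordan_inverse(A)
print(A_inv)  # rows: 7 -6 -2 / 1 -1 0 / -5 5 1, as in the lecture
```

Note that every row operation is applied to the full augmented row, which is exactly the "change everything all together" step in the lecture.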
So the right-hand side values keep changing along the way. You keep doing this until you have a triangular system on the left, and then you do backward substitution, again and again, until you get an identity matrix on the left-hand side. If that's the case, the theory tells us that the resulting right-hand side columns give you A inverse. Very quickly, we may verify this ourselves. This is our matrix A, and this is A inverse; no matter whether you put A inverse on the right-hand side or the left-hand side of A, the result must be the identity. For example, the first row of A inverse, which is 7, negative 6, negative 2, multiplied by the first column of A, which is 1, 1, 0, gives 7 minus 6, which is 1: the first entry of the identity matrix. If you multiply that first row by the second column, you get 0, which is exactly the next entry of the identity matrix. You may do all the verifications, and you will see that indeed the product gives you the identity. So this is a numerical example showing how the calculation may be carried out. Now it's time to show you why it is correct. Basically, we want to find A inverse such that A times A inverse becomes the identity. It could also be A inverse times A becoming the identity, but we don't need to worry about that, because if one happens, the other also happens. So I want to find a matrix A inverse such that A times A inverse becomes I. If that happens, pay attention to what it means: this is A, this is A inverse, and the product is the identity matrix, all right? If that's the case, then obviously each column of A inverse gives us three numbers, and these three numbers are used as coefficients in a linear combination of the three columns of A.
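The verification described above can be carried out directly. A small NumPy check, using the A and A inverse from the lecture's example:

```python
import numpy as np

A = np.array([[1., 4., 2.],
              [1., 3., 2.],
              [0., 5., 1.]])
A_inv = np.array([[ 7., -6., -2.],
                  [ 1., -1.,  0.],
                  [-5.,  5.,  1.]])

# Both products must give the 3x3 identity matrix.
print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(A_inv @ A, np.eye(3)))   # True

# The single entry checked in the lecture:
# row 1 of A^-1 times column 1 of A is 7 - 6 - 0 = 1.
print(A_inv[0] @ A[:, 0])                  # 1.0
```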
Consider A and the first column of A inverse. The first column of A inverse contains three values, if this is a three-by-three system, right? Pretty much, what you are doing is using these three values as coefficients in a linear combination of the three columns of A to get a result. What's the result? The first column of the identity matrix. So if you call the first column of A inverse x1, you are actually solving a particular linear system: A times x1, that is, A times the first column of A inverse, is going to give you (1, 0, 0), which we denote as e1, okay? Similarly, the same thing happens for column two: column two is another three values used as coefficients in a linear combination, and the result should be the second column of the identity matrix, which is e2, all right? The same thing happens for every column. That somehow means, if we want to solve for A inverse, we may run one Gaussian elimination to get x1, another Gaussian elimination to get x2, and so on, and so on: n separate Gaussian eliminations to find x1, x2, up to xn. We solve one system, solve another system, and solve the last system; whenever we reduce A to I, your e_i becomes your x_i, just like in Gaussian elimination. But the thing is, if you realize that this is what we need to do, then you will also realize that the row exchanges and row operations for all these systems are identical, because the left-hand side matrix is always A. So what you may really do is put A at the left-hand side, and then, column by column, put e1, e2, up to en at your right-hand side and solve all the systems at once. If you do that and make the left part the identity, the right-hand side columns must be x1, x2, up to xn, and together they give you A inverse. So that's the idea.
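This column-by-column view can be checked directly. A small NumPy sketch: solve A x_i = e_i separately for each basis vector, stack the solutions as columns, and confirm it matches solving all right-hand sides at once, which is what the augmented [A | I] tableau does.

```python
import numpy as np

A = np.array([[1., 4., 2.],
              [1., 3., 2.],
              [0., 5., 1.]])
n = A.shape[0]
I = np.eye(n)

# n separate systems: A x_i = e_i, one per column of the identity.
columns = [np.linalg.solve(A, I[:, i]) for i in range(n)]
A_inv = np.column_stack(columns)

# Equivalent one-shot version: all right-hand sides at once,
# sharing the single sequence of row operations on A.
assert np.allclose(A_inv, np.linalg.solve(A, I))
print(A_inv)
```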
That's why Gauss-Jordan elimination works: it's pretty much a collection, a combination, of several almost identical Gaussian eliminations. Some remarks. If we agree that Gauss-Jordan elimination is based on Gaussian elimination, then when you want to do Gauss-Jordan elimination on A, you basically do Gaussian elimination on A one time, carrying all the right-hand side columns along. That's why the complexity of finding an inverse with Gauss-Jordan elimination is also proportional to n to the power of 3. So that's the complexity analysis for finding an inverse using Gauss-Jordan elimination: big O of n to the power of 3. Also, because we are talking about Gauss-Jordan elimination, it's possible that our square matrix A cannot be reduced to an identity matrix during the process. If A can become an identity matrix, we know A is non-singular, and if it is non-singular, that means we can find an inverse: non-singular means invertible. On the other hand, if A cannot be made into an identity matrix, then A, and the system, is singular, and singular means non-invertible, okay? So non-singular means invertible, and singular means non-invertible. These concepts now become equivalent, and you have a one-to-one mapping, because everything depends on Gaussian elimination.
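The singular case can be detected during elimination itself. A minimal sketch, assuming NumPy; the function name `is_singular` and the tolerance are mine. If some pivot column offers no nonzero pivot even after row exchanges, A cannot be reduced to the identity, so it is singular and has no inverse.

```python
import numpy as np

def is_singular(A, tol=1e-12):
    """Forward elimination with partial pivoting; a (near-)zero pivot
    means A cannot be reduced to the identity, i.e. A is singular."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    for col in range(n):
        pivot = col + np.argmax(np.abs(U[col:, col]))
        if abs(U[pivot, col]) < tol:
            return True            # elimination breaks down: no usable pivot
        U[[col, pivot]] = U[[pivot, col]]
        # Subtract multiples of the pivot row from the rows below it.
        U[col + 1:] -= np.outer(U[col + 1:, col] / U[col, col], U[col])
    return False

print(is_singular(np.array([[1., 4., 2.],
                            [1., 3., 2.],
                            [0., 5., 1.]])))   # False -> non-singular, invertible
print(is_singular(np.array([[1., 2.],
                            [2., 4.]])))       # True  -> singular, non-invertible
```

The second matrix has its second row equal to twice the first, so elimination produces a zero row and the reduction to the identity is impossible.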