In this video, we're going to extend the concept of the Jacobian from vectors up to matrices, which will allow us to describe the rates of change of a vector-valued function. However, before we do this, we're first going to recap by applying what we've learned so far about Jacobians to another simple system. If we consider the function f(x, y) = e^(-(x^2 + y^2)), then using our knowledge of partial differentiation, it's fairly straightforward to find its Jacobian vector. So, we're now going to reverse our approach from last time and start by looking at the vector field of Jacobians, then see if we can work out how the function must look. Let's start by finding the Jacobian at a few specific points. Firstly, the point (-1, 1), which we'll highlight on our axes in pink. Substituting these coordinates into our Jacobian expression and simplifying, we get a vector pointing directly towards the origin. Next, if we move further out to the point (2, 2) and find the Jacobian, we get a much smaller vector, but pointing once again directly at the origin. Lastly, before I reveal the whole vector field, let's look at what's going on at the origin itself. Substituting in the point (0, 0) returns the zero vector, suggesting that the function is flat at this point, which must mean one of three things: this point is either a maximum, a minimum, or something called a saddle, which we'll cover later in this module. However, if we now reveal the rest of the Jacobian vector field, it becomes clear that the origin must be the maximum of this system. Let's now move back to the colour map representation of this function, where the brighter regions represent higher values of f. Finally, we can remove the vector field and observe the function in 3D, which I hope matches up with your expectations.
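If you'd like to check these three points for yourself, here is a minimal sketch in Python (not part of the video; the function name `jacobian` is just for illustration). It uses the partial derivatives of f(x, y) = e^(-(x^2 + y^2)), which you can find by differentiating with respect to x and y in turn:

```python
import math

def jacobian(x, y):
    """Analytic Jacobian (row vector of partial derivatives) of
    f(x, y) = exp(-(x^2 + y^2)).
    df/dx = -2x * f and df/dy = -2y * f by the chain rule."""
    f = math.exp(-(x**2 + y**2))
    return (-2 * x * f, -2 * y * f)

print(jacobian(-1, 1))  # equal and opposite components: points at the origin
print(jacobian(2, 2))   # much smaller vector, still pointing at the origin
print(jacobian(0, 0))   # the zero vector: the function is flat here
```

Notice that the components always have the opposite sign to the coordinates, which is exactly why every arrow in the vector field points back towards the origin.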
Next, we're going to build a Jacobian matrix, which describes functions that take a vector as an input but, unlike our previous examples, also give a vector as the output. If we consider the two functions u(x, y) = x - 2y and v(x, y) = 3y - 2x, we can think of these as relating two vector spaces: one containing vectors with coordinates in u and v, and the other with coordinates in x and y. Each point in x-y space has a corresponding location in u-v space. So, as we move around x-y space, we would expect our corresponding path in u-v space to be quite different, and so it is. Those of you who are familiar with linear algebra may have guessed already where this is going. We can, of course, make separate row vector Jacobians for u and v. However, as we are considering u and v to be components of a single vector, it makes more sense to extend our Jacobian by stacking these row vectors as the rows of a matrix. So, now that we have the structure and motivation for building a Jacobian matrix for vector-valued functions, let's apply this to our example functions and see what we get. We have u(x, y) = x - 2y, and we've also got v(x, y) = 3y - 2x. We can build the Jacobian from these two by simply opening some big brackets and writing du/dx, du/dy in the top row and dv/dx, dv/dy in the bottom row. So, let's just work through it: du/dx is just 1, du/dy is -2, dv/dx is -2 again, and dv/dy is 3. Our Jacobian matrix no longer contains any variables, which is what we should expect when we consider that both u and v are clearly linear functions of x and y, so the gradient must be constant everywhere. Also, this matrix is just the linear transformation from x-y space to u-v space. So, if we were to apply it to the x-y vector (2, 3), we'd get u = 2 - 6 = -4, and v = -4 + 9 = 5.
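The worked example above can be sketched in a few lines of Python (again, not part of the video; the helper name `apply` is just for illustration). Because u and v are linear, the Jacobian matrix is constant, and multiplying it by an (x, y) vector gives the same answer as plugging that point into u and v directly:

```python
# Constant Jacobian of the linear map u = x - 2y, v = 3y - 2x
J = [[1, -2],   # du/dx, du/dy
     [-2, 3]]   # dv/dx, dv/dy

def apply(J, vec):
    """Multiply a 2x2 matrix by an (x, y) vector."""
    x, y = vec
    return (J[0][0] * x + J[0][1] * y,
            J[1][0] * x + J[1][1] * y)

print(apply(J, (2, 3)))  # (-4, 5), matching u(2, 3) = -4 and v(2, 3) = 5
```

This is the sense in which, for a linear function, the Jacobian matrix simply *is* the transformation from x-y space to u-v space.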
Now, this is all well and good, but of course many of the functions that you'll be confronted with will be highly nonlinear, and generally much more complicated than the simple linear example we've just looked at here. However, often they may still be smooth, which means that if we zoom in close enough, we can consider each little region of space to be approximately linear. Therefore, by adding up the contributions from the Jacobian determinant at each point in space, we can still calculate the change in the size of a region after transformation. A classic example of this occurs when transforming between the Cartesian and polar coordinate systems. If we have a vector expressed in terms of a radius r and the angle theta up from the x-axis, but we'd like it expressed in terms of x and y instead, we can write the expressions x = r cos(theta) and y = r sin(theta) just by thinking about trigonometry. Now, we can build the Jacobian matrix and take its determinant. The fact that the result is simply the radius r, and not a function of theta, tells us that as we move along r, away from the origin, small regions of space will scale as a function of r, which I hope will make a lot of sense to you when we look at our little animation here. That's all for this video. I hope you will now be able to build Jacobian vectors and matrices for yourself with confidence in the exercises, and more importantly, that your intuition on the meaning of this concept is starting to develop. See you next time.
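As a final check on the polar-coordinate example, here is a small numerical sketch in Python (not part of the video; the function name `polar_jacobian_det` is just for illustration). It builds the 2x2 Jacobian of x = r cos(theta), y = r sin(theta) from the four partial derivatives and takes its determinant:

```python
import math

def polar_jacobian_det(r, theta):
    """Determinant of the Jacobian of x = r*cos(theta), y = r*sin(theta)."""
    dxdr, dxdt = math.cos(theta), -r * math.sin(theta)
    dydr, dydt = math.sin(theta), r * math.cos(theta)
    # r*cos^2(theta) + r*sin^2(theta), which simplifies to r
    return dxdr * dydt - dxdt * dydr

# The determinant equals r for any angle (up to floating-point error),
# so small regions scale with distance from the origin alone.
print(polar_jacobian_det(2.0, 0.7))
print(polar_jacobian_det(2.0, 2.9))
```

Trying a few different angles at the same radius makes the key point concrete: theta drops out entirely, and only r controls how a small region of space is scaled by the transformation.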