[MUSIC]

Hopefully, you will all now have a reasonable feeling for

what an eigen-problem looks like geometrically.

So in this video, we're going to formalise this concept into an algebraic expression,

which will allow us to calculate eigenvalues and

eigenvectors whenever they exist.

Once you've understood this method, we'll be in a good position to see

why you should be glad that computers can do this for you.

If we consider a transformation A,

what we have seen is that if it has eigenvectors at all, then these are simply

the vectors which stay on the same span following a transformation.

They can change length and even point in an opposite direction entirely.

But if they remain in the same span, they are eigenvectors.

If we call our eigenvector x, then we can write the following expression:

Ax = lambda x.

Where, on the left hand side, we're applying the transformation matrix A to a vector x.

And on the right-hand side,

we are simply stretching the vector x by some scalar factor lambda.

So lambda is just some number.

We're trying to find values of x that make the two sides equal.

Another way of saying this is that for our eigenvectors,

having A applied to them just scales their length or does nothing at all,

which is the same as scaling the length by a factor of 1.
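This relationship is easy to check numerically. The following is a minimal sketch using NumPy, with an example matrix A and eigenvector x chosen purely for illustration:

```python
import numpy as np

# An example 2x2 transformation (values chosen for illustration)
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

x = np.array([1.0, 0.0])   # a candidate eigenvector of A
lam = 2.0                  # its corresponding eigenvalue

# Left-hand side: apply the transformation A to x
lhs = A @ x
# Right-hand side: simply scale x by the scalar lambda
rhs = lam * x

# The two sides agree, so x stays on its own span under A
print(np.allclose(lhs, rhs))
```

Here applying A to x gives the same result as multiplying x by 2, so x is an eigenvector with eigenvalue 2.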

So in this equation, A is an n-dimensional transform,

meaning it must be an n by n square matrix.

The eigenvector x must therefore be an n-dimensional vector.

To help us find the solutions to this expression,

we can rewrite it by putting all the terms on one side and then factorising.

So

(A - lambda I) x = 0.

If you're wondering where the I term came from,

it's just an n by n identity matrix, which means it's a matrix the same size as A but

with ones along the leading diagonal and zeros everywhere else.

We didn't need this in the first expression we wrote,

as multiplying vectors by scalars is defined.

However, subtracting scalars from matrices is not defined, so

the I just tidies up the maths, without changing the meaning.
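To see why the identity matrix is needed, here is a small sketch (again with an illustrative example matrix): subtracting lambda times the identity from A is a well-defined matrix operation, whereas "A minus a scalar" is not what we mean.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 2.0

I = np.eye(2)        # 2x2 identity: ones on the leading diagonal
M = A - lam * I      # the bracketed term (A - lambda I)

print(M)
# Caution: in NumPy, writing A - lam would subtract lam from EVERY
# entry of A (by broadcasting), which is a different operation from
# subtracting lambda times the identity matrix.
```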

Now that we have this expression we can see that for the left-hand side to

equal 0, either the contents of the brackets must be 0 or the vector x is 0.

So we're actually not interested in the case where the vector x is 0.

That's when it has no length or direction and is what we call a trivial solution.

Instead, we must find when the term in the brackets is 0.
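Continuing the illustrative example from above, we can check that the eigenvector is exactly such a non-trivial solution: it is non-zero, yet the bracketed term sends it to the zero vector.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
lam = 2.0
x = np.array([1.0, 0.0])   # a non-zero (non-trivial) vector

# Applying (A - lambda I) to the eigenvector gives the zero vector
residual = (A - lam * np.eye(2)) @ x
print(np.allclose(residual, 0))
```

The zero vector x = 0 would also satisfy the equation, but it carries no geometric information, which is why we discard it as the trivial solution.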