Hello, and welcome back. We are now going to give an example of Calculus of
variations used in image processing, applying the techniques we just saw in
the previous video.
And we're going to do that with one of the most famous equations in image
processing, in the area of partial differential equations in image
processing, and that's Anisotropic Diffusion.
What is that? If you remember, when we discussed Gaussian filtering and
averaging of images, we said that those were like a diffusion of the pixel
values all across the image. What was happening is that we obtained blurring,
because where there were edges, we were averaging across them. We were letting
the pixel values on the two sides of an edge mix, and that's how we obtained
blur. And that was Isotropic Smoothing.
It was just going all around; it did not matter whether there were boundaries
or not. What Anisotropic Smoothing, or Anisotropic Diffusion, tries to do is to
average pixel values only on the right side of the edge, within the correct
object. So, pixel values on one side of the edge are going to be averaged among
themselves, to basically denoise or enhance the image, and pixel values on the
other side are going to be mixed only among themselves.
How do we do that? We do that with equations, partial differential equations,
as we see here. Before we look at the equations, let us look at the images.
This is an original image. And if we do Gaussian Smoothing or Averaging, we
know that we are going to blur. We blur across the boundaries, because we are
just mixing pixels from different objects, and that's what we see here.
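To make that isotropic case concrete, here is a minimal sketch of such Gaussian
smoothing; this is not code from the lecture, and the synthetic image and the
value of sigma are just illustrative choices.

import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic "two objects" image: a dark and a bright region, plus noise.
img = np.zeros((128, 128))
img[:, 64:] = 1.0
img += 0.1 * np.random.randn(*img.shape)

# Isotropic (Gaussian) smoothing averages across the edge as well,
# so the noise goes down but the boundary at column 64 gets blurred.
smoothed = gaussian_filter(img, sigma=3)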
On the other hand, if we try not to blur, not to mix, if we only mix pixels on
the same side of the boundary, without going across it, then we get a much
sharper result. We still see that we have removed the noise inside the object,
inside the grey matter of this MRI picture of the brain; this is much smoother,
and yet we are still preserving the boundaries very, very nicely.
The difference between the two is here: what we have marked with a red
background is isotropic diffusion, the heat equation. We actually already
talked about it, and we are going to derive it in the next slide using
Calculus of variations.
Here, in the anisotropic case, something is added before we take the second
derivative, which is written as a divergence. I have to remind you that the
Laplacian of the image is the divergence of the gradient, and we already
defined the gradient and the divergence in previous slides.
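In standard notation (the symbol g for the edge-stopping function introduced
below is my own shorthand), the two equations read:

\[ u_t = \Delta u = \operatorname{div}(\nabla u) \qquad \text{(isotropic diffusion, the heat equation)} \]
\[ u_t = \operatorname{div}\!\big( g(|\nabla u|)\, \nabla u \big) \qquad \text{(anisotropic diffusion)} \]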
So, before we take the divergence, as here, we introduce a function in between.
This is a very similar function to the one that we used for active contours.
It is, for example, one over the gradient magnitude; we are going to see that
in the next slide. So, this function is going to say,
wait a second. Don't diffuse if there is a strong
gradient. Stop the diffusion.
And that's why there is diffusion in certain directions when there is not a
strong gradient. And there is no diffusion or reduced
diffusion where there is a strong gradient, meaning across edges.
And that's where we get very sharp boundaries.
We preserve them while at the same time, regularizing inside the objects.
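As a concrete illustration, here is a minimal sketch of this kind of
anisotropic diffusion in the spirit of Perona and Malik; the explicit update
scheme, the stopping function g(s) = 1 / (1 + (s/K)^2), and the parameters K,
dt, and n_iter are my own choices for the example, not taken from the lecture.

import numpy as np

def anisotropic_diffusion(u, n_iter=50, K=0.1, dt=0.2):
    # Explicit scheme for u_t = div( g(|grad u|) * grad u ),
    # with g(s) = 1 / (1 + (s/K)^2) shutting down diffusion across strong gradients.
    u = u.astype(float).copy()
    g = lambda d: 1.0 / (1.0 + (d / K) ** 2)
    for _ in range(n_iter):
        # Differences toward the four neighbors (a crude discrete gradient).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Each difference is weighted by g: flat areas diffuse, strong edges do not.
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# denoised = anisotropic_diffusion(img)  # compare with gaussian_filter(img, sigma=3) above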
And that is exactly what we want. These equations are the result of Calculus
of variations. So, we define a functional of the gradient, as here; this is
what we saw last time, when we were taking functions of u and of the
derivatives of u. The gradient is the derivative of the image, and rho here
takes the role of the function f.
So, we take any function of the magnitude of the gradient.
If we compute the Euler-Lagrange, this is what we get; that is the
Euler-Lagrange of that functional if we follow exactly the formula that we
showed in the previous video. And remember, the partial differential equation
is obtained by setting the deformation of the image, its change in time, equal
to the Euler-Lagrange. Then, when the image does not change anymore, we have
solved the Euler-Lagrange equation and we have obtained a minimizer of this
functional, maybe only a local one, but a minimizer.
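For reference, following the formula from the previous video, the functional
and the resulting flow can be written in the standard form (the notation here
is mine):

\[ E(u) = \int_\Omega \rho\big(|\nabla u|\big)\, dx\, dy, \qquad u_t = \operatorname{div}\!\left( \rho'\big(|\nabla u|\big)\, \frac{\nabla u}{|\nabla u|} \right), \]

and the Euler-Lagrange equation itself is the steady state of this flow,
\(\operatorname{div}\big( \rho'(|\nabla u|)\, \nabla u / |\nabla u| \big) = 0\).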
So, once again, this is just the Euler-Lagrange that we learned in the
previous video. Let us pick a couple of examples of this rho function to show
how nice this is. For example, let us consider rho of a to be a squared.
So here, I am trying to minimize the magnitude of the gradient squared.
And then, for the Euler-Lagrange: if rho of a is a squared, then rho prime of
a is 2a, okay? So, instead of rho prime, I have to put two times whatever is
inside rho, which is the magnitude of the gradient, okay? So basically, the
Euler-Lagrange equation that I get is u sub t, this term here, equal to the
divergence of rho prime, basically 2a, times the gradient over its magnitude.
Let me ignore the two, it is just a scale. So we have a times the gradient
over its magnitude, and a is replaced by the magnitude of the gradient, okay?