Welcome to the second video in the module on medical image analysis. In this video, the objective is that you get an idea of how to transform, how to modify, the information contained in a digital image. You already have an idea of what a digital image is and what its main characteristics are. We will use that knowledge to modify the information contained in the image and obtain a filtered image, that is, to process the image with a particular purpose. For that, first I am going to define what digital image processing is. Then we will present a taxonomy of the different operations inside the image processing area. And at the end of the video, we will describe three different kinds of operations: operations based on the histogram, filtering in the spatial domain, and filtering in the frequency domain.

In the literature, you can find many definitions of digital image processing. In a very simple way, we can understand image processing as any algorithm that produces an output, also called the filtered image, just by modifying the information contained in the input, the original image. The information that we can modify includes the grey values, the shape of the objects, the relationships between objects, and so on. In this slide, you have a taxonomy of the main operations that are included in the area of digital image processing. As you can see, there are five main groups: image enhancement, image correction, image analysis, image compression and image reconstruction. In this course, we are only talking about the first group.

I hope you remember what the histogram of a digital image is and what kind of information it represents. As you can see, on the left of the slide, we have an original image with its histogram. From the information that the histogram provides, we know that most of the pixels in this image have high grey values; in this sense, we say that it is a light image. If, for some reason, we want to obtain a darker version of this image, we can simply shift the histogram to the left. Usually, however, the operations that modify the histogram of an image have the objective of increasing the contrast of the original image. Inside this group of operations, the best known is histogram equalization. The idea of histogram equalization is to balance the number of pixels with respect to the grey values in the image; that is, the final objective is to have the same number of pixels for each grey value. This is not completely possible, because in that case some information of the image could be lost, but we can obtain an approximation. As you can see in the result of performing this operation, the contrast of the original image has been improved.

Now we define filtering in the spatial domain. The spatial domain is the domain in which we can identify the objects; when we talk about the coordinates x and y, they are spatial variables. Filtering in the spatial domain is then defined as the set of operations that are performed directly on the grey levels of the pixels. This kind of operation has two main objectives. One is smoothing the image, that is, eliminating noise in most cases. The other is to detect the edges inside the image. From a mathematical point of view, the way to model this kind of filtering is with the mathematical operation called convolution: we combine the original image with a function that is called a filter, a spatial function, or a mask; we can use different names.
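To make the idea of histogram equalization more concrete, here is a minimal sketch in Python with NumPy (my own illustration, not code from the course): it uses the cumulative histogram as a look-up table, which is the classical way to approximate a flat histogram on a discrete 8-bit image.

```python
import numpy as np

def equalize_histogram(image):
    """Approximate histogram equalization for an 8-bit greyscale image (integer array)."""
    # Count how many pixels fall in each of the 256 grey levels.
    hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
    # Cumulative distribution of the grey values.
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    # Look-up table that spreads the cumulative counts over 0..255,
    # so the equalized histogram is as flat as the discrete data allows.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    # Remap every pixel through the look-up table.
    return lut[image]
```

Libraries such as OpenCV offer the same operation directly (for example cv2.equalizeHist), but the sketch shows the underlying idea of remapping grey values through the cumulative histogram.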
The result of the convolution is the filtered image. In the next slides we show particular examples of filtering in the spatial domain. In the case of smoothing, the filter used is a matrix like the one you can see in this slide, in this case five by five, with ones in all positions. In general, this function is defined by a small mask or matrix, three by three, five by five or seven by seven, but it is always smaller than the original image. To convolve this function, this matrix, with the original image, we move the mask through all the pixels of the image. When we place the mask on a particular pixel of the image, each position in the matrix is multiplied by the corresponding pixel in the image and all these products are summed. What we have at the end is an average value of the neighbourhood of the anchor pixel. When we do that for all pixels of the image, the filtered image, the output image, is a blurred version of the original one.

However, when we want the edges of the image, the kind of filter that we need to use in the spatial domain is a matrix like the one you can see in this slide. In this case, the anchor pixel is weighted by 24 and the rest of the positions are multiplied by minus one. The result of the convolution for each pixel is that we obtain the differences between the pixels in the neighbourhood of the anchor pixel. So when we do the complete convolution, the complete operation for all pixels of the image, the result is the one you can see in this slide: the background of the image has disappeared and now we only have the edges of the image. (A small code sketch of both spatial filters is given at the end of this segment.)

Now we go to the last point of this video, which is the definition of filtering in the frequency domain. Before that, let me define the Fourier Transform. The Fourier Transform is a mathematical function that transports the information of an image from the spatial domain to another domain, less intuitive for us, that is called the Frequency Domain or Fourier Domain. The spatial frequency is not an intuitive concept. At this point, I ask you to believe that the background of an image is represented at the origin of the Fourier Domain, and that the details of the image in the Spatial Domain are far from this origin. I am sure that when we define the filtering in this domain, this concept will become clearer for you.

What we saw in the previous slide is the definition of the Fourier Transform in its continuous form. However, if we want to compute this function on a computer, we need to work with a discrete version of it, called the Discrete Fourier Transform. You can see the first equation, which defines how a digital image is transformed into the Fourier Domain. As you can see, the Fourier Domain is also a (u, v) space, but in this case the variables are not spatial variables, they are frequency variables. One of the principal properties of the Fourier Transform, and also of the Discrete Fourier Transform, is that this transform is invertible. What does it mean? It means that once we have the information of our image in the Fourier Domain, it is possible to come back to the Spatial Domain. You also have in this slide the equation that allows us to do this inverse operation. Why is this property so important? Because filtering in the frequency domain works by modifying the information in the Fourier Domain.
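Going back for a moment to the spatial filtering described earlier in this segment, here is a minimal sketch of both filters in Python (my own illustration, assuming SciPy is available): the 5x5 averaging mask produces the blurred version, and the 5x5 mask with 24 at the centre and minus one elsewhere keeps only the edges.

```python
import numpy as np
from scipy.ndimage import convolve

# 5x5 averaging mask: ones in all positions, normalised so the output is
# the mean of the 25-pixel neighbourhood of the anchor pixel (smoothing).
smoothing_mask = np.ones((5, 5)) / 25.0

# 5x5 edge mask: anchor pixel weighted by 24, neighbours by -1, so flat
# regions give 0 and only grey-level differences (edges) survive.
edge_mask = -np.ones((5, 5))
edge_mask[2, 2] = 24.0

def spatial_filter(image, mask):
    """Convolve a greyscale image with a small spatial mask."""
    return convolve(image.astype(float), mask, mode='reflect')

# blurred = spatial_filter(img, smoothing_mask)
# edges   = spatial_filter(img, edge_mask)
```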
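The equations shown on the slide are not reproduced in this transcript; in their standard form (the slide may use a slightly different normalisation), the Discrete Fourier Transform of an M by N image f(x, y) and its inverse read:

```latex
% 2D Discrete Fourier Transform of an M x N image f(x, y)
F(u, v) = \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} f(x, y)\,
          e^{-j 2\pi \left( \frac{u x}{M} + \frac{v y}{N} \right)}

% Inverse Discrete Fourier Transform (back to the spatial domain)
f(x, y) = \frac{1}{M N} \sum_{u=0}^{M-1} \sum_{v=0}^{N-1} F(u, v)\,
          e^{\,j 2\pi \left( \frac{u x}{M} + \frac{v y}{N} \right)}
```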
Only thanks to the inverse transform is it possible to go back to the Spatial Domain and obtain the filtered image. As I mentioned before, filtering in the frequency domain is defined as the set of operations that are performed on the spatial frequencies of an image, and also in this case the two main objectives are the smoothing of the image and the detection of its edges.

You can see in this slide a scheme of how filtering in the frequency domain is done. We start with an original image and obtain its Discrete Fourier Transform, so we have the information of the image represented in the Fourier Domain. We define a filter in this space, usually called the transfer function of the filter, and we multiply these two functions. The result is the Fourier Transform of the filtered image. However, we cannot identify objects, colours or textures in this space, so we need to apply the Inverse Discrete Fourier Transform in order to go back to the Spatial Domain and obtain the filtered image. (A small code sketch of this scheme is given at the end of this transcript.)

The question now is: how can we define the filter in the frequency domain? In fact, the definition of the filter in the frequency domain is more intuitive than the definition of the filter in the Spatial Domain. The first figure in this slide shows a filter that only lets the low frequencies pass, the frequencies that are close to the origin. This is why this kind of filter is called a low-pass filter; a typical example is the Gaussian filter. I am going to show you in the next slide the effect of this filter: filtering an image in the frequency domain with this kind of filter is equivalent to the averaging filter that we defined in the Spatial Domain. Here in this slide, you can see first the Fourier Transform of the original image and the transfer function of the filter that we are using, a low-pass filter. We multiply both functions and we perform the Inverse Discrete Fourier Transform. The result, as you can see, is a blurred version of the original image, as in the case of the Spatial Domain.

When we want to extract the edges by filtering an image in the Fourier Domain, we use the filter complementary to the low-pass filter. In this case the name is high-pass filter, because the low frequencies are eliminated and the only frequencies that pass through the filter are the high frequencies. We see in the next slide that the effect of this high-pass filter is equivalent to the spatial filter that we used to detect edges. Here, in an analogous way to what we saw for the low-pass filter, we have the Fourier Transform of the original image, multiplied by the transfer function of the high-pass filter. We perform again the Inverse Discrete Fourier Transform and we obtain the filtered image. The filtered image in this case only presents the information relative to the edges of the original image. Remember that I said that the high spatial frequencies are represented far from the origin.

I hope you got a good idea of what processing, or filtering, a digital image is. You can consolidate all this content by consulting these references. See you again in the next video.
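Finally, as an illustration of the frequency-domain scheme described above, here is a minimal sketch with NumPy (my own illustration; the cutoff value and the ideal, sharp-edged transfer function are arbitrary choices for simplicity, whereas the slides use a smoother, Gaussian-like filter). The scheme is the one from the video: Discrete Fourier Transform, multiplication by the transfer function, and Inverse Discrete Fourier Transform.

```python
import numpy as np

def frequency_filter(image, cutoff=30, high_pass=False):
    """Filter a greyscale image in the Fourier domain with an ideal
    circular low-pass (or complementary high-pass) transfer function."""
    # DFT of the image, with the origin of the Fourier domain moved to the centre.
    f = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    # Distance of every (u, v) frequency from the origin of the Fourier domain.
    dist = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    transfer = (dist <= cutoff).astype(float)   # 1 near the origin, 0 far away
    if high_pass:
        transfer = 1.0 - transfer                # complementary filter
    filtered = f * transfer                      # multiply in the Fourier domain
    # Inverse DFT to come back to the spatial domain.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))

# smoothed = frequency_filter(img, cutoff=30, high_pass=False)  # blurred version
# edges    = frequency_filter(img, cutoff=30, high_pass=True)   # edges only
```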