Artificial Intelligence for Breast Cancer Detection: Building a Classifier.

In this lesson, we will discuss the objectives of artificial intelligence for breast cancer detection and understand the role that AI can and should play. In Module 2, we discussed the AI processing paradigm; in this lesson, we will walk through those processing elements specifically for the breast cancer detection application.

The objectives of AI for breast cancer detection are to provide efficient computation for computer-aided diagnosis and to generate consistent classification results. The idea of replacing the radiologist with an AI system is far-fetched. However, significant progress has been made in AI technology in recent years. An AI system can serve the radiologist as a value-added tool, generating classification results with a high detection rate and a low false alarm rate. The role of AI in breast cancer screening should be that of a second reader.

This processing diagram should look familiar to you. We will discuss each processing element as it applies to breast cancer screening. The sensor modality is X-ray mammography. In this application, the processing focuses on classification within a single image. Based on the single-image results, the radiologists then fuse the information and make the high-level decision on the exam. For example, the radiologists will look at the symmetry between the left and right breasts, and they will compare results across the slices of 3D tomosynthesis imagery. As we have discussed before, the data conditioning and feature extraction processes are identical between modeling in training and object detection in test. Algorithm performance assessment is based on sequestered images. I want to emphasize that the ultimate performance assessment of AI for breast cancer detection needs to be done through clinical trials.

In the image acquisition step, the system obtains digital mammograms at image resolutions appropriate for diagnosis. As we mentioned in Lesson 2 of Module 2, the image resolution has a direct impact on classification performance. In this acquisition step, we group images by detection category, such as microcalcification, mass, cyst, normal, and so on. Images are labeled for algorithm training and performance assessment. In the figures shown here, the mediolateral oblique (MLO) view of the breast is shown on the left and the craniocaudal (CC) view is on the right.

In the data conditioning step, we perform the object segmentation process to separate objects of interest from the background clutter. We also perform the focus of attention process, which selects pixel locations for further downstream processing. The objective of data conditioning is to prepare the data so that feature extraction will measure the characteristics of the objects of interest. The objects of interest include both the cancerous and the normal types. We will discuss object segmentation and focus of attention in the following slides.

The objective of object segmentation is to delineate the region for further processing. In doing so, the number of pixels to be processed is reduced and the number of false positives is minimized. For the image of the craniocaudal view, the processing detects the edges of the breast and defines the region of interest. For an image of the mediolateral oblique view, we also detect the boundary between the chest muscle and the breast, and the algorithm excludes the region between that boundary and the image edges. As shown in the figure, once the boundary is defined, the object segmentation is complete. A minimal segmentation sketch follows below.
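The lesson does not specify the segmentation at the code level, so the following is only a rough illustration: it uses a simple global intensity threshold to separate breast tissue from the near-black mammogram background, assuming the image is a 2-D NumPy array of intensities. The threshold value and the synthetic test image are assumptions for illustration, not the lecture's actual algorithm.

```python
import numpy as np

def segment_breast(image, threshold=0.1):
    """Return a boolean mask of the breast region.

    A rough sketch: mammogram backgrounds are nearly black, so a global
    intensity threshold separates tissue from background. A real system
    would add edge detection and, for the MLO view, a second boundary
    to exclude the chest-muscle region.
    """
    return image > threshold  # tissue pixels are brighter than background

# Synthetic example: a dark 64x64 "image" with a bright quarter-disk "breast"
# anchored at one corner (an assumption standing in for a real mammogram).
rng = np.random.default_rng(0)
img = 0.02 * rng.random((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy**2 + xx**2) < 48**2] += 0.5

mask = segment_breast(img)
print(f"pixels kept for processing: {mask.sum()} of {mask.size}")
```

As the lesson notes, the payoff of this step is that downstream processing only touches pixels inside the mask.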
The focus of attention needs to run, in general, on every other pixel in a segmented image. Therefore, the processing has to be simple yet effective in order to be computationally efficient. The FOA selects potential detections that have high intensity or high contrast. The goal is to achieve a high detection rate, because any pixel discarded at this stage will not be recovered by the downstream processes; a moderate false positive rate at this stage is acceptable. What the FOA aims to accomplish is a data rate reduction of about a factor of 1,000. In this figure, the panel on the left shows the segmented craniocaudal mammogram; the blue region will not be processed. The panel on the right shows the output of the FOA; the yellow crosshairs mark the potential detections for further processing. A minimal sketch of such an intensity and contrast screen appears at the end of this lesson.

As shown in the figure on the right, feature extraction processes a sub-image array centered at the location selected by the FOA. Two approaches are commonly taken for feature extraction. One is to design features based on radiologists' judgment, such as intensity, contrast, size, shape, edges, and texture. These features take into account the statistical variation due to the object and its environment. In recent years, deep neural networks have been employed to perform feature extraction. This is accomplished by presenting labeled images to the network in supervised training; the feature extraction is then embodied in the network parameters that result from the training. A sketch of the hand-crafted approach also appears at the end of this lesson.

In this lesson, we have discussed that AI tools should be value added for radiologists. The algorithm focuses on the individual image, and the radiologists make the final call on the exam based on the classification results from the individual images. We explained the objectives of object segmentation and focus of attention, and we introduced two common approaches to feature extraction. In the next two lessons, we will discuss the modeling and classification elements of the AI process. This concludes the lesson on building a classifier.
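Here is the promised sketch of the focus-of-attention screen: it evaluates every other pixel inside the segmented region, scores each by contrast against a local neighborhood, and keeps only the top-scoring candidates. The window size, stride, and keep fraction are illustrative assumptions, not values from the lesson; the keep fraction of 0.1% corresponds to the roughly 1,000x data rate reduction mentioned above.

```python
import numpy as np

def focus_of_attention(image, mask, stride=2, win=5, keep_frac=0.001):
    """Return (row, col) candidates for downstream feature extraction.

    Sketch of an FOA: score every other pixel (stride=2) inside the
    segmented region by its contrast against a local window, then keep
    roughly the top 0.1% -- about a 1,000x data rate reduction.
    """
    half = win // 2
    candidates = []
    for r in range(half, image.shape[0] - half, stride):
        for c in range(half, image.shape[1] - half, stride):
            if not mask[r, c]:
                continue  # skip background pixels excluded by segmentation
            window = image[r - half:r + half + 1, c - half:c + half + 1]
            contrast = image[r, c] - window.mean()
            candidates.append((contrast, r, c))
    candidates.sort(reverse=True)               # highest contrast first
    n_keep = max(1, int(keep_frac * image.size))
    return [(r, c) for _, r, c in candidates[:n_keep]]

# Toy usage on a random image with an all-pass mask (assumptions for demo).
rng = np.random.default_rng(1)
img = rng.random((128, 128))
hits = focus_of_attention(img, np.ones_like(img, dtype=bool))
print(f"{len(hits)} candidates kept from {img.size} pixels")
```

Note how the design matches the lesson's requirement: the per-pixel work is deliberately simple so the screen stays computationally cheap, and the threshold is set by a keep fraction rather than a hard score so the detection rate stays high.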
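Similarly, the hand-crafted feature approach can be sketched as computing a fixed-length descriptor from the sub-image chip centered on each FOA detection. The specific features below (mean intensity, contrast, a crude size estimate, and gradient energy as a texture proxy) are illustrative stand-ins for the radiologist-informed features the lesson mentions, not its actual feature set.

```python
import numpy as np

def extract_features(image, center, half=16):
    """Compute a small hand-crafted feature vector for one FOA detection.

    The chip is a square sub-image centered on the detection; the caller
    is assumed to keep detections far enough from the image edges for
    the chip to fit. Each feature is a simple proxy for a
    radiologist-informed cue: intensity, contrast, size, and texture.
    """
    r, c = center
    chip = image[r - half:r + half + 1, c - half:c + half + 1]
    grow, gcol = np.gradient(chip.astype(float))
    bright = chip > chip.mean() + chip.std()    # crude "object" pixels
    return np.array([
        chip.mean(),                            # intensity
        chip.max() - chip.mean(),               # contrast
        bright.sum() / chip.size,               # size (fraction of chip)
        np.hypot(grow, gcol).mean(),            # texture via edge strength
    ])

# Toy usage on a random image (an assumption standing in for a mammogram).
rng = np.random.default_rng(2)
img = rng.random((64, 64))
print(extract_features(img, (32, 32)))
```

In the deep-learning approach described above, a trained convolutional network would replace this function, with the learned network parameters playing the role of the feature definitions.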