An Automatic Segmentation of Breast Ultrasound Images Using U-Net Model
Abstract
Medical imaging modalities such as ultrasound provide a clear visual representation of organ structure and function. However, interpreting these images is difficult and time-consuming for radiologists, which delays diagnosis. Several automated methods for detecting and segmenting breast lesions have been developed. Nevertheless, owing to ultrasound artifacts and the complexity of lesion shapes and locations, segmenting lesions or tumors from breast ultrasound remains an open problem. Deep learning has produced a breakthrough in medical image segmentation, and U-Net is the most notable deep network in this regard. Despite its strong performance in medical image segmentation, the traditional U-Net design lacks precision on complex datasets. To reduce redundant texture detail and avoid overfitting, we propose extending the U-Net architecture by adding a dropout layer after each max-pooling layer. Batch-normalization layers and a binary cross-entropy loss function were used to preserve breast tumor texture and edge features while reducing computational cost. We used a breast ultrasound dataset of 780 images containing normal, benign, and malignant cases. Our model showed superior segmentation results for breast ultrasound images compared to previous deep neural networks. Quantitative measures, accuracy and intersection over union (IoU), were used to evaluate the proposed model's effectiveness, yielding 99.34% accuracy and 99.60% IoU. These results suggest that the proposed augmented U-Net model has high clinical diagnostic potential, since it can accurately segment breast lesions.
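The sketch below illustrates the kind of modification the abstract describes: a small U-Net-style network in Keras with batch normalization after each convolution, a dropout layer after each max-pooling step, a sigmoid output for the binary lesion mask, and binary cross-entropy as the loss, evaluated with accuracy and IoU. This is not the authors' code; the depth, filter counts, input size, and dropout rate are illustrative assumptions.

```python
# Minimal sketch (assumed hyperparameters, not the authors' implementation):
# U-Net encoder-decoder with BatchNormalization after each convolution and
# Dropout after each max-pooling layer, trained with binary cross-entropy.
import tensorflow as tf
from tensorflow.keras import layers, models


def conv_block(x, filters):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return x


def build_unet(input_shape=(128, 128, 1), dropout_rate=0.3):
    inputs = layers.Input(input_shape)

    # Encoder: a dropout layer follows each max-pooling layer.
    skips, x = [], inputs
    for filters in (32, 64, 128):
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
        x = layers.Dropout(dropout_rate)(x)

    # Bottleneck
    x = conv_block(x, 256)

    # Decoder: upsample and concatenate the matching encoder features.
    for filters, skip in zip((128, 64, 32), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])
        x = conv_block(x, filters)

    # Sigmoid output gives a per-pixel probability of lesion tissue.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=["accuracy", tf.keras.metrics.BinaryIoU(threshold=0.5)],
    )
    return model


model = build_unet()
model.summary()
```

With this setup, training would proceed as usual via `model.fit(images, masks, ...)` on image/mask pairs resized to the chosen input shape; the reported accuracy and IoU correspond to the two compiled metrics.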
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.