
Abstract

<jats:p>Breast cancer detection using ultrasound is often hindered by speckle noise, image variability, and reliance on subjective expert interpretation, particularly in women with dense breast tissue. To address these challenges, this study proposes a deep learning-based classification system using a pre-trained VGG16 model with transfer learning, enhanced by median filtering for noise reduction and Grad-CAM for interpretability. The model architecture consists of a frozen convolutional base with a lightweight fully connected classifier designed to minimize overfitting. We introduce a novel integration strategy by merging the BUSI and Mendeley ultrasound datasets to evaluate model generalization across diverse imaging sources. The system was trained and tested on 897 breast ultrasound images, achieving a testing accuracy of 91.1%, sensitivity of 94%, specificity of 88.75%, F1-score of 0.89, and an AUC of 0.97. This study contributes a robust, interpretable, and computationally efficient deep learning pipeline for breast cancer diagnosis that leverages dataset integration, optimized preprocessing, and attention-based visualization. Grad-CAM overlays highlight class-discriminative regions in both benign and malignant cases, supporting clinical explainability. The system's accuracy and transparency suggest its potential for real-world application, particularly in settings where access to expert radiologists is limited.</jats:p>
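The abstract cites median filtering as the speckle-suppression step in preprocessing. The sketch below shows how such a filter operates on a grayscale ultrasound image; it is a minimal pure-NumPy illustration with a hypothetical helper name (`median_filter`), not the authors' actual implementation, which is not given in the abstract (production code would typically use `scipy.ndimage.median_filter`).

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel by the median of its k x k neighborhood.

    Median filtering suppresses impulse-like speckle noise while
    preserving edges better than mean filtering, which is why it is
    a common preprocessing step for ultrasound images.
    """
    pad = k // 2
    # Edge-replicate padding so border pixels have full neighborhoods.
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            # Median over the k x k window centered at (i, j).
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# Example: a single bright speckle on a dark background is removed,
# because 8 of the 9 pixels in its neighborhood are background.
img = np.zeros((5, 5))
img[2, 2] = 255.0
filtered = median_filter(img, k=3)  # filtered[2, 2] == 0.0
```

The nested loops keep the sketch readable; a vectorized or library-based version would be used on full-resolution images.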


Keywords

breast cancer; ultrasound; model
