Abstract
<jats:p>The widespread use of radiation in industry, medicine, agriculture, and nuclear energy poses significant risks to human health, and the need for effective shielding solutions is therefore steadily increasing. In this context, the linear attenuation coefficient, which determines the effectiveness of a material against gamma rays, is a critical parameter in environments where radiation poses a hazard to the public. In this study, a dataset of linear attenuation coefficient values covering the energy range 0.01–5.0 MeV was compiled from the NIST XCOM database. These data were used to train machine learning models, including support vector regression (SVR), kernel ridge regression, and k-nearest neighbors (k-NN) regression. Model performance was evaluated using mean absolute error, mean squared error, five-fold cross-validation, and the R-squared metric. To test generalization, the trained models were applied to praseodymium hexaboride (PrB₆). The predicted linear attenuation coefficient curves closely matched the reference data for all models (R > 0.97). The best performance was obtained with SVR and kernel ridge regression (R ≈ 0.9885), followed by k-NN (R ≈ 0.9780). Such a data-driven procedure can reduce the high costs of experimental facilities and yield substantial time savings. In conclusion, this study demonstrates that machine learning is a powerful tool for accurate and rapid estimation of gamma radiation attenuation coefficients, and this approach can contribute, both scientifically and industrially, to the design and optimization of radiation-related systems.</jats:p>
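As a minimal illustration of the workflow summarized above, the sketch below trains the three named regressors and reports the stated metrics with five-fold cross-validation. It assumes scikit-learn; the synthetic energy–attenuation data and all hyperparameter values are placeholders (not from the study), standing in for the XCOM-derived dataset, which is not reproduced here.

```python
# Minimal sketch of the modeling pipeline described in the abstract.
# The synthetic data below is a PLACEHOLDER for the NIST XCOM-derived
# dataset (photon energy in MeV -> linear attenuation coefficient).
import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Placeholder data: a smooth decaying curve roughly mimicking mu(E).
rng = np.random.default_rng(0)
E = np.linspace(0.01, 5.0, 500).reshape(-1, 1)   # photon energy, MeV
mu = 0.2 + 1.5 * np.exp(-3.0 * E.ravel()) + rng.normal(0.0, 0.01, E.shape[0])

X_train, X_test, y_train, y_test = train_test_split(E, mu, random_state=0)

# Hyperparameters are illustrative assumptions, not the study's settings.
models = {
    "SVR": SVR(kernel="rbf", C=10.0, epsilon=0.001),
    "KRR": KernelRidge(kernel="rbf", alpha=0.01),
    "kNN": KNeighborsRegressor(n_neighbors=5),
}

for name, model in models.items():
    # Five-fold cross-validated R^2 on the training split.
    cv_r2 = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: CV R2={cv_r2.mean():.4f}  "
          f"MAE={mean_absolute_error(y_test, pred):.4f}  "
          f"MSE={mean_squared_error(y_test, pred):.4f}  "
          f"R2={r2_score(y_test, pred):.4f}")
```

In the study itself, the same evaluation loop would run on the XCOM attenuation values for the materials of interest, with the fitted models then queried at unseen energies (or for an unseen material such as PrB₆) to produce the predicted attenuation curves.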