COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, vol. 129, pp. 102727, 2026 (SCI-Expanded, Scopus)
Automatic segmentation of brain tumors from Magnetic Resonance (MR) images is crucial for diagnosis and for developing personalized treatment plans. Nevertheless, inter-patient variability in tumor location and size makes automatic MR image segmentation a challenging problem. This study introduces a lightweight, cascaded, multi-scale 3D deep learning model that accurately separates tumorous regions from healthy tissue. The lightweight cascaded architecture helps minimize the risk of overfitting and yields more stable and reliable predictions. Since each MR sequence highlights different anatomical and pathological features, the model stacks these sequences as input channels and feeds them into the network encoder to exploit all available information. The proposed model employs a multi-step cascaded estimation strategy that reflects the hierarchical structure of brain tumor substructures, decomposing them from coarse to fine regions. During inference, the predicted whole tumor region (WT) serves as a priori information for estimating the tumor core (TC) and enhancing tumor (ET) regions, guiding the finer segmentation steps. The model contains only 1.58 million (M) parameters and requires 247.09 giga floating-point operations (GFLOPs) per region, making it a practical solution for real-world clinical applications, especially in resource-limited environments. The model achieved promising results on the BRATS 2020 test set, with Dice scores of 0.9285, 0.8871, and 0.8694 and Hausdorff95 distances of 4.26, 6.46, and 4.55 for WT, TC, and ET, respectively. These findings indicate that the proposed approach significantly improves the segmentation performance of brain tumor regions compared to existing state-of-the-art techniques.
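The coarse-to-fine cascade described above can be illustrated with a minimal sketch. This is not the paper's architecture: the stand-in "networks" below are simple thresholding functions, and all names (`segment_wt`, `segment_finer`) are hypothetical. It only shows the data flow — four stacked MR sequences go into the first stage, and each coarser mask is concatenated as an extra input channel and used to gate the next, finer prediction, so the hierarchy WT ⊇ TC ⊇ ET holds by construction.

```python
import numpy as np

# Hypothetical sketch of a coarse-to-fine cascaded segmentation.
# Real BraTS inputs have four MR sequences (T1, T1ce, T2, FLAIR);
# here a tiny random volume and threshold "networks" stand in for
# the actual encoder-decoder stages.

def segment_wt(volume):
    """Stage 1: coarse whole-tumor (WT) mask from the stacked sequences."""
    return (volume.mean(axis=0) > 0.6).astype(np.float32)

def segment_finer(volume, prior_mask, threshold):
    """Later stages: the coarser mask is concatenated as an extra input
    channel and also gates the output, restricting it to the prior region."""
    guided = np.concatenate([volume, prior_mask[None]], axis=0)
    return (guided.mean(axis=0) > threshold).astype(np.float32) * prior_mask

rng = np.random.default_rng(0)
mri = rng.random((4, 8, 8, 8))       # (sequences, D, H, W)

wt = segment_wt(mri)                 # coarse region first
tc = segment_finer(mri, wt, 0.55)    # tumor core, refined within WT
et = segment_finer(mri, tc, 0.60)    # enhancing tumor, refined within TC

# Nesting WT >= TC >= ET is guaranteed by the masking step.
assert np.all(tc <= wt) and np.all(et <= tc)
```

Gating each finer prediction by the coarser mask is one simple way to enforce the anatomical containment of the substructures; the paper's model instead learns this guidance through the network itself.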