Automated Brain Tumor Segmentation via YOLOv8-Derived Spatial Prompts for the Segment Anything Model
DOI: https://doi.org/10.66279/vv8n5p41

Keywords: Brain Tumor Segmentation, Magnetic Resonance Imaging, Segment Anything Model, YOLOv8, Object Detection

Abstract
Brain tumor segmentation from magnetic resonance imaging (MRI) is a critical step in the diagnosis and treatment planning of intracranial malignancies. Although supervised convolutional networks achieve strong benchmark performance within their training distribution, they exhibit limited transferability across acquisition protocols. Conversely, foundation models such as the Segment Anything Model (SAM) encode rich visual representations but produce unreliable masks in the absence of accurate spatial guidance. The present work introduces a fully automated, end-to-end pipeline that couples YOLOv8 object detection with SAM-based segmentation without modifying the parameters of either network. A lightweight preprocessing stage comprising skull stripping and Contrast Limited Adaptive Histogram Equalization (CLAHE) conditions each MRI slice; the resulting image is forwarded to a trained YOLOv8 detector whose highest-confidence bounding box is passed directly to SAM's prompt encoder as the sole spatial cue.
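The prompt-handoff step described above can be sketched as follows. This is a minimal illustration, not the authors' code: detector outputs are represented as plain (x1, y1, x2, y2, confidence) tuples rather than a specific framework's result objects.

```python
def select_prompt_box(detections):
    """Pick the highest-confidence bounding box to use as SAM's spatial prompt.

    detections: list of (x1, y1, x2, y2, confidence) tuples from the detector.
    Returns the (x1, y1, x2, y2) box, or None if nothing was detected.
    """
    if not detections:
        return None
    # The box with the highest confidence score is the sole spatial cue.
    best = max(detections, key=lambda d: d[4])
    return best[:4]
```

In the actual pipeline this box would be forwarded unchanged to SAM's prompt encoder; no mask or point prompts are supplied.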
Evaluation on 1,226 held-out images from the publicly available Cheng et al. benchmark, partitioned by patient identity to prevent data leakage, yields a mean Dice Similarity Coefficient (DSC) of 0.8153 ± 0.032 and a mean Intersection over Union (IoU) of 0.7136 ± 0.028, with a total inference latency of 473.76 ms per image on an NVIDIA T4 GPU. An ablation study confirms that each pipeline stage contributes positively to segmentation performance. YOLOv8 detection achieves a mean Average Precision (mAP@0.5) of 0.91, precision of 0.88, and recall of 0.86. The results demonstrate that high-quality, automatically generated spatial prompts can substitute for costly parameter adaptation of general-purpose foundation models in specialized medical imaging tasks.
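The reported overlap metrics follow their standard definitions and can be computed from binary masks as below; this is the textbook formulation, not the authors' evaluation code.

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice Similarity Coefficient and Intersection over Union for binary masks.

    pred, gt: arrays of the same shape; nonzero entries count as foreground.
    Empty-vs-empty masks are scored as a perfect match by convention.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0   # DSC = 2|A∩B| / (|A|+|B|)
    iou = inter / union if union else 1.0          # IoU = |A∩B| / |A∪B|
    return dice, iou
```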
References
[1] M. C. Mabray, R. F. Barajas Jr, and S. Cha, “Modern brain tumor imaging,” Brain tumor research and treatment, vol. 3, no. 1, p. 8, 2015. DOI: https://doi.org/10.14791/btrt.2015.3.1.8
[2] N. Gordillo, E. Montseny, and P. Sobrevilla, “State of the art survey on mri brain tumor segmentation,” Magnetic resonance imaging, vol. 31, no. 8, pp. 1426–1438, 2013. DOI: https://doi.org/10.1016/j.mri.2013.05.002
[3] U. Tatli and C. Budak, “Biomedical image segmentation with modified u-net,” Traitement du Signal, vol. 40, no. 2, pp. 523–531, 2023. DOI: https://doi.org/10.18280/ts.400211
[4] H. Moradmand, S. M. R. Aghamiri, and R. Ghaderi, “Impact of image preprocessing methods on reproducibility of radiomic features in multimodal magnetic resonance imaging in glioblastoma,” Journal of applied clinical medical physics, vol. 21, no. 1, pp. 179–190, 2020. DOI: https://doi.org/10.1002/acm2.12795
[5] S. Kumari and P. Singh, “Addressing label scarcity and domain shift in medical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 34–44, Springer, 2025. DOI: https://doi.org/10.1007/978-3-032-04981-0_4
[6] A. Kirillov, E. Mintun, N. Ravi, H. Mao, C. Rolland, L. Gustafson, T. Xiao, S. Whitehead, A. C. Berg, W.-Y. Lo, et al., “Segment anything,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 4015–4026, 2023. DOI: https://doi.org/10.1109/ICCV51070.2023.00371
[7] J. Ma, Y. He, F. Li, L. Han, C. You, and B. Wang, “Segment anything in medical images,” Nature communications, vol. 15, no. 1, p. 654, 2024. DOI: https://doi.org/10.1038/s41467-024-44824-z
[8] S. Ren, F. Luzi, S. Lahrichi, K. Kassaw, L. M. Collins, K. Bradbury, and J. M. Malof, “Segment anything, from space?,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 8355–8365, 2024. DOI: https://doi.org/10.1109/WACV57701.2024.00817
[9] Y. Zhang, S. Hu, L. Xue, S. Ren, Z. Hu, Y. Cheng, and Y. Qi, “Enhancing the reliability of auto-prompting sam for medical image segmentation with uncertainty estimation and rectification,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1282–1291, 2025. DOI: https://doi.org/10.1109/ICCVW69036.2025.00138
[10] M. Sohan, T. Sai Ram, and C. V. Rami Reddy, “A review on yolov8 and its advancements,” in International conference on data intelligence and cognitive informatics, pp. 529–545, Springer, 2024. DOI: https://doi.org/10.1007/978-981-99-7962-2_39
[11] P. Jeyaraj M and S. Kumar M, “Automated brain tumor segmentation using hybrid yolo and sam,” Current Medical Imaging, vol. 21, no. 1, p. E15734056392711, 2025. DOI: https://doi.org/10.2174/0115734056392711250718201911
[12] S. Pandey, K.-F. Chen, and E. B. Dam, “Comprehensive multimodal segmentation in medical imaging: Combining yolov8 with sam and hq-sam models,” in Proceedings of the IEEE/CVF international conference on computer vision, pp. 2592–2598, 2023. DOI: https://doi.org/10.1109/ICCVW60793.2023.00273
[13] J. Wu, Z. Wang, M. Hong, W. Ji, H. Fu, Y. Xu, M. Xu, and Y. Jin, “Medical sam adapter: Adapting segment anything model for medical image segmentation,” Medical image analysis, vol. 102, p. 103547, 2025. DOI: https://doi.org/10.1016/j.media.2025.103547
[14] P. Kaur, A. Kaushik, I. Singhal, A. Pandey, R. Singhal, et al., “Advancing brain mri segmentation using segment anything model,” Procedia Computer Science, vol. 260, pp. 110–117, 2025. DOI: https://doi.org/10.1016/j.procs.2025.03.183
[15] J. D. Gutiérrez, E. Delgado, C. Breuer, J. M. Conejero, and R. Rodriguez-Echeverria, “Prompt once, segment everything: leveraging sam 2 potential for infinite medical image segmentation with a single prompt,” Algorithms, vol. 18, no. 4, p. 227, 2025. DOI: https://doi.org/10.3390/a18040227
[16] W. Xie, N. Willems, S. Patil, Y. Li, and M. Kumar, “Sam fewshot finetuning for anatomical segmentation in medical images,” in Proceedings of the IEEE/CVF winter conference on applications of computer vision, pp. 3253–3261, 2024.
[17] S. M. Smith, “Fast robust automated brain extraction,” Human brain mapping, vol. 17, no. 3, pp. 143–155, 2002. DOI: https://doi.org/10.1002/hbm.10062
[18] F. Isensee, M. Schell, I. Pflueger, G. Brugnara, D. Bonekamp, U. Neuberger, A. Wick, H.-P. Schlemmer, S. Heiland, W. Wick, et al., “Automated brain extraction of multisequence mri using artificial neural networks,” Human brain mapping, vol. 40, no. 17, pp. 4952–4964, 2019. DOI: https://doi.org/10.1002/hbm.24750
[19] J. B. Zimmerman, S. M. Pizer, E. V. Staab, J. R. Perry, W. McCartney, and B. C. Brenton, “An evaluation of the effectiveness of adaptive histogram equalization for contrast enhancement,” IEEE Transactions on Medical Imaging, vol. 7, no. 4, pp. 304–312, 1988. DOI: https://doi.org/10.1109/42.14513
[20] J. Cheng, “Brain tumour dataset,” figshare, 2017. DOI: https://doi.org/10.6084/m9.figshare.1512427.v8
[21] A. A. Taha and A. Hanbury, “Metrics for evaluating 3d medical image segmentation: analysis, selection, and tool,” BMC medical imaging, vol. 15, no. 1, p. 29, 2015. DOI: https://doi.org/10.1186/s12880-015-0068-x
[22] D. Shen, G. Wu, and H.-I. Suk, “Deep learning in medical image analysis,” Annual review of biomedical engineering, vol. 19, pp. 221–248, 2017. DOI: https://doi.org/10.1146/annurev-bioeng-071516-044442
Data Availability Statement
The dataset used in this study is publicly available at the following DOI: https://doi.org/10.6084/m9.figshare.1512427
License
Copyright (c) 2026 Computational Discovery and Intelligent Systems (CDIS)

This work is licensed under a Creative Commons Attribution 4.0 International License.