EnsUNet: Enhancing Brain Tumor Segmentation Through Fusion of Pre-trained Models
Kasdi Merbah University, Algeria.
Kasdi Merbah University, Algeria.
Kasdi Merbah University, Algeria.
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0002-4390-411X
2024 (English). In: Proceedings of Ninth International Congress on Information and Communication Technology / [ed] Xin-She Yang, Simon Sherratt, Nilanjan Dey, Amit Joshi, Springer Science+Business Media B.V., 2024, Vol. 1013, p. 163-174. Conference paper, Published paper (Refereed)
Abstract [en]

Brain tumor segmentation, among various tasks in medical image analysis, has garnered significant attention in the research community. Despite continuous efforts by researchers, accurate brain tumor segmentation remains a key challenge. This challenge arises from various factors, including location uncertainty, morphological uncertainty, low-contrast imaging, annotation bias, and data imbalance. Magnetic resonance imaging (MRI) plays a vital role in providing detailed images of the brain, enabling the extraction of crucial information about the tumor’s shape, size, and location. In the literature, deep learning algorithms have shown their efficiency in semantic segmentation, particularly the U-Net architecture, which has demonstrated impressive performance in medical image segmentation. In this paper, a U-Net-based architecture for brain tumor segmentation is proposed. To further enhance the segmentation performance of our model, a novel ensemble learning method, EnsUNet, is introduced by integrating four pre-trained networks, namely MobileNet, DeepLabV3+, ResNet, and DenseNet, as the encoder within the U-Net architecture. The experimental evaluation demonstrates promising results, achieving an Intersection over Union (IoU) score of 0.86, a Dice Coefficient (DC) of 0.92, and an accuracy of approximately 0.99. These findings underscore the effectiveness of our proposed EnsUNet for accurately segmenting brain tumors. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
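The abstract describes the core idea only at a high level: several U-Net-style segmentation networks built on different pre-trained encoders, whose predictions are fused, and evaluation with IoU and Dice. The sketch below illustrates that idea in PyTorch using the segmentation_models_pytorch library; the library choice, the specific encoder variants (mobilenet_v2, resnet34, densenet121), the three-channel binary-mask setting, and probability averaging as the fusion rule are all assumptions for illustration, not the authors' implementation.

import torch
import segmentation_models_pytorch as smp  # assumed library; the paper does not name its implementation

def build_members(in_channels: int = 3, classes: int = 1):
    # Four segmentation networks with different ImageNet-pretrained encoders,
    # loosely mirroring the MobileNet / ResNet / DenseNet / DeepLabV3+ members
    # mentioned in the abstract. Encoder variants and channel counts are
    # illustrative assumptions.
    return [
        smp.Unet("mobilenet_v2", encoder_weights="imagenet", in_channels=in_channels, classes=classes),
        smp.Unet("resnet34", encoder_weights="imagenet", in_channels=in_channels, classes=classes),
        smp.Unet("densenet121", encoder_weights="imagenet", in_channels=in_channels, classes=classes),
        smp.DeepLabV3Plus("resnet34", encoder_weights="imagenet", in_channels=in_channels, classes=classes),
    ]

def ensemble_predict(models, images, threshold=0.5):
    # Average the members' sigmoid probability maps, then threshold to a binary mask.
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(images)) for m in models]).mean(dim=0)
    return probs > threshold

def iou_and_dice(pred, target, eps=1e-7):
    # Intersection over Union and Dice coefficient for binary masks,
    # the two overlap metrics reported in the abstract.
    pred, target = pred.bool(), target.bool()
    inter = (pred & target).float().sum()
    iou = inter / ((pred | target).float().sum() + eps)
    dice = 2 * inter / (pred.float().sum() + target.float().sum() + eps)
    return iou.item(), dice.item()

# Example with untrained weights and random data:
# members = [m.eval() for m in build_members()]
# x = torch.rand(1, 3, 256, 256)            # one 3-channel MRI slice (assumption)
# y = torch.randint(0, 2, (1, 1, 256, 256)) # ground-truth tumor mask
# print(iou_and_dice(ensemble_predict(members, x), y))

Averaging probability maps is only one plausible fusion strategy; the IoU of 0.86 and Dice of 0.92 reported in the abstract correspond to the two metrics computed by iou_and_dice above.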

Place, publisher, year, edition, pages
Springer Science+Business Media B.V., 2024. Vol. 1013, p. 163-174
Series
Lecture Notes in Networks and Systems, ISSN 2367-3370, E-ISSN 2367-3389
Keywords [en]
Brain tumor segmentation, Ensemble learning, EnsUNet, Magnetic resonance imaging, Pre-trained models, U-Net, Brain, Deep learning, Learning algorithms, Learning systems, Medical imaging, Network architecture, Semantic Segmentation, Semantics, Tumors, Location uncertainty, Medical image analysis, NET architecture, Pre-trained model, Research communities, Uncertainty
National Category
Medical Imaging
Identifiers
URN: urn:nbn:se:bth-26851
DOI: 10.1007/978-981-97-3559-4_13
Scopus ID: 2-s2.0-85200997935
ISBN: 9789819735587 (print)
OAI: oai:DiVA.org:bth-26851
DiVA, id: diva2:1892882
Conference
9th International Congress on Information and Communication Technology (ICICT 2024), London, February 19-22, 2024
Available from: 2024-08-28. Created: 2024-08-28. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Cheddad, Abbas
