Tubule-U-Net: a novel dataset and deep learning-based tubule segmentation framework in whole slide images of breast cancer
Virasoft Corporation, USA.
Virasoft Corporation, USA.
Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science. ORCID iD: 0000-0001-7536-3349
Acibadem University Teaching Hospital, TUR.
2023 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 13, no. 1, article id 128. Article in journal (Refereed). Published.
Abstract [en]

The tubule index is a vital prognostic measure in breast cancer tumor grading and is visually evaluated by pathologists. In this paper, a computer-aided patch-based deep learning tubule segmentation framework, named Tubule-U-Net, is developed and proposed to segment tubules in Whole Slide Images (WSI) of breast cancer. Moreover, this paper presents a new tubule segmentation dataset consisting of 30,820 polygonally annotated tubules in 8,225 patches. The Tubule-U-Net framework first applies a patch enhancement technique such as reflection (mirror) padding and then employs an asymmetric encoder-decoder semantic segmentation model. The encoder is built from various deep learning architectures such as EfficientNetB3, ResNet34, and DenseNet161, whereas the decoder is similar to U-Net, yielding three different models: EfficientNetB3-U-Net, ResNet34-U-Net, and DenseNet161-U-Net. The proposed framework with these three models, together with the U-Net, U-Net++, and Trans-U-Net segmentation methods, is trained on the created dataset and tested on five different WSIs. The experimental results demonstrate that the proposed framework with the EfficientNetB3 model, trained on patches obtained using reflection padding and tested on overlapping patches, provides the best segmentation results on the test data, achieving dice, recall, and specificity scores of 95.33%, 93.74%, and 90.02%, respectively. © 2023, The Author(s).
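The patch enhancement step (reflection/mirror padding) and the three scores reported in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 4x4 patch size and the pad width of 2 are illustrative assumptions, since the record does not state the actual patch dimensions used in the paper.

```python
import numpy as np

def reflect_pad(patch: np.ndarray, pad: int) -> np.ndarray:
    """Enlarge a 2-D patch by mirroring its borders (reflection padding)."""
    return np.pad(patch, ((pad, pad), (pad, pad)), mode="reflect")

def dice_recall_specificity(pred: np.ndarray, truth: np.ndarray):
    """Compute dice, recall, and specificity for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)    # true positives
    fp = np.sum(pred & ~truth)   # false positives
    fn = np.sum(~pred & truth)   # false negatives
    tn = np.sum(~pred & ~truth)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, recall, specificity

# Toy example: a 4x4 "patch" padded by 2 pixels on each side -> 8x8.
patch = np.arange(16).reshape(4, 4)
padded = reflect_pad(patch, 2)
print(padded.shape)  # (8, 8)
```

Reflection padding avoids the hard zero borders of constant padding, which is why it is a common patch enhancement choice before feeding tiles from a whole slide image into a convolutional segmentation network.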

Place, publisher, year, edition, pages
Nature Portfolio, 2023. Vol. 13, no. 1, article id 128
Keywords [en]
Breast Neoplasms, Deep Learning, Female, Humans, Image Processing, Computer-Assisted, Semantics, breast tumor, diagnostic imaging, human, image processing, procedures
National Category
Medical Imaging
Identifiers
URN: urn:nbn:se:bth-24235
DOI: 10.1038/s41598-022-27331-3
ISI: 001003343100022
PubMedID: 36599960
Scopus ID: 2-s2.0-85145532555
OAI: oai:DiVA.org:bth-24235
DiVA, id: diva2:1730875
Note

Open access

Available from: 2023-01-25. Created: 2023-01-25. Last updated: 2025-02-09. Bibliographically approved.

Open Access in DiVA

fulltext (7296 kB), 116 downloads
File information
File name: FULLTEXT01.pdf
File size: 7296 kB
Checksum (SHA-512): 9bbbc9a2e24243b321551732ce70fa509e953f14b62dcb4191cffb19b67b8d0cb5eb2396e4439bb2c56fcfa218a10095ccb6f2f0a742186d7a7b61943325ef20
Type: fulltext. Mimetype: application/pdf.

Other links

Publisher's full text | PubMed | Scopus

Authority records

Kusetogullari, Hüseyin

Total: 116 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.

Total: 464 hits