ID | 118693 |
Authors |
Zhou, Yuxiang (Tokushima University)
Lu, Huimin (Kyushu Institute of Technology)
Nakagawa, Satoshi (The University of Tokyo)
|
Keywords | U-Net
Dual attention
Attention gate
Depthwise separable convolution
Medical image segmentation
|
Resource Type |
Journal article
|
Abstract | Automatic medical image segmentation methods are highly needed to help experts segment lesions. The emergence of deep learning technology has profoundly driven the development of medical image segmentation. While U-Net and attention mechanisms are widely used in this field, attention, although successful in natural scene image segmentation, tends to inflate the number of model parameters and neglects the potential for feature fusion between different convolutional layers. In response to these challenges, we present the Multi-Attention and Depthwise Separable Convolution U-Net (MDSU-Net), designed to enhance feature extraction. The multi-attention component of our framework integrates dual attention and attention gates, capturing rich contextual details and fusing features across diverse convolutional layers. Additionally, our encoder incorporates a depthwise separable convolution layer, reducing the model's complexity without sacrificing its efficacy and ensuring versatility across various segmentation tasks. The results demonstrate that our method outperforms state-of-the-art methods across three diverse medical image datasets.
|
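Note (illustrative code) | The abstract names two generic building blocks, the depthwise separable convolution and the attention gate on a U-Net skip connection. Below is a minimal PyTorch-style sketch of standard formulations of these blocks, given for illustration only; it is not the authors' MDSU-Net implementation, and the class names (DepthwiseSeparableConv, AttentionGate) and parameters (in_ch, out_ch, enc_ch, dec_ch, inter_ch) are hypothetical.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise separable convolution: a per-channel (depthwise) 3x3 convolution
    # followed by a 1x1 pointwise convolution, which needs far fewer parameters
    # than a standard 3x3 convolution with the same input/output channels.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class AttentionGate(nn.Module):
    # Attention gate on a U-Net skip connection: a decoder (gating) signal
    # re-weights encoder features before they are concatenated in the decoder.
    # Encoder and gating features are assumed to share the same spatial size here.
    def __init__(self, enc_ch, dec_ch, inter_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, inter_ch, kernel_size=1)
        self.w_dec = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(inter_ch, 1, kernel_size=1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn  # attention-weighted skip features

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 128)  # dummy encoder feature map
    g = torch.randn(1, 64, 128, 128)  # dummy decoder (gating) feature map
    print(DepthwiseSeparableConv(64, 128)(x).shape)                      # torch.Size([1, 128, 128, 128])
    print(AttentionGate(enc_ch=64, dec_ch=64, inter_ch=32)(x, g).shape)  # torch.Size([1, 64, 128, 128])

For a 3x3 kernel, the depthwise-plus-pointwise pair uses roughly in_ch * (9 + out_ch) weights instead of the 9 * in_ch * out_ch weights of a standard convolution, which is the parameter saving the abstract refers to.
|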
Journal Title |
Neurocomputing
|
ISSN | 0925-2312
1872-8286
|
Catalog Bibliographic ID | AA10827402
AA11540468
|
Publisher | Elsevier
|
Volume | 564
|
Start Page | 126970
|
Publication Date | 2023-10-29
|
Notes | The full text of this article is scheduled to be available on and after 2025-10-29
|
Rights | © 2023. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
|
EDB ID | |
Publisher DOI | |
Publisher URL | |
Language |
eng
|
Author Version Flag |
Other
|
Departments |
Science and Technology
Hospital
|