ID | 115024 |
Author |
Kusunose, Kenya
Tokushima University
Inoue, Mizuki
Tokushima University
Yamada, Hirotsugu
Tokushima University
Sata, Masataka
Tokushima University
|
Keywords | echocardiography
artificial intelligence
view classification
|
Content Type |
Journal Article
|
Description | A proper echocardiographic study requires several video clips recorded from different acquisition angles to observe the complex cardiac anatomy. However, these video clips are not necessarily labeled in a database, so identifying the acquired view is the first step in analyzing an echocardiogram. There is currently no consensus on whether mislabeled samples can be used to create a clinically feasible prediction model of ejection fraction (EF). The aim of this study was to test two types of input methods for image classification, and to test the accuracy of an EF prediction model trained on a database containing mislabeled images that were not checked by observers. We enrolled 340 patients; for each patient, five standard views (long-axis, short-axis, 3-chamber, 4-chamber and 2-chamber) with 10 images per cardiac cycle were used to train a convolutional neural network (CNN) to classify views (17,000 labeled images in total). All DICOM images were rigidly registered and rescaled to a reference image to standardize the size of the echocardiographic images. We employed 5-fold cross-validation to examine model performance, testing models trained on two types of input: averaged images and 10 selected images. Our best model (trained on 10 selected images) classified video views with 98.1% overall test accuracy in the independent cohort; 1.9% of the images were misclassified. To determine whether this 98.1% accuracy is acceptable for building a clinical prediction model from echocardiographic data, we tested the EF prediction model using training data with a 1.9% error rate. The accuracy of the EF prediction model remained acceptable even with training data containing 1.9% mislabeled images. The CNN algorithm can classify images into the five standard views in a clinical setting. Our results suggest that this approach may provide a clinically feasible level of view-classification accuracy for the analysis of echocardiographic data.
|
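The evaluation protocol described in the abstract (five standard views, 10 frames per cycle per patient, 5-fold cross-validation) can be sketched as follows. This is not the authors' code: a nearest-centroid classifier on synthetic pixel data stands in for the CNN, the reference image size and patient count are assumptions, and folds are split by patient to avoid leaking frames from the same study across folds.

```python
import numpy as np

# Five standard echocardiographic views named in the abstract.
VIEWS = ["long_axis", "short_axis", "3ch", "4ch", "2ch"]
REF_SIZE = (64, 64)  # assumed reference size; the study rescales to a reference image

rng = np.random.default_rng(0)

def make_synthetic_dataset(n_patients=20, frames=10):
    """Toy stand-in data: each view gets a distinct mean intensity pattern."""
    X, y, patient = [], [], []
    templates = [rng.normal(v, 0.1, REF_SIZE) for v in range(len(VIEWS))]
    for p in range(n_patients):
        for v in range(len(VIEWS)):
            for _ in range(frames):  # 10 frames per cardiac cycle, as in the study
                X.append(templates[v] + rng.normal(0, 0.5, REF_SIZE))
                y.append(v)
                patient.append(p)
    return np.array(X), np.array(y), np.array(patient)

def five_fold_accuracy(X, y, patient, k=5):
    """Patient-wise k-fold CV; nearest-centroid classifier replaces the CNN."""
    patients = np.unique(patient)
    folds = np.array_split(rng.permutation(patients), k)
    accs = []
    for fold in folds:
        test = np.isin(patient, fold)
        Xtr, ytr = X[~test].reshape(np.sum(~test), -1), y[~test]
        Xte, yte = X[test].reshape(np.sum(test), -1), y[test]
        # One centroid per view, computed from training frames only.
        centroids = np.stack([Xtr[ytr == v].mean(axis=0) for v in range(len(VIEWS))])
        dists = ((Xte[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
        pred = np.argmin(dists, axis=1)
        accs.append((pred == yte).mean())
    return float(np.mean(accs))

X, y, patient = make_synthetic_dataset()
acc = five_fold_accuracy(X, y, patient)
print(f"5-fold view-classification accuracy (toy data): {acc:.3f}")
```

Splitting folds by patient rather than by frame is the conservative choice here, since the 10 frames from one cycle are highly correlated; the abstract does not state which splitting the authors used.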
Journal Title |
Biomolecules
|
ISSN | 2218-273X
|
Publisher | MDPI
|
Volume | 10
|
Issue | 5
|
Start Page | 665
|
Published Date | 2020-04-25
|
Rights | © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
|
EDB ID | |
DOI (Published Version) | |
URL (Publisher's Version) | |
FullText File | |
language |
eng
|
TextVersion |
Publisher
|
departments |
University Hospital
Medical Sciences
|