ID | 116758 |
Author |
Fujisawa, Akira
Aomori University
|
Keywords | emoticon
emotion estimation
multimodal information
|
Material Type |
Journal Article
|
Abstract | This paper proposes an emotion recognition method for tweets containing emoticons, using image features of the emoticons together with language features. Some existing methods register emoticons and their facial expression categories in a dictionary and look them up, while others recognize emoticon facial expressions from the emoticons' constituent elements. However, highly accurate emotion recognition cannot be achieved unless the recognition is based on a combination of sentence features and emoticon features. We therefore propose a model that extracts the shape features of emoticons from their image data and recognizes emotions from a feature vector that combines these image features with features extracted from the tweet text. Evaluation experiments confirm that the proposed method achieves high accuracy and is more effective than methods that use text features only.
|
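The fusion step the abstract describes (combining image-derived emoticon shape features with tweet-text features into a single classifier input) can be sketched roughly as below. All names and dimensions here are illustrative assumptions, not the paper's actual architecture or feature sizes.

```python
import numpy as np

# Hypothetical feature dimensions; the paper does not specify these here.
IMG_DIM, TXT_DIM = 128, 256

def fuse_features(img_feat: np.ndarray, txt_feat: np.ndarray) -> np.ndarray:
    """Concatenate emoticon-image features with tweet-text features
    into one input vector for a downstream emotion classifier."""
    return np.concatenate([img_feat, txt_feat])

# Stand-ins for real extracted features (e.g. CNN shape features and
# a text embedding); random vectors are used only for illustration.
img_feat = np.random.rand(IMG_DIM)
txt_feat = np.random.rand(TXT_DIM)
fused = fuse_features(img_feat, txt_feat)
assert fused.shape == (IMG_DIM + TXT_DIM,)
```

The fused vector would then be fed to whatever classifier the model uses; simple concatenation is only one possible fusion strategy.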
Journal Title |
Applied Sciences
|
ISSN | 2076-3417
|
Publisher | MDPI
|
Volume | 12
|
Issue | 3
|
Start Page | 1256
|
Publication Date | 2022-01-25
|
Rights | This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
|
EDB ID | |
Publisher DOI | |
Publisher URL | |
Full Text File | |
Language |
eng
|
Author Version Flag |
Published version
|
Department |
Science and Technology
|