ID | 118006 |
Author |
Amitani, Reishi
Tokushima University
|
Keywords | multi-modal buzz prediction
information diffusion
affective analysis
|
Material type |
Journal article
|
Abstract | This paper proposes a method for predicting the stage of buzz-trend generation by analyzing the emotional information in multimodal content posted on social networking services, such as post text and attached images. The proposed method can analyze the diffusion scale from various angles, using only the information available at the time of posting when predicting in advance, and time-lagged information when used for posterior analysis. Specifically, tweets and reply tweets were converted into vectors using a pre-trained general-purpose BERT language model, and attached images were converted into feature vectors using a trained neural network model for image recognition. In addition, to analyze the emotional information of the posted content, a proprietary emotion analysis model was used to estimate emotions from tweets, reply tweets, and image features, which were then added to the input as emotional features. The evaluation experiments showed that the proposed method, which adds linguistic features (BERT vectors) and image features to tweets, achieved higher performance than methods using only a single feature. Although the effectiveness of the emotional features could not be confirmed, the more emotions a tweet and its replies had in common, the more empathetic reactions occurred and the larger the like and retweet (RT) counts tended to be, which could ultimately increase the likelihood of a tweet going viral.
|
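Note | The abstract describes a multimodal feature pipeline: pre-trained BERT vectors for tweets and replies, CNN features for attached images, and estimated emotions appended as additional features. The sketch below illustrates that pipeline under stated assumptions; the checkpoint names (bert-base-multilingual-cased, ResNet-50) and the helpers text_vector/image_vector are illustrative choices, not the authors' exact configuration, and the proprietary emotion model is omitted.

```python
# Minimal sketch of the multimodal feature extraction sketched in the abstract:
# tweet text -> BERT [CLS] vector, attached image -> CNN feature vector,
# concatenated into one input for a diffusion-stage classifier.
import torch
from transformers import BertTokenizer, BertModel
from torchvision import models, transforms
from PIL import Image

# Pre-trained general-purpose BERT for text features (checkpoint is an assumption).
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = BertModel.from_pretrained("bert-base-multilingual-cased").eval()

# Pre-trained image-recognition network for image features
# (ResNet-50 stands in for the unspecified model in the paper).
resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()  # drop the classifier head, keep 2048-d features
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def text_vector(text: str) -> torch.Tensor:
    """Encode a tweet (or reply tweet) as the BERT [CLS] embedding (768-d)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0, :].squeeze(0)

def image_vector(path: str) -> torch.Tensor:
    """Encode an attached image with the pre-trained CNN (2048-d)."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return resnet(img).squeeze(0)

if __name__ == "__main__":
    text_feat = text_vector("example tweet text")
    print(text_feat.shape)  # torch.Size([768])
    # With an actual attached image on disk, the multimodal input would be the
    # concatenation of both feature vectors (emotion features appended similarly):
    # multimodal = torch.cat([text_feat, image_vector("attached_image.jpg")])
```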
Journal title |
Electronics
|
ISSN | 2079-9292
|
Publisher | MDPI
|
Volume | 11
|
Issue | 21
|
Start page | 3431
|
Publication date | 2022-10-23
|
Rights | This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
|
EDB ID | |
Publisher DOI | |
Publisher URL | |
Full-text file | |
Language |
eng
|
Author version flag |
Publisher version
|
Division |
Science and Technology
|