ID 119712
Authors
Shi, Xuefeng Nantong University
Yang, Ming Hefei University of Technology
Hu, Min Hefei University of Technology
Ren, Fuji University of Electronic Science and Technology of China; Tokushima University
Ding, Weiping Nantong University
Keywords
Multi-modal Aspect-based Sentiment Analysis
Affective knowledge
Bi-directional learning
Cross-attention
Cosine similarity
Material Type
Journal Article
Abstract
As a fine-grained task within Multi-modal Sentiment Analysis (MSA), Multi-modal Aspect-based Sentiment Analysis (MABSA) is challenging, has attracted considerable research attention, and has seen notable progress in recent years. However, effective strategies for aligning features across modalities are still lacking and call for further exploration. This paper therefore proposes a novel MABSA method, the Affective Knowledge-Assisted Bi-directional Learning (AKABL) network, which strengthens sentiment feature alignment by learning sentiment information from different modalities through multiple perspectives. Specifically, AKABL obtains textual semantic and syntactic features by encoding the text modality with the pre-trained language model BERT and the syntax parser SpaCy, respectively. To strengthen the expression of sentiment information in the syntactic graph, the affective knowledge base SenticNet is introduced to help AKABL comprehend textual sentiment information. On the image side, the pre-trained Vision Transformer (ViT) is employed to extract the necessary image features. To integrate the obtained features, a Single Modality GCN (SMGCN) module produces a joint textual semantic and syntactic representation, and a Double Modalities GCN (DMGCN) module is devised to extract sentiment information from both modalities simultaneously. Furthermore, to bridge the alignment gap between text and image features, this paper devises a novel alignment strategy that builds the relationship between the two representations, measuring their difference with the Jensen-Shannon divergence from bi-directional perspectives; cross-attention and cosine distance-based similarity are also applied in the proposed AKABL. Extensive experiments on two widely used public benchmark datasets validate the effectiveness of the proposed method: AKABL clearly improves task performance, outperforming the strongest baseline by 0.47% and 0.72% in accuracy on the two datasets.
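Since the full text is embargoed, the following is a minimal PyTorch sketch of the bi-directional Jensen-Shannon alignment idea described in the abstract, not the authors' released implementation; the function name js_alignment_loss, the tensor shapes, and the softmax normalization are illustrative assumptions.

import torch
import torch.nn.functional as F

def js_alignment_loss(text_repr: torch.Tensor, image_repr: torch.Tensor) -> torch.Tensor:
    """Jensen-Shannon divergence between text-side and image-side representations.

    Both inputs are (batch, dim) feature vectors; softmax turns each into a
    distribution so the divergence is well defined. JS averages the KL terms
    in both directions, which loosely mirrors the bi-directional perspective
    the abstract describes.
    """
    p = F.softmax(text_repr, dim=-1)   # text-side distribution
    q = F.softmax(image_repr, dim=-1)  # image-side distribution
    m = 0.5 * (p + q)                  # mixture distribution
    # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m); F.kl_div expects
    # log-probabilities as its first argument.
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
    return 0.5 * (kl_pm + kl_qm)

# Example usage with random features standing in for the two modalities:
# loss = js_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))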
Journal Title
Computer Speech & Language
ISSN
0885-2308
1095-8363
NCID
AA10677208, AA11545097
Publisher
Elsevier
Volume
91
End Page
101755
Issue Date
2024-12-10
Notes
The full text of this article is scheduled to be publicly available on or after 2026-12-10.
Rights
© 2024. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
EDB ID
Publisher Version DOI
Publisher Version URL
Language
eng
Author Version Flag
Other
Department
Science and Technology