ID | 119712 |
Author |
Shi, Xuefeng
Nantong University
Yang, Ming
Hefei University of Technology
Hu, Min
Hefei University of Technology
Ren, Fuji
University of Electronic Science and Technology of China
Ding, Weiping
Nantong University
|
Keywords | Multi-modal Aspect-based Sentiment Analysis
Affective knowledge
Bi-directional learning
Cross-attention
Cosine similarity
|
Content Type |
Journal Article
|
Description | As a fine-grained task within the community of Multi-modal Sentiment Analysis (MSA), Multi-modal Aspect-based Sentiment Analysis (MABSA) is challenging and has attracted considerable research attention, with prominent progress achieved in recent years. However, effective strategies for aligning features across modalities are still lacking, and further exploration is urgently needed. This paper therefore proposes a novel MABSA method, Affective Knowledge-Assisted Bi-directional Learning (AKABL) networks, which enhances sentiment feature alignment by learning sentiment information from different modalities through multiple perspectives. Specifically, AKABL obtains textual semantic and syntactic features by encoding the text modality with the pre-trained language model BERT and the syntax parser spaCy, respectively. To strengthen the expression of sentiment information in the syntactic graph, the affective knowledge base SenticNet is introduced to help AKABL comprehend textual sentiment information. On the image side, the pre-trained Vision Transformer (ViT) is employed to extract the necessary image features. To integrate the obtained features, the Single Modality GCN (SMGCN) module produces the joint textual semantic and syntactic representation, and the Double Modalities GCN (DMGCN) module is devised to extract sentiment information from both modalities simultaneously. In addition, to bridge the alignment gap between text and image features, a novel alignment strategy is devised that builds the relationship between the two representations by measuring their difference with the Jensen-Shannon divergence from bi-directional perspectives. Cross-attention and cosine distance-based similarity are also applied in the proposed AKABL. To validate the effectiveness of the proposed method, extensive experiments are conducted on two widely used public benchmark datasets; the results demonstrate that AKABL clearly improves task performance, outperforming the strongest baseline by 0.47% and 0.72% in accuracy on the two datasets.
|
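Note | A minimal, illustrative PyTorch sketch of the bi-directional Jensen-Shannon alignment described in the abstract. The function name js_alignment_loss, the softmax normalization, and the temperature parameter are assumptions made for illustration, not the authors' published implementation.

import torch
import torch.nn.functional as F

def js_alignment_loss(text_repr, image_repr, temperature=1.0):
    # Map each modality's features to a probability distribution.
    p = F.softmax(text_repr / temperature, dim=-1)   # text view
    q = F.softmax(image_repr / temperature, dim=-1)  # image view
    m = 0.5 * (p + q)                                # mixture distribution M
    # JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M); F.kl_div(input, target)
    # computes KL(target || input) with `input` given in log space, so the two
    # KL terms below cover both directions, which makes the loss symmetric
    # (the "bi-directional" property).
    kl_pm = F.kl_div(m.log(), p, reduction="batchmean")
    kl_qm = F.kl_div(m.log(), q, reduction="batchmean")
    return 0.5 * (kl_pm + kl_qm)

# Usage: align a batch of 32 paired text/image feature vectors of dimension 768.
text_feats = torch.randn(32, 768)
image_feats = torch.randn(32, 768)
loss = js_alignment_loss(text_feats, image_feats)

Because the Jensen-Shannon divergence is symmetric in its arguments, a single evaluation already captures both the text-to-image and image-to-text directions, unlike a plain KL divergence.
|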
Journal Title |
Computer Speech & Language
|
ISSN | 0885-2308
1095-8363
|
NCID | AA10677208
|
Publisher | Elsevier
|
Volume | 91
|
End Page | 101755
|
Published Date | 2024-12-10
|
Remark | The full text of this article is scheduled to be made publicly available on or after 2026-12-10.
|
Rights | © 2024. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
|
EDB ID | |
DOI (Published Version) | |
URL (Publisher's Version) | |
language |
eng
|
TextVersion |
Other
|
departments |
Science and Technology
|