ID 117612
Alternative Title
A Study on Generative Adversarial Networks for Unconditional Text Generation
Author
Jiao, Ziyun — Graduate School of Advanced Technology and Science, Tokushima University (Systems Innovation Engineering)
Keywords
unconditional text generation
GAN
NLG
Transformer
Wasserstein distance
Material Type
Thesis or Dissertation
Abstract
As natural products of civilization, language and writing play an irreplaceable role in human communication. As a branch of natural language processing (NLP), natural language generation (NLG) has received extensive attention since its inception. NLG and natural language understanding (NLU) are the two most essential components of human communication, and in modern human-computer interaction NLG is also a core functional requirement of machines. As an automated process that generates human-readable text from input information to meet specific interaction goals, NLG employs different inputs for different tasks. From the perspective of input information, NLG can be classified as text-to-text, data-to-text, multimodality-to-text, or zero-to-text, the last also known as unconditional text generation. Because no input is provided in unconditional text generation, the model must generate natural language text freely. The Generative Adversarial Network (GAN) for text is a standard model for unconditional text generation tasks.
Initially proposed in 2014, GANs have been widely used in computer vision (CV) tasks, but the development of GANs for text generation has progressed slowly. On the one hand, the guidance that the discriminator passes to the generator is generally extremely weak. On the other hand, because text is discrete, gradients cannot propagate properly from the discriminator back through the generator's token-sampling step, which prevents normal gradient-based training. In response to these issues, the key contributions of this thesis are summarized below.
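To make the second issue concrete: sampling a discrete token via argmax is non-differentiable, so the discriminator's gradient cannot reach the generator. RelGAN, on which the WRGAN work below builds, addresses this with a Gumbel-Softmax relaxation. The following minimal PyTorch sketch illustrates that general technique with illustrative names and shapes; it is not code from the thesis.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    # Hard argmax sampling would block gradients; adding Gumbel noise and
    # applying a temperature-controlled softmax keeps the computation
    # differentiable while approximating one-hot token sampling.
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)

# A batch of 4 sequence positions over a 10-token vocabulary.
logits = torch.randn(4, 10, requires_grad=True)
soft_tokens = gumbel_softmax_sample(logits, tau=0.5)
score = soft_tokens[:, 0].sum()   # stand-in for a downstream critic score
score.backward()
print(logits.grad is not None)    # True: gradients reach the generator's logits
```

PyTorch also ships this relaxation as torch.nn.functional.gumbel_softmax, including a hard straight-through variant.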
(1) Compared with conventional GAN losses, the Wasserstein distance can provide more informative feedback to the generator. We proposed a new architecture based on RelGAN and WGAN-GP, dubbed WRGAN. The WRGAN discriminator uses 1-dimensional convolutions with multiple kernel sizes together with residual modules. Correspondingly, we replaced the network's loss function with the Wasserstein loss with gradient penalty. This thesis reports and analyzes experimental results on multiple datasets, as well as the influence of hyperparameters on the model. The experiments demonstrated that our model outperformed most current models on real-world data.
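For reference, the gradient-penalty Wasserstein objective follows the published WGAN-GP formulation. Below is a minimal PyTorch sketch of that loss, not the thesis's code; `critic` stands in for the multi-kernel convolutional discriminator described above.

```python
import torch

def gradient_penalty(critic, real: torch.Tensor, fake: torch.Tensor,
                     lambda_gp: float = 10.0) -> torch.Tensor:
    # WGAN-GP: push the critic's gradient norm toward 1 on random
    # interpolations between real and generated samples.
    batch = real.size(0)
    eps = torch.rand(batch, *([1] * (real.dim() - 1)), device=real.device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.reshape(batch, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1.0) ** 2).mean()

def critic_loss(critic, real, fake):
    # The critic minimizes E[D(fake)] - E[D(real)], regularized by the penalty.
    return critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)

def generator_loss(critic, fake):
    # The generator minimizes -E[D(fake)], pushing fakes toward real scores.
    return -critic(fake).mean()
```

Enforcing a unit gradient norm on interpolated samples keeps the critic approximately 1-Lipschitz, which is what makes its output a usable estimate of the Wasserstein distance.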
(2) We improved TILGAN for unconditional text generation by refactoring the generator. In short, we replaced the linear and batch-normalization (BN) layers with Multi-head Self-Attention to give the generator stronger text generation capabilities. Our model consists of three components: a Transformer autoencoder, a Multi-head Self-Attention-based generator, and a linear discriminator. In the Transformer autoencoder, the encoder encodes the distribution of real samples, while the decoder decodes real or generated sentence vectors into text. The loss functions for the autoencoder and the GAN are cross-entropy and KL divergence, respectively. On the MSCOCO and EMNLP WMT News datasets, the proposed model achieved higher BLEU scores than TILGAN. Our ablation experiments also demonstrate the effectiveness of the proposed generator network for unconditional text generation.
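As a sketch of the refactored generator, the block below maps a noise vector to a sentence vector through multi-head self-attention instead of stacked linear + BN layers. All dimensions and layer names here are assumptions for illustration; the thesis specifies only the attention-based design.

```python
import torch
import torch.nn as nn

class AttentionGenerator(nn.Module):
    # Maps a noise vector to a sentence embedding using multi-head
    # self-attention (hypothetical layout, not the thesis's exact network).
    def __init__(self, noise_dim: int = 128, model_dim: int = 256,
                 num_heads: int = 4, num_tokens: int = 8):
        super().__init__()
        self.num_tokens = num_tokens
        # Expand the noise into a short sequence of "latent tokens".
        self.expand = nn.Linear(noise_dim, num_tokens * model_dim)
        self.attn = nn.MultiheadAttention(model_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(model_dim)
        self.to_sentence = nn.Linear(model_dim, model_dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (batch, noise_dim)
        x = self.expand(z).view(z.size(0), self.num_tokens, -1)
        attn_out, _ = self.attn(x, x, x)        # self-attention over latent tokens
        x = self.norm(x + attn_out)             # residual connection + layer norm
        return self.to_sentence(x.mean(dim=1))  # pooled sentence vector

# Usage: generate a batch of 16 sentence vectors for the decoder/discriminator.
z = torch.randn(16, 128)
sentence_vecs = AttentionGenerator()(z)   # shape: (16, 256)
```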
Date of Issue
2022-09-20
Notes
The abstract, the examination summary, and the full text of the thesis are publicly available.
Full Text File
Language
eng
Author Version Flag
Includes the full text of the doctoral dissertation
MEXT Report Number
甲第3655号
Diploma Number
甲先第439号
Date of Degree Conferral
2022-09-20
Degree Name
Doctor of Engineering
Degree-Granting Institution
Tokushima University