ID 117612
Title Alternative
A Study on Generative Adversarial Networks for Unconditional Text Generation (無条件のテキスト生成のための敵対的生成ネットワークに関する研究)
Author
Jiao, Ziyun (Tokushima University)
Keywords
unconditional text generation
GAN
NLG
Transformer
Wasserstein distance
Content Type
Thesis or Dissertation
Description
Language and writing, as natural products of civilization, play an irreplaceable role in human communication. As a branch of natural language processing (NLP), natural language generation (NLG) has received extensive attention since its inception. Together with natural language understanding (NLU), NLG is one of the two most essential components of human communication, and in modern human-computer interaction it is a core functional requirement of machines. As an automated process that generates human-readable text from input information with specific interaction goals, NLG employs different inputs for different tasks. From the perspective of input information, NLG can be classified as text-to-text, data-to-text, multimodality-to-text, or zero-to-text, the last also known as unconditional text generation. Because no input is provided in unconditional text generation, the model must generate natural language text freely. The Generative Adversarial Network (GAN) for text is a standard model for this task.
Initially proposed in 2014, GANs have been widely used in computer vision (CV) tasks. However, the development of GANs for text generation has progressed slowly, for two main reasons. First, the guidance the discriminator passes to the generator is generally extremely weak. Second, because text is discrete, gradients cannot be propagated from the discriminator back through the token-sampling step to the generator, which prevents normal gradient-based training. In response to these issues, the key contributions of this thesis are summarized below.
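To make the second issue concrete: sampling a discrete token via argmax is not differentiable, so the discriminator's gradient cannot flow back into the generator. A common workaround, used by RelGAN (the basis of contribution (1) below), is the Gumbel-Softmax relaxation, sketched here as background rather than as the thesis's own formulation:

y_i = \frac{\exp((\log \pi_i + g_i)/\tau)}{\sum_j \exp((\log \pi_j + g_j)/\tau)}, \qquad g_i \sim \mathrm{Gumbel}(0, 1)

where \pi is the generator's token distribution and the temperature \tau controls how closely the soft sample y approximates a one-hot vector.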
(1) Compared with the conventional GAN loss, the Wasserstein distance provides more informative gradients to the generator, even when the real and generated distributions barely overlap. We proposed a new architecture based on RelGAN and WGAN-GP, dubbed WRGAN. The WRGAN discriminator uses 1-dimensional convolutions with multiple kernel sizes together with residual modules, and the loss function is adjusted accordingly to the Wasserstein loss with gradient penalty. This thesis reports and analyzes experimental results on multiple datasets, as well as the influence of hyperparameters on the model. The experiments demonstrated that our model outperformed most current models on real-world data.
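For reference, the WGAN-GP discriminator loss that this contribution builds on (Gulrajani et al., 2017) is

L_D = \mathbb{E}_{\tilde{x} \sim P_g}[D(\tilde{x})] - \mathbb{E}_{x \sim P_r}[D(x)] + \lambda\, \mathbb{E}_{\hat{x} \sim P_{\hat{x}}}\big[(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1)^2\big]

where \hat{x} is sampled uniformly along lines between real and generated samples and \lambda weights the gradient penalty. A minimal PyTorch sketch of the penalty term follows; the function and variable names are illustrative assumptions, not the thesis's actual code:

import torch

def gradient_penalty(discriminator, real, fake, lambda_gp=10.0):
    # Interpolate uniformly between real and generated samples.
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interpolated = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = discriminator(interpolated)
    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores, inputs=interpolated,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize deviations of the gradient norm from 1.
    return lambda_gp * ((grads.norm(2, dim=1) - 1) ** 2).mean()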
(2) We improved TILGAN for unconditional text generation by refactoring the generator: we replaced its linear and batch-normalization (BN) layers with Multi-head Self-Attention to endow the generator with stronger text generation capabilities. Our model consists of three components: a Transformer autoencoder, a Multi-head Self-Attention-based generator, and a linear discriminator. In the Transformer autoencoder, the encoder encodes the distribution of real samples, while the decoder decodes real or generated sentence vectors into text. The autoencoder is trained with a cross-entropy loss and the GAN with a KL-divergence loss. On the MSCOCO and EMNLP WMT News datasets, the proposed model achieved higher BLEU scores than TILGAN. Our ablation experiments also demonstrate the effectiveness of the proposed generator network for unconditional text generation.
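As an illustration of the refactored generator, here is a minimal PyTorch sketch of a Multi-head Self-Attention block that maps a noise vector to a sentence vector for the Transformer decoder; all layer sizes and names are assumptions for illustration, not the thesis's actual architecture:

import torch
import torch.nn as nn

class AttentionGenerator(nn.Module):
    def __init__(self, noise_dim=100, latent_dim=256, seq_len=16, num_heads=4):
        super().__init__()
        self.seq_len, self.latent_dim = seq_len, latent_dim
        # Project the noise vector to a short sequence of latent tokens.
        self.proj = nn.Linear(noise_dim, seq_len * latent_dim)
        # Self-attention stands in for the linear + BN stack of the original generator.
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(latent_dim)
        self.out = nn.Linear(latent_dim, latent_dim)

    def forward(self, z):
        h = self.proj(z).view(-1, self.seq_len, self.latent_dim)
        attn_out, _ = self.attn(h, h, h)   # self-attention: Q = K = V = h
        h = self.norm(h + attn_out)        # residual connection + layer norm
        return self.out(h.mean(dim=1))     # pool to a single sentence vector

# Usage: fake_vectors = AttentionGenerator()(torch.randn(32, 100))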
Published Date
2022-09-20
Remark
The abstract, examination summary, and full text of the thesis are publicly available.
FullText File
language
eng
TextVersion
ETD
MEXT report number
甲第3655号
Diploma Number
甲先第439号
Granted Date
2022-09-20
Degree Name
Doctor of Engineering
Grantor
Tokushima University