Publications

Neural Sequence Model Training via α-divergence Minimization

Workshop on Learning to Generate Natural Language, ICML 2017

By: Sotetsu Koyamada, Yuta Kikuchi, Atsunori Kanemura, Shin-ichi Maeda, Shin Ishii

Abstract

We propose a new training method for neural sequence models in which the objective function is defined by an α-divergence. We demonstrate that this objective generalizes the maximum-likelihood (ML)-based and reinforcement learning (RL)-based objectives as special cases (ML corresponds to α→0 and RL to α→1). We also show that the gradient of the objective can be regarded as a mixture of the ML- and RL-based objective gradients. Experimental results on a machine translation task show that minimizing the objective with α>0 outperforms the α→0 case, which corresponds to ML-based methods.
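The abstract's key observation, that the gradient interpolates between ML- and RL-style gradients, can be illustrated with a short sketch. The following is a minimal sketch assuming PyTorch; the function name alpha_mixture_loss, its arguments, and the simple linear interpolation between the two loss terms are illustrative assumptions, not the paper's exact α-divergence estimator.

    import torch
    import torch.nn.functional as F

    def alpha_mixture_loss(logits, reference, sampled, reward, alpha):
        # logits: (T, V) unnormalized scores from the model
        # reference: (T,) gold token ids; sampled: (T,) ids sampled from the model
        # reward: scalar task reward for the sampled sequence (e.g., sentence BLEU)
        log_probs = F.log_softmax(logits, dim=-1)  # (T, V)
        # ML term: negative log-likelihood of the reference tokens.
        ml_loss = -log_probs.gather(1, reference.unsqueeze(1)).sum()
        # RL term: REINFORCE surrogate whose gradient is -reward * grad log p(sampled).
        rl_loss = -reward * log_probs.gather(1, sampled.unsqueeze(1)).sum()
        # alpha -> 0 recovers the ML gradient, alpha -> 1 the RL gradient.
        return (1.0 - alpha) * ml_loss + alpha * rl_loss

    # Toy usage: 5 time steps, 100-token vocabulary.
    logits = torch.randn(5, 100, requires_grad=True)
    reference = torch.randint(100, (5,))
    sampled = torch.randint(100, (5,))
    loss = alpha_mixture_loss(logits, reference, sampled, reward=0.7, alpha=0.5)
    loss.backward()

With alpha = 0 this reduces to ordinary cross-entropy training on the reference, while with alpha = 1 only the reward-weighted REINFORCE term remains; intermediate values blend the two gradient signals.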
