Most existing Neural Machine Translation models use groups of characters or whole words as input and output units. We propose an alternative model with a hierarchical char2word encoder that takes individual characters both as input and output.
We first argue that this hierarchical representation in the character-level encoder reduces computational complexity, and show that it improves translation performance.
Secondly, by qualitatively studying attention plots from the decoder, we find that the model learns to compress common words into a single embedding, whereas rare words, such as names and places, are represented character by character.
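To make the idea concrete, below is a minimal PyTorch sketch of a hierarchical char2word encoder: a character-level RNN reads one character per step, its hidden state at each word boundary is taken as that word's embedding, and a word-level RNN runs over those compressed embeddings. All class names, layer sizes, and the boundary-mask convention are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class Char2WordEncoder(nn.Module):
    """Illustrative sketch of a hierarchical char2word encoder (not the
    paper's exact architecture): a char-level GRU whose hidden states at
    word boundaries feed a word-level GRU."""

    def __init__(self, num_chars, char_dim=64, char_hidden=256, word_hidden=512):
        super().__init__()
        self.char_embed = nn.Embedding(num_chars, char_dim)
        # Character-level RNN: one step per character.
        self.char_rnn = nn.GRU(char_dim, char_hidden, batch_first=True)
        # Word-level RNN: one step per compressed word embedding.
        self.word_rnn = nn.GRU(char_hidden, word_hidden, batch_first=True)

    def forward(self, char_ids, word_end_mask):
        # char_ids:      (batch, num_chars) integer character indices
        # word_end_mask: (batch, num_chars) 1.0 at the last character of
        #                each word, 0.0 elsewhere
        char_states, _ = self.char_rnn(self.char_embed(char_ids))
        # Keep only the hidden states at word boundaries; each acts as a
        # fixed-size embedding of the word it terminates. For simplicity,
        # this sketch assumes every sentence in the batch has the same
        # number of words.
        batch = char_ids.size(0)
        word_embeds = char_states[word_end_mask.bool()]
        word_embeds = word_embeds.view(batch, -1, char_states.size(-1))
        # The word-level RNN runs over far fewer steps than the
        # character-level one, which is where the reduction in
        # computational complexity comes from.
        word_states, _ = self.word_rnn(word_embeds)
        return word_states


# Usage: encode "hi there" (ASCII codes), with word boundaries marked at
# the last character of "hi" (index 1) and of "there" (index 7).
enc = Char2WordEncoder(num_chars=128)
chars = torch.tensor([[104, 105, 32, 116, 104, 101, 114, 101]])
mask = torch.tensor([[0, 1, 0, 0, 0, 0, 0, 1.0]])
out = enc(chars, mask)
print(out.shape)  # torch.Size([1, 2, 512]) -- one state per word
```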