Neural machine translation with characters and hierarchical encoding

We propose a Neural Machine Translation model with a hierarchical char2word encoder that takes individual characters as input and output.
By Alexander Rosenberg Johansen, Jonas Meinertz Hansen, Elias Khazen Obeid, Casper Kaae Sønderby, Ole Winther

Most existing Neural Machine Translation models use groups of characters or whole words as input and output units. We propose a hierarchical char2word encoder model that takes individual characters both as input and output.

We first argue that this hierarchical representation of the character encoder reduces computational complexity, and we show that it improves translation performance.

Secondly, by qualitatively studying attention plots from the decoder, we find that the model learns to compress common words into a single embedding, whereas rare words, such as names and places, are represented character by character.
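
The page does not spell out the architecture, but the core idea of a char2word encoder can be sketched: a character-level RNN reads each word one character at a time, its final hidden state becomes that word's embedding, and a word-level RNN then encodes the sequence of word embeddings for the decoder to attend over. The PyTorch sketch below is illustrative only; the class name, GRU cells, and dimensions are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a hierarchical char2word encoder.
# Names, cell types, and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Char2WordEncoder(nn.Module):
    def __init__(self, n_chars, char_emb=64, char_hid=128, word_hid=256):
        super().__init__()
        self.char_embedding = nn.Embedding(n_chars, char_emb)
        # Character-level RNN: reads one word, character by character.
        self.char_rnn = nn.GRU(char_emb, char_hid, batch_first=True)
        # Word-level RNN: reads the resulting sequence of word embeddings.
        self.word_rnn = nn.GRU(char_hid, word_hid, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, n_words, n_chars_per_word), padded with 0s.
        b, w, c = char_ids.shape
        x = self.char_embedding(char_ids.view(b * w, c))   # (b*w, c, char_emb)
        _, h = self.char_rnn(x)                            # (1, b*w, char_hid)
        word_embs = h.squeeze(0).view(b, w, -1)            # one embedding per word
        word_states, _ = self.word_rnn(word_embs)          # (b, w, word_hid)
        return word_states                                 # context for the decoder

# Example: 2 sentences, 3 words each, up to 5 characters per word.
enc = Char2WordEncoder(n_chars=100)
chars = torch.randint(1, 100, (2, 3, 5))
print(enc(chars).shape)  # torch.Size([2, 3, 256])
```

Because the word-level RNN (and the decoder's attention) operates over one state per word rather than one per character, the sequence length the decoder must attend over shrinks roughly by the average word length, which is one way to see the claimed reduction in computational complexity.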
