Hash embeddings for efficient word representations

Learn more about hash embeddings, an efficient method for representing words in a continuous vector form.
By Dan Tito Svenstrup, Jonas Hansen, Ole Winther

Conference: Advances in Neural Information Processing Systems

A hash embedding may be seen as an interpolation between a standard word embedding and a word embedding created using a random hash function (the hashing trick).

In hash embeddings, each token is represented by k d-dimensional embedding vectors and a single k-dimensional weight vector. The final d-dimensional representation of the token is the product of the two. Rather than fitting a separate embedding vector for each token, the k component vectors are selected by the hashing trick from a shared pool of embedding vectors.
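The lookup can be sketched in a few lines of NumPy. The snippet below is only a minimal illustration of the idea: the hash function is a stand-in for a proper hash (e.g. MurmurHash), and the pool sizes and hyperparameter values (B, d, k, K) are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not the paper's settings):
#   B - size of the shared pool of component embedding vectors
#   d - embedding dimension
#   k - number of hash functions / component vectors per token
#   K - number of importance-weight buckets
B, d, k, K = 10_000, 20, 2, 1_000_000

rng = np.random.default_rng(0)
component_vectors = rng.normal(scale=0.1, size=(B, d))  # shared pool, trainable
importance_weights = np.ones((K, k))                    # per-token weights, trainable


def token_hash(token: str, seed: int, buckets: int) -> int:
    """Stand-in for a real hash function such as MurmurHash."""
    return hash((seed, token)) % buckets


def hash_embedding(token: str) -> np.ndarray:
    # 1. Hash the token k times into the shared pool of component vectors.
    components = np.stack(
        [component_vectors[token_hash(token, seed=i, buckets=B)] for i in range(k)]
    )                                                   # shape (k, d)
    # 2. Look up the token's k importance weights, here also via hashing,
    #    so no dictionary is needed at any point.
    weights = importance_weights[token_hash(token, seed=-1, buckets=K)]  # shape (k,)
    # 3. The final d-dimensional representation is the weighted sum of components.
    return weights @ components                         # shape (d,)


vec = hash_embedding("example")
print(vec.shape)  # (20,)
```

In a real model both the component vectors and the importance weights would be trained end-to-end together with the rest of the network.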

Our experiments show that hash embeddings can easily deal with huge vocabularies consisting of millions of tokens. When using a hash embedding, there is no need to create a dictionary before training or perform any kind of vocabulary pruning after training. We show that models trained using hash embeddings exhibit at least the same level of performance as models trained using regular embeddings across a wide range of tasks.

Furthermore, the number of parameters needed by such an embedding is only a fraction of what a regular embedding requires. Since standard embeddings and embeddings constructed using the hashing trick are just special cases of a hash embedding, hash embeddings can be considered an extension and improvement over the existing regular embedding types.
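As an illustrative back-of-the-envelope calculation (the numbers are assumptions, not figures reported in the paper): a standard embedding for a vocabulary of 1,000,000 tokens with d = 20 dimensions needs 1,000,000 × 20 = 20M parameters, while a hash embedding with a shared pool of B = 10,000 component vectors and k = 2 importance weights per token needs about 10,000 × 20 + 1,000,000 × 2 = 2.2M parameters, roughly a tenth. Hashing the importance weights into a fixed number of buckets, as in the sketch above, shrinks this further and removes the dependence on vocabulary size entirely.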

Download
