Semi-supervised generation with cluster-aware generative models

Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning.
By Lars Maaløe, Marco Fraccaro, Ole Winther


Many real-life data sets contain a small number of labelled data points that are typically disregarded when training generative models.

We propose the Cluster-aware Generative Model, which uses unlabelled data to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering.

The generative performance of the model improves significantly when labelled information is exploited, obtaining a log-likelihood of −79.38 nats on permutation-invariant MNIST while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised and still improve the log-likelihood performance with respect to related methods.
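The split between unlabelled and labelled data in such semi-supervised generative models typically comes down to how the cluster variable is handled in the objective: for unlabelled points it is marginalised out, while for labelled points it simply indexes the relevant term. The sketch below illustrates that distinction with numpy; the per-cluster log-joint values are hypothetical placeholders standing in for what a trained decoder network would produce, not outputs of the paper's model.

```python
import numpy as np

def log_sum_exp(x, axis=-1):
    """Numerically stable log(sum(exp(x))) along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

# Hypothetical per-cluster log-joints log p(x, y=k) for one data point,
# over 3 clusters (illustrative numbers only).
log_joint = np.array([-85.0, -80.0, -95.0])

# Unlabelled objective: marginalise the cluster variable y out,
# log p(x) = log sum_k p(x, y=k)
unlabelled_ll = log_sum_exp(log_joint)

# Labelled objective: the observed cluster label (here y=1)
# simply indexes the joint, log p(x, y=1)
labelled_ll = log_joint[1]

# Marginalising sums non-negative probability mass over all clusters,
# so the marginal log-likelihood dominates any single labelled term.
assert unlabelled_ll >= labelled_ll
```

In a full model both quantities would be evidence lower bounds computed by an inference network rather than exact log-joints, but the marginalise-versus-index structure of the loss is the same.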
Download

