Semi-supervised generation with cluster-aware generative models

By Lars Maaløe, Marco Fraccaro, Ole Winther

Deep generative models trained with large amounts of unlabelled data have proven to be powerful within the domain of unsupervised learning.

Many real-life data sets contain a small number of labelled data points that are typically disregarded when training generative models.

We propose the Cluster-aware Generative Model, which uses unlabelled data to infer a latent representation that models the natural clustering of the data, and additional labelled data points to refine this clustering.
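
The abstract does not spell out the architecture, so below is a minimal sketch of the general idea: a VAE with a categorical cluster variable y that is marginalised out for unlabelled data and clamped for labelled data. This follows the standard M2-style semi-supervised objective of Kingma et al. (2014), not necessarily the exact CaGeM networks; all layer sizes, names, and design choices here are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusterAwareVAE(nn.Module):
    """Generic semi-supervised VAE with a categorical cluster variable y.

    A sketch in the spirit of the paper, NOT the exact CaGeM architecture;
    the M2-style objective and all dimensions are assumptions.
    """

    def __init__(self, x_dim=784, z_dim=32, n_clusters=10, hidden=256):
        super().__init__()
        self.n_clusters = n_clusters
        # q(y|x): inference network over cluster assignments.
        self.classifier = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_clusters))
        # q(z|x,y): Gaussian inference network conditioned on the cluster.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + n_clusters, hidden), nn.ReLU())
        self.enc_mu = nn.Linear(hidden, z_dim)
        self.enc_logvar = nn.Linear(hidden, z_dim)
        # p(x|z,y): Bernoulli decoder over binarised pixels.
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + n_clusters, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim))

    def neg_elbo_labelled(self, x, y_onehot):
        """-ELBO(x, y) for a data point whose cluster/label is observed."""
        h = self.encoder(torch.cat([x, y_onehot], dim=-1))
        mu, logvar = self.enc_mu(h), self.enc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        logits = self.decoder(torch.cat([z, y_onehot], dim=-1))
        rec = F.binary_cross_entropy_with_logits(
            logits, x, reduction='none').sum(-1)               # -log p(x|z,y)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return rec + kl + math.log(self.n_clusters)            # uniform prior p(y)

    def neg_elbo_unlabelled(self, x):
        """Marginalise y: sum_y q(y|x) * (-ELBO(x, y)) - H(q(y|x))."""
        q_y = F.softmax(self.classifier(x), dim=-1)
        per_cluster = []
        for k in range(self.n_clusters):
            y = F.one_hot(torch.full((x.size(0),), k, dtype=torch.long),
                          self.n_clusters).float()
            per_cluster.append(self.neg_elbo_labelled(x, y))
        per_cluster = torch.stack(per_cluster, dim=-1)   # (batch, n_clusters)
        entropy = -(q_y * (q_y + 1e-8).log()).sum(-1)
        return (q_y * per_cluster).sum(-1) - entropy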

The generative performance of the model improves significantly when labelled information is exploited, obtaining a log-likelihood of -79.38 nats on permutation-invariant MNIST while also achieving competitive semi-supervised classification accuracies. The model can also be trained fully unsupervised and still improves log-likelihood performance relative to related methods.
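
For context on the -79.38 nats figure: marginal log-likelihoods of VAE-family models are commonly reported via an importance-weighted bound (Burda et al., 2016). The sketch below shows that estimator under assumed hooks; `sample_posterior` and `log_joint` are hypothetical method names, and the paper's exact evaluation protocol (number of samples, binarisation) is not reproduced here.

```python
import math

import torch


def estimate_log_px(model, x, n_samples=5000):
    """Importance-weighted estimate of log p(x) in nats.

    `model.sample_posterior(x)` is assumed to return (z ~ q(z|x), log q(z|x))
    and `model.log_joint(x, z)` to return log p(x, z); both are hypothetical
    hooks, not an API from the paper.
    """
    log_w = []
    for _ in range(n_samples):
        z, log_q = model.sample_posterior(x)
        log_w.append(model.log_joint(x, z) - log_q)
    log_w = torch.stack(log_w, dim=0)  # (S, batch)
    # log p(x) >= logsumexp_s(log w_s) - log S; the bound tightens as S grows.
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)
```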