
Autoencoder

An autoencoder is a neural network that learns to reconstruct its own input through a bottleneck. Because the bottleneck forces the network to compress the input into a smaller representation, it works as a dimensionality reduction technique.

Since it doesn’t require labeled data (the input is its own target), it’s an unsupervised machine learning method.

A linear autoencoder (one without activation functions), trained with squared-error loss, is roughly equivalent to PCA: it learns the same subspace as the top principal components, though not necessarily the components themselves.
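
Here is a minimal sketch in PyTorch. The 784-dimensional input (a flattened MNIST-sized vector) and the 32-dimensional bottleneck are arbitrary example choices, not anything prescribed above.

import torch
import torch.nn as nn

# A minimal autoencoder: compress 784-dim inputs down to 32 dims, then reconstruct.
# The 784/32 sizes are illustrative choices.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        z = self.encoder(x)         # compressed representation (the bottleneck)
        return self.decoder(z)      # reconstruction of the input

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)             # stand-in batch; no labels needed (unsupervised)
x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x)   # the target is the input itself
loss.backward()
optimizer.step()

The only training signal is the reconstruction error against the input itself, which is why no labels are involved.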

Sparse autoencoder

A sparse autoencoder, or SAE, is an autoencoder with a sparsity term added to the loss. Usually this is an L1 penalty on the hidden activations, which encourages a representation where most values are zero.

When building a sparse autoencoder, instead of picking a hidden layer size that’s smaller than the input dimension, you typically pick a larger one. The bottleneck comes from the sparsity constraint rather than from the hidden layer’s size.
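
A sketch of the modified loss, continuing the same hypothetical PyTorch setup. The 4096-dim hidden layer (wider than the 784-dim input) and the 1e-3 sparsity coefficient are illustrative values.

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    # Hidden layer is *wider* than the input; sparsity, not size, forms the bottleneck.
    def __init__(self, input_dim=784, hidden_dim=4096):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_coeff = 1e-3                       # weight of the sparsity term (illustrative)

x = torch.rand(64, 784)
x_hat, z = model(x)
recon_loss = nn.functional.mse_loss(x_hat, x)
sparsity_loss = z.abs().mean()        # L1 penalty on hidden activations -> mostly zeros
loss = recon_loss + l1_coeff * sparsity_loss
loss.backward()
optimizer.step()

The L1 term pushes most entries of z toward zero, so even though the hidden layer is wider than the input, only a few units are active for any given example.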
