A Vector-Quantized Variational Autoencoder (VQ-VAE) is a variant of the variational autoencoder (VAE) that replaces the standard continuous latent distribution with a discrete codebook learned via vector quantization. The encoder outputs a continuous vector, which is mapped to its nearest entry in this codebook; the resulting discrete code is passed to the decoder for reconstruction. Because the nearest-neighbor lookup is non-differentiable, training typically uses a straight-through estimator, copying gradients from the decoder's input back to the encoder's output. This discrete bottleneck forces the model to learn a compressed, structured latent representation, making it well suited to learning compact discrete encodings of images, audio, or text.
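The quantization step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full VQ-VAE: the codebook here is random rather than learned, and the array shapes (a batch of flat latent vectors) are an assumption for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: K embedding vectors of dimension D.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook entry
    (by Euclidean distance), returning the discrete indices and the
    quantized vectors that would be fed to the decoder."""
    # Pairwise squared distances between encoder outputs and codebook rows:
    # shape (batch, K).
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # discrete codes, shape (batch,)
    z_q = codebook[indices]          # quantized latents, shape (batch, D)
    return indices, z_q

# A batch of 3 continuous encoder outputs.
z_e = rng.normal(size=(3, D))
indices, z_q = quantize(z_e, codebook)
print(indices.shape, z_q.shape)  # (3,) (3, 4)
```

In a real implementation the codebook is trained jointly with the encoder and decoder (via a codebook loss and a commitment loss), and the gradient of the argmin is bypassed with the straight-through trick.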
