Vector quantization (VQ) is a signal processing and data compression technique that maps high-dimensional, continuous-valued vectors to discrete indices from a finite, learned set of prototype vectors called a codebook. This process discretizes the latent space, creating a compressed representation where each input is approximated by its nearest codebook entry. In machine learning, it is a foundational component of models like the Vector-Quantized Variational Autoencoder (VQ-VAE), enabling efficient representation learning for images, audio, and text by learning a structured, discrete latent space.
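The core quantization step described above, mapping each input vector to the index of its nearest codebook entry, can be sketched in a few lines of NumPy. This is a minimal illustration, not a full VQ-VAE: the codebook here is hand-chosen for clarity, whereas in practice it would be learned (e.g. via k-means or gradient-based updates), and the `quantize` function is a hypothetical helper name.

```python
import numpy as np

# Hypothetical codebook of K = 4 prototype vectors of dimension 2.
# In a real system this would be learned from data, not fixed by hand.
codebook = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
])

def quantize(x, codebook):
    """Map each row of x to the index of its nearest codebook entry
    (squared Euclidean distance) and return the quantized vectors."""
    # Pairwise squared distances between inputs (N, D) and codebook (K, D).
    dists = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)     # discrete codes, shape (N,)
    return indices, codebook[indices]  # indices and nearest-entry reconstruction

inputs = np.array([[0.1, -0.2], [0.9, 1.1]])
idx, recon = quantize(inputs, codebook)
print(idx)    # → [0 3]
print(recon)  # → [[0. 0.], [1. 1.]]
```

Each continuous input is thus replaced by a small integer index, which is what makes the representation compressed and discrete; storing or transmitting the index requires only log2(K) bits per vector.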
