A Variational Autoencoder (VAE) is a generative neural network that learns a continuous, structured latent space by encoding each input as a probability distribution rather than a single point. It consists of an encoder that maps data to the parameters (mean and variance) of a Gaussian distribution, a latent vector sampled from that distribution via the reparameterization trick, and a decoder that reconstructs the input from the sample. Training minimizes a reconstruction loss plus a Kullback-Leibler (KL) divergence term that pulls each encoded distribution toward a standard normal prior; this regularization keeps the latent space smooth and well-organized, enabling meaningful interpolation and data generation.
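The pieces described above (encoder producing Gaussian parameters, sampling from that distribution, decoder, and the combined reconstruction-plus-KL loss) can be sketched as a single forward pass in NumPy. This is a minimal illustration, not a trainable implementation: the weight matrices, layer sizes, and single-layer networks are hypothetical stand-ins for a real trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: input, hidden, and latent dimensions
x_dim, h_dim, z_dim = 8, 16, 2

# Randomly initialized weights stand in for trained parameters
W_enc = rng.normal(0, 0.1, (x_dim, h_dim))
W_mu = rng.normal(0, 0.1, (h_dim, z_dim))
W_logvar = rng.normal(0, 0.1, (h_dim, z_dim))
W_dec = rng.normal(0, 0.1, (z_dim, x_dim))

def encode(x):
    # Encoder maps the input to the mean and log-variance of q(z|x)
    h = np.tanh(x @ W_enc)
    return h @ W_mu, h @ W_logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, logvar
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    # Decoder reconstructs the input; sigmoid for binary-valued data
    return 1.0 / (1.0 + np.exp(-(z @ W_dec)))

def vae_loss(x, x_hat, mu, logvar):
    # Bernoulli reconstruction term plus KL(q(z|x) || N(0, I));
    # the KL term is what regularizes the latent space
    recon = -np.sum(x * np.log(x_hat + 1e-8)
                    + (1 - x) * np.log(1 - x_hat + 1e-8))
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

x = rng.integers(0, 2, (1, x_dim)).astype(float)  # one binary input vector
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)
print(vae_loss(x, x_hat, mu, logvar))
```

Note that the network samples z instead of using mu directly; the reparameterization keeps the sampling step differentiable so the encoder can be trained by backpropagation, and the closed-form KL term above is the standard expression for a diagonal Gaussian against a standard normal prior.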
