Hyperparameters are the external configuration variables that govern the training process itself, such as the learning rate, network depth, or regularization strength. Unlike model parameters, which are learned from data, hyperparameters are set before training begins. Hyperparameter optimization (HPO) treats model performance as a black-box objective function to be maximized, where each evaluation involves training and validating a model with a candidate hyperparameter configuration. This process is fundamental to Automated Machine Learning (AutoML) and is a prerequisite capability for systems pursuing Recursive Self-Improvement (RSI), as it automates a core aspect of model design.
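The black-box view can be made concrete with a minimal random-search sketch. The objective below is a hypothetical stand-in for "train a model and return its validation score"; in practice each call would launch a full training run. All names (`objective`, `random_search`, the search ranges) are illustrative assumptions, not a reference to any particular library.

```python
import random

def objective(lr, depth):
    # Hypothetical stand-in for training a model with these
    # hyperparameters and returning validation performance.
    # A real objective would be far more expensive to evaluate.
    return -((lr - 0.1) ** 2) - 0.01 * (depth - 4) ** 2

def random_search(n_trials=200, seed=0):
    # Sample candidate configurations at random, keep the best.
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {
            "lr": 10 ** rng.uniform(-4, 0),   # log-uniform learning rate
            "depth": rng.randint(1, 8),       # integer network depth
        }
        score = objective(cfg["lr"], cfg["depth"])
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

best_cfg, best_score = random_search()
print(best_cfg, best_score)
```

Random search is only the simplest strategy; Bayesian optimization and evolutionary methods spend the same evaluation budget more efficiently by modeling or exploiting the results of earlier trials.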
