Hyperparameter tuning is the process of selecting the best settings for the knobs that control how a machine-learning model learns. Unlike model weights, which are learned during training, hyperparameters are set before training begins. Examples include the learning rate, tree depth, number of hidden units, and regularization strength. The goal is to find the combination that yields the best performance on unseen data.

Common approaches include grid search (trying every combination on a predefined grid), random search (sampling combinations at random), and Bayesian optimization (using past results to guide the next trial).

Proper tuning can turn a mediocre model into a competitive one, because many algorithms are highly sensitive to these settings. A well-tuned random forest might reach 90% accuracy on a diagnosis task where default parameters produce 70%. In finance, healthcare, and autonomous driving, those percentage points translate directly into cost savings, better outcomes, and safer operation.
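To make the grid-versus-random contrast concrete, here is a minimal self-contained sketch in plain Python. The quadratic `validation_loss` is a purely hypothetical stand-in for the expensive step of training a model and scoring it on held-out data; the hyperparameter ranges are illustrative, not recommendations.

```python
import random

def validation_loss(learning_rate, reg_strength):
    """Stand-in for training a model and measuring validation loss.
    (Hypothetical surface with its minimum at lr=0.1, reg=0.01;
    a real run would fit and evaluate an actual model here.)"""
    return (learning_rate - 0.1) ** 2 + (reg_strength - 0.01) ** 2

def grid_search(lr_values, reg_values):
    # Grid search: evaluate every combination on a fixed grid.
    return min(
        ((lr, rg) for lr in lr_values for rg in reg_values),
        key=lambda params: validation_loss(*params),
    )

def random_search(n_trials, seed=0):
    # Random search: sample each hyperparameter from a continuous
    # range instead of restricting trials to grid points.
    rng = random.Random(seed)
    return min(
        ((rng.uniform(0.001, 0.5), rng.uniform(0.0, 0.1))
         for _ in range(n_trials)),
        key=lambda params: validation_loss(*params),
    )

grid_best = grid_search([0.01, 0.1, 0.3], [0.0, 0.01, 0.1])
rand_best = random_search(n_trials=9)
```

Both strategies spend nine evaluations here, but random search is not limited to the grid's three values per axis, which is why it often wins when only a few hyperparameters really matter. Bayesian optimization goes further by fitting a model to past (hyperparameters, loss) pairs and proposing the next trial where improvement looks most likely.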