Machine-Learning Optimisation

Definition

Regularisation

Regularisation is a technique used to prevent overfitting by penalising model complexity within the objective function. The objective is to identify a hypothesis that achieves a low empirical risk while maintaining the simplest possible structure. Formally, a regularisation term is added to the loss function, resulting in the structural risk:

$$J(w) = L(w) + \lambda \, \Omega(w)$$

where $\lambda \geq 0$ is the regularisation parameter that controls the tradeoff between fitting the training data and keeping the model parameters small, and $\Omega(w)$ measures the complexity of the parameter vector $w$.
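As a minimal sketch of the objective above, the following computes a squared-error empirical risk plus a penalty term (the function name and data are illustrative, not from the original text):

```python
import numpy as np

def structural_risk(w, X, y, lam, penalty="l2"):
    """Empirical risk (mean squared error) plus lam * penalty on w."""
    residuals = X @ w - y
    empirical = np.mean(residuals ** 2)      # L(w)
    if penalty == "l1":
        omega = np.sum(np.abs(w))            # ||w||_1
    else:
        omega = np.sum(w ** 2)               # ||w||_2^2
    return empirical + lam * omega

# A perfect fit still pays a complexity penalty when lam > 0.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([1.0, 1.0])
w = np.array([1.0, 1.0])
print(structural_risk(w, X, y, lam=0.1))  # empirical loss 0.0 + 0.1 * 2 = 0.2
```

Raising `lam` shifts the minimiser of this objective towards smaller weights, at the cost of a larger empirical risk.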

Penalisation Schemes

The choice of the penalty term $\Omega(w)$ determines the geometric properties of the parameter space and the resulting model characteristics.

L1 Regularisation (Lasso): Utilises the $\ell_1$ norm of the weights, $\Omega(w) = \lVert w \rVert_1 = \sum_i |w_i|$, which promotes sparsity and performs automatic feature selection by driving some weights exactly to zero.

L2 Regularisation (Ridge): Utilises the squared $\ell_2$ norm, $\Omega(w) = \lVert w \rVert_2^2 = \sum_i w_i^2$, which penalises large weights more heavily, shrinking all weights smoothly towards zero and improving model stability and generalisation.
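The contrast between the two schemes can be illustrated with the closed-form solutions that hold under an assumed orthonormal design: the lasso solution soft-thresholds the unregularised weights (clipping small ones exactly to zero), while ridge merely rescales them. This is a sketch under that simplifying assumption, not a general-purpose solver:

```python
import numpy as np

def soft_threshold(w_ols, lam):
    # L1 (lasso) effect: shrink each weight by lam and clip to exactly zero.
    return np.sign(w_ols) * np.maximum(np.abs(w_ols) - lam, 0.0)

def ridge_shrink(w_ols, lam):
    # L2 (ridge) effect: rescale every weight; none becomes exactly zero.
    return w_ols / (1.0 + lam)

w_ols = np.array([3.0, 0.5, -0.2])  # illustrative unregularised weights
print(soft_threshold(w_ols, 1.0))   # sparse: small weights become 0
print(ridge_shrink(w_ols, 1.0))     # dense: all weights shrunk, none 0
```

This is why lasso performs automatic feature selection while ridge keeps every feature with a reduced weight.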