Cross-Entropy Loss
machine-learning statistics optimisation
Definition
Cross-entropy loss (also called log loss) is a loss function that measures the performance of a classification model whose output is a probability between 0 and 1. Formally, for a binary classification task with true label $y \in \{0, 1\}$ and predicted probability $\hat{y} = P(y = 1)$, the loss is:

$$L(y, \hat{y}) = -\bigl[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\bigr]$$
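As a concrete illustration, here is a minimal NumPy sketch of the formula above; the clipping constant `eps` is an implementation detail added here for numerical stability, not part of the definition:

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Mean binary cross-entropy over a batch of predictions."""
    # Clip predictions away from exactly 0 and 1 so the logs stay finite.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# Confident correct predictions give a small loss; confident wrong ones a large loss.
y_true = np.array([1.0, 0.0, 1.0])
print(binary_cross_entropy(y_true, np.array([0.9, 0.1, 0.8])))  # ~0.14
print(binary_cross_entropy(y_true, np.array([0.1, 0.9, 0.2])))  # ~2.07
```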
Statistical Foundation
Likelihood Maximisation: Minimising the cross-entropy loss is mathematically equivalent to maximising the likelihood of the observed data under a Bernoulli distribution.
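To see the equivalence, note that under a Bernoulli model the probability of a label is $p(y \mid \hat{y}) = \hat{y}^{\,y}(1 - \hat{y})^{\,1 - y}$, so the negative log-likelihood of a single example is

$$-\log p(y \mid \hat{y}) = -\bigl[\, y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \,\bigr],$$

which is exactly the cross-entropy loss above; summed over independent examples, minimising the total loss therefore maximises the log-likelihood.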
Application in Deep Learning: It is the standard loss function for logistic regression and for neural networks with a sigmoid or softmax output layer. Unlike mean squared error, cross-entropy provides large gradients when the model is confidently wrong, leading to faster convergence during training.
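The gradient claim can be made concrete. With a sigmoid output $\hat{y} = \sigma(z)$, the gradient of the cross-entropy with respect to the logit $z$ is $\hat{y} - y$, while for (halved) squared error it is $(\hat{y} - y)\,\hat{y}(1 - \hat{y})$, which the sigmoid's derivative squashes toward zero exactly when the model is confident. A minimal NumPy sketch of the comparison (the example values are illustrative, not from the source):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A confidently wrong prediction: true label 1, large negative logit.
y, z = 1.0, -6.0
y_hat = sigmoid(z)  # ~0.0025

# Gradient of each loss w.r.t. the logit z.
grad_ce = y_hat - y                           # ~ -1.0: strong corrective signal
grad_mse = (y_hat - y) * y_hat * (1 - y_hat)  # ~ -0.0025: vanishingly small

print(grad_ce, grad_mse)
```

The cross-entropy gradient stays near its maximum magnitude for the confidently wrong prediction, whereas the squared-error gradient nearly vanishes, which is why training with MSE on sigmoid outputs can stall.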