Definition
Continual Learning
Continual learning is about learning new data distributions incrementally without re-training the entire model, which is far more resource-efficient than re-training from scratch on the union of old and new data whenever the distribution shifts.
Formally, an incoming batch of training samples belonging to a task can be represented as $\mathcal{D}_{t,b} = \{\mathcal{X}_{t,b}, \mathcal{Y}_{t,b}\}$, where $\mathcal{X}_{t,b}$ is the input data, $\mathcal{Y}_{t,b}$ are the data labels, $t \in \{1, \dots, T\}$ is the task identity and $b \in \mathcal{B}_t$ is the batch index. Here, “task” refers to training samples following the distribution $\mathbb{D}_t := p(\mathcal{X}_t, \mathcal{Y}_t)$. Under realistic constraints, however, the data labels $\mathcal{Y}_{t,b}$ and the task identity $t$ might not always be available.
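As a concrete illustration, the stream can be modelled as a sequence of such batches. A minimal sketch in Python/PyTorch (the names `TaskBatch` and `stream` are illustrative, not from any library); it makes the fields of $\mathcal{D}_{t,b}$ explicit and marks the labels and task identity as optional, matching the realistic-constraints caveat above:

```python
from dataclasses import dataclass
from typing import Iterator, Optional

import torch

@dataclass
class TaskBatch:
    """One incoming batch D_{t,b} = {X_{t,b}, Y_{t,b}} from the stream."""
    x: torch.Tensor               # input data X_{t,b}
    y: Optional[torch.Tensor]     # data labels Y_{t,b}; may be unavailable
    task_id: Optional[int]        # task identity t; may be unavailable
    batch_index: int              # batch index b

def stream(task_means, batches_per_task: int = 3,
           batch_size: int = 32) -> Iterator[TaskBatch]:
    """Yield batches task by task; each task t follows its own p(X_t, Y_t),
    simulated here as a toy Gaussian shifted by a per-task mean."""
    for t, mean in enumerate(task_means):
        for b in range(batches_per_task):
            x = torch.randn(batch_size, 16) + mean
            y = torch.randint(0, 2, (batch_size,))
            yield TaskBatch(x=x, y=y, task_id=t, batch_index=b)
```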
Catastrophic Forgetting
Definition
Catastrophic Forgetting
Catastrophic forgetting is a problem in continual learning where learning from dynamic data distributions results in a reduced ability to capture the old ones. 1
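Forgetting is usually quantified by re-evaluating an old task after training on a new one. A minimal sketch, assuming standard `(x, y)` evaluation loaders; `accuracy` is a hypothetical helper, not from any particular benchmark suite:

```python
import torch

@torch.no_grad()
def accuracy(model, loader) -> float:
    """Fraction of correctly classified examples in an (x, y) loader."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

# acc_before = accuracy(model, loader_task_A)  # right after training on task A
# ... train the same model on task B ...
# acc_after = accuracy(model, loader_task_A)   # re-evaluate task A
# forgetting = acc_before - acc_after          # large positive value = forgetting
```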
Plasticity vs. Memory Stability
Plasticity, the ability to adapt to new data distributions, and memory stability, the ability to retain old data distributions, largely compromise each other. 1
Typical Scenarios
Approaches
Regularisation-based Approach
Regularisation-based Continual Learning
Regularisation-based approaches add explicit regularisation terms that reference the old (frozen) model, penalising updates to parameters or predictions that were important for previous tasks.
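As one concrete instance, an EWC-style penalty anchors important parameters near their old-task values. A minimal sketch, assuming a diagonal importance estimate `fisher` computed on old-task data (the helper and variable names are illustrative):

```python
import torch

def ewc_penalty(model, old_params, fisher) -> torch.Tensor:
    """Quadratic penalty anchoring each parameter to its old-task value,
    weighted by an importance estimate (e.g., a diagonal Fisher)."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return penalty

# After finishing a task, snapshot the reference model:
# old_params = {n: p.detach().clone() for n, p in model.named_parameters()}
# fisher     = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
# On the next task, train with:
# loss = task_loss + lam * ewc_penalty(model, old_params, fisher)
```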
Replay-based Approach
Replay-based Continual Learning
Replay-based approaches approximate and recover old data distributions. 1
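A common concrete realisation is a small episodic memory filled by reservoir sampling, whose contents are replayed alongside each new batch. A minimal sketch (the class name and interface are illustrative):

```python
import random
import torch

class ReservoirBuffer:
    """Fixed-capacity memory; reservoir sampling keeps an approximately
    uniform sample of everything seen in the stream so far."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items: list[tuple[torch.Tensor, torch.Tensor]] = []
        self.seen = 0

    def add(self, x: torch.Tensor, y: torch.Tensor) -> None:
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = (x, y)   # replace a stored example at random

    def sample(self, k: int):
        batch = random.sample(self.items, min(k, len(self.items)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

# Training step: mix replayed old examples with the current batch, e.g.
# x_old, y_old = buffer.sample(len(y_new))
# loss = criterion(model(x_new), y_new) + criterion(model(x_old), y_old)
```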
Optimisation-based Approach
Optimisation-based Continual Learning
Optimisation-based approaches explicitly design and manipulate optimisation programs. 1
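One well-known instance manipulates gradients directly: an A-GEM-style projection alters the new-task gradient whenever it conflicts with a reference gradient computed on replayed old-task data. A minimal sketch of the projection step, with gradients flattened into single vectors:

```python
import torch

def agem_project(g: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
    """If the current gradient g points against the old-task reference
    gradient g_ref, remove the conflicting component so the update does
    not (to first order) increase the old-task loss."""
    dot = torch.dot(g, g_ref)
    if dot < 0:
        g = g - (dot / torch.dot(g_ref, g_ref)) * g_ref
    return g

# g     = torch.cat([p.grad.flatten() for p in model.parameters()])
# g_ref = ...  # same flattening, from a backward pass on a memory batch
# g_new = agem_project(g, g_ref)  # then scatter g_new back into each p.grad
```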
Representation-based Approach
Representation-based Continual Learning
Representation-based approaches create robust and well-distributed representations. 1
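One representation-based recipe keeps a strong pre-trained feature extractor frozen, so the representation itself cannot be overwritten, and trains only a lightweight head on top. A minimal sketch, assuming a pre-trained backbone is available (the class name is illustrative):

```python
import torch
import torch.nn as nn

class FrozenBackboneClassifier(nn.Module):
    """Freeze a (pre-trained) feature extractor so its representation stays
    stable across tasks; only the small classification head is trained."""
    def __init__(self, backbone: nn.Module, feat_dim: int, n_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False          # representation is never updated
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():                # no gradients through the backbone
            feats = self.backbone(x)
        return self.head(feats)
```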
Architecture-based Approach
Architecture-based Continual Learning
Architecture-based approaches construct task-adaptive parameters with a properly designed architecture. 1
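A simple concrete example of this parameter isolation is a shared trunk with one output head per task: each task receives its own task-adaptive parameters, so later tasks cannot overwrite earlier heads (this variant requires the task identity at test time). A minimal sketch:

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    """Shared trunk plus one dedicated output head per task; heads from
    earlier tasks are untouched when later tasks are trained."""
    def __init__(self, in_dim: int, hidden: int,
                 classes_per_task: int, n_tasks: int):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, classes_per_task) for _ in range(n_tasks)]
        )

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.heads[task_id](self.trunk(x))
```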