Reducing the stability gap for continual learning at the edge with class balancing
Wei Wei, Matthias Hutsebaut-Buysse, Tom De Schepper, Kevin Mets
ESANN25, Bruges, Belgium, April 23-25, 2025.
Abstract: Continual learning (CL) at the edge requires a model to learn from sequentially arriving small batches of data. A naive online learning strategy fails due to catastrophic forgetting. Previous literature introduced 'latent replay' for CL at the edge, where inputs are transformed into latent representations using a pre-trained feature extractor. These stored latent representations are replayed, in combination with the new real inputs, to train the adaptive classification layers. This approach is prone to the stability gap problem: the accuracy on previously learned classes drops when a new class is learned and only recovers during subsequent training iterations. We hypothesize that this is caused by the class imbalance between the new class data from the new task and the old class data in the replay memory. We validate this hypothesis by applying two class balancing strategies in a latent replay-based CL method. Our empirical results demonstrate that class balancing provides a notable accuracy improvement and reduces the stability gap when using a latent replay-based CL method with a small replay memory size.
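To make the idea concrete, below is a minimal sketch (not the authors' implementation) of a class-balanced latent replay buffer in Python. The class name `LatentReplayBuffer` and the method `balanced_batch` are hypothetical, and the sketch assumes a frozen pre-trained feature extractor has already mapped inputs to latent vectors; it only illustrates one possible class balancing strategy, namely sampling an equal share of every stored class per replay batch.

```python
# Hypothetical sketch of class-balanced sampling from a latent replay buffer.
# Assumes inputs were already encoded into latent vectors by a frozen,
# pre-trained feature extractor; only the buffer logic is shown here.
import random
from collections import defaultdict

import numpy as np


class LatentReplayBuffer:
    """Stores latent representations per class, capped at `capacity` samples in total."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.per_class = defaultdict(list)  # class id -> list of latent vectors

    def add(self, latents, labels):
        """Add new latent vectors with their class labels, then enforce the capacity."""
        for z, y in zip(latents, labels):
            self.per_class[int(y)].append(np.asarray(z))
        self._shrink_to_capacity()

    def _shrink_to_capacity(self):
        # Keep storage roughly class-balanced: repeatedly drop a random sample
        # from the currently largest class until the buffer fits its capacity.
        while sum(len(v) for v in self.per_class.values()) > self.capacity:
            largest = max(self.per_class, key=lambda c: len(self.per_class[c]))
            pool = self.per_class[largest]
            pool.pop(random.randrange(len(pool)))

    def balanced_batch(self, batch_size):
        """Sample (latents, labels) with an equal share for every stored class."""
        classes = list(self.per_class)
        share = max(1, batch_size // max(1, len(classes)))
        latents, labels = [], []
        for c in classes:
            pool = self.per_class[c]
            if not pool:
                continue
            idx = np.random.choice(len(pool), size=min(share, len(pool)), replace=False)
            latents.extend(pool[i] for i in idx)
            labels.extend([c] * len(idx))
        return np.stack(latents), np.array(labels)
```

At each training step, the balanced replay batch would be concatenated with the latents of the incoming new-class mini-batch before updating the classification layers, so that old and new classes contribute comparable numbers of samples to every gradient update.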