
Continual Learning via Rehearsal Methods (Apprendimento Continuo mediante Metodi Rehearsal)

Abstract: Artificial Neural Networks (ANNs) have been established as the centrepiece of contemporary Artificial Intelligence, steadily raising the bar for what can be accomplished by computer programs thanks to their effectiveness and versatility. While they shine especially for their capability for generalisation, these systems impose the strict requirement that their training data be independent and identically distributed. In contrast with human intelligence - which seamlessly allows us to acquire knowledge continuously - ANNs forget previously acquired knowledge catastrophically whenever their training data distribution changes over time. Such a fundamental limitation prevents the development of intelligent systems capable of quick adaptation, crucially tying model updates to a cumbersome offline retraining procedure. Continual Learning (CL) is a rapidly growing area of machine learning whose aim is to counteract the catastrophic forgetting phenomenon in ANNs through purposefully designed approaches. Among these, a prominent role is played by Rehearsal-Based Methods (RBMs), which operate by storing a small portion of previously encountered data for later re-use, thus striking a favourable balance between efficacy and efficiency. This thesis encompasses the contributions to CL made by the candidate during his doctoral studies. Starting from a review of recent literature, it highlights the relevance of RBMs and shows that the decades-old Experience Replay baseline is competitive with current state-of-the-art approaches when carefully trained. Subsequently, this manuscript focuses on the proposal of novel RBMs, which expand on the basic replay formula by leveraging knowledge distillation ([X-]DER), implicit dynamic adaptation of network capacity (LiDER) and geometric regularisation of the model's latent space (CaSpeR).
Extensive experimental analyses highlight the merits of the proposed approaches, shedding light on the specific properties they confer on the in-training model. Finally, this thesis investigates the applicability of RBMs beyond the typical incremental classification setting. Namely, a novel CL experimental scenario is introduced to provide more realistic evaluations than common benchmarks in the literature; an investigation is presented concerning the viability of CL when limited supervision is available; and a thorough study is conducted on the interplay between pre-training and CL. As a result, architectures and best practices are introduced that bridge the gap between standard CL evaluations and real-world applications.
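The Experience Replay baseline discussed in the abstract rests on a fixed-size memory buffer, typically filled via reservoir sampling so that every example seen in the stream has an equal probability of being retained. The following is a minimal sketch of that buffer, not the thesis's implementation; the `ReservoirBuffer` name and its API are illustrative assumptions:

```python
import random


class ReservoirBuffer:
    """Fixed-size rehearsal memory filled via reservoir sampling.

    After observing n examples, each one is kept in the buffer with
    probability capacity / n, regardless of when it arrived.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.num_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Fill the buffer until capacity, then replace a random slot
        # with probability capacity / (num_seen + 1).
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            idx = self.rng.randint(0, self.num_seen)
            if idx < self.capacity:
                self.data[idx] = example
        self.num_seen += 1

    def sample(self, batch_size):
        # Draw a rehearsal batch (without replacement) from the memory.
        k = min(batch_size, len(self.data))
        return self.rng.sample(self.data, k)
```

During training, each incoming mini-batch from the current task would be interleaved with a batch drawn via `sample`, so the loss is computed jointly on new and stored examples; rehearsal-based variants such as DER additionally store the model's past logits alongside each example for distillation.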


Citation:

Boschini, Matteo. "Apprendimento Continuo mediante Metodi Rehearsal." 2023.

