|
# Temporary housing for papers on continual learning techniques, pending future deep-dives.
|
|
|
### 2025 |
|
- <a name="todo"></a> Prototype antithesis for biological few-shot class-incremental learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=bRqaHn3J5I)] |
|
- <a name="todo"></a> Coreset Selection via Reducible Loss in Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=mAztx8QO3B)][[code](https://github.com/RuilinTong/CSReL-Coreset-CL)] |
|
- <a name="todo"></a> LOIRE: LifelOng learning on Incremental data via pre-trained language model gRowth Efficiently (**ICLR 2025**) [[paper](https://openreview.net/forum?id=F5PlYMC5ik)] |
|
- <a name="todo"></a> Active Learning for Continual Learning: Keeping the Past Alive in the Present (**ICLR 2025**) [[paper](https://openreview.net/forum?id=mnLmmtW7HO)] |
|
- <a name="todo"></a> TSVD: Bridging Theory and Practice in Continual Learning with Pre-trained Models (**ICLR 2025**) [[paper](https://openreview.net/forum?id=bqv7M0wc4x)][[code](https://github.com/liangzu/tsvd)] |
|
- <a name="todo"></a> On Large Language Model Continual Unlearning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=Essg9kb4yx)][[code](https://github.com/GCYZSL/O3-LLM-UNLEARNING)] |
|
- <a name="todo"></a> SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=5U1rlpX68A)][[code](https://github.com/WuYichen-97/SD-Lora-CL)] |
|
- <a name="todo"></a> Federated Class-Incremental Learning: A Hybrid Approach Using Latent Exemplars and Data-Free Techniques to Address Local and Global Forgetting (**ICLR 2025**) [[paper](https://openreview.net/forum?id=ydREOIttdC)] |
|
- <a name="todo"></a> ADAPT: Attentive Self-Distillation and Dual-Decoder Prediction Fusion for Continual Panoptic Segmentation (**ICLR 2025**) [[paper](https://openreview.net/forum?id=HF1UmIVv6a)][[code](https://github.com/Ze-Yang/ADAPT)] |
|
- <a name="todo"></a> Semantic Aware Representation Learning for Lifelong Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=WwwJfkGq0G)][[code](https://github.com/NeurAI-Lab/SARL.git)] |
|
- <a name="todo"></a> Spurious Forgetting in Continual Learning of Language Models (**ICLR 2025**) [[paper](https://openreview.net/forum?id=ScI7IlKGdI)][[code](https://github.com/zzz47zzz/spurious-forgetting)] |
|
- <a name="todo"></a> CLDyB: Towards Dynamic Benchmarking for Continual Learning with Pre-trained Models (**ICLR 2025**) [[paper](https://openreview.net/forum?id=RnxwxGXxex)][[code](https://github.com/szc12153/CLDyB)] |
|
- <a name="todo"></a> Theory on Mixture-of-Experts in Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=7XgKAabsPp)] |
|
- <a name="todo"></a> Boosting Multiple Views for pretrained-based Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=AZR4R3lw7y)] |
|
- <a name="todo"></a> STAR: Stability-Inducing Weight Perturbation for Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=6N5OM5Duuj)][[code](https://github.com/Gnomy17/STAR_CL)] |
|
- <a name="todo"></a> Optimal Protocols for Continual Learning via Statistical Physics and Control Theory (**ICLR 2025**) [[paper](https://openreview.net/forum?id=rhhQjGj09A)] |
|
- <a name="todo"></a> Convergence and Implicit Bias of Gradient Descent on Continual Linear Classification (**ICLR 2025**) [[paper](https://openreview.net/forum?id=DTqx3iqjkz)] |
|
- <a name="todo"></a> Unlocking the Power of Function Vectors for Characterizing and Mitigating Catastrophic Forgetting in Continual Instruction Tuning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=gc8QAQfXv6)][[code](https://github.com/GangweiJiang/FvForgetting)] |
|
- <a name="todo"></a> A Second-Order Perspective on Model Compositionality and Incremental Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=OZVTqoli2N)][[code](https://github.com/aimagelab/mammoth)]
|
- <a name="todo"></a> C-CLIP: Multimodal Continual Learning for Vision-Language Model (**ICLR 2025**) [[paper](https://openreview.net/forum?id=sb7qHFYwBc)][[code](https://github.com/SmallPigPeppa/C-CLIP)] |
|
- <a name="todo"></a> Prevalence of Negative Transfer in Continual Reinforcement Learning: Analyses and a Simple Baseline (**ICLR 2025**) [[paper](https://openreview.net/forum?id=KAIqwkB3dT)][[code](https://github.com/hongjoon0805/Reset-Distill.git)] |
|
- <a name="todo"></a> Adapt-$\infty$: Scalable Continual Multimodal Instruction Tuning via Dynamic Data Selection (**ICLR 2025**) [[paper](https://openreview.net/forum?id=EwFJaXVePU&noteId=BvVfLgvs4D)][[code](https://github.com/adymaharana/adapt-inf)]
|
- <a name="todo"></a> Learning Continually by Spectral Regularization (**ICLR 2025**) [[paper](https://openreview.net/forum?id=Hcb2cgPbMg)] |
|
- <a name="todo"></a> Self-Normalized Resets for Plasticity in Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=G82uQztzxl)][[code](https://github.com/ajozefiak/SelfNormalizedResets)] |
|
- <a name="todo"></a> PseDet: Revisiting the Power of Pseudo Label in Incremental Object Detection (**ICLR 2025**) [[paper](https://openreview.net/forum?id=Iu8FVcUmVp)][[code](https://github.com/wang-qiuchen/PseDet)] |
|
- <a name="todo"></a> Meta-Continual Learning of Neural Fields (**ICLR 2025**) [[paper](https://openreview.net/forum?id=OCpxDSn0G4)][[code](https://github.com/seungyoon-woo/MCL-NF)] |
|
- <a name="todo"></a> Vision and Language Synergy for Rehearsal Free Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=9aZ2ixiYGd)][[code](https://github.com/anwarmaxsum/LEAPGEN)] |
|
- <a name="todo"></a> Advancing Prompt-based Methods for Replay-Independent General Continual Learning (**ICLR 2025**) [[paper](https://openreview.net/forum?id=V6uxd8MEqw)][[code](https://github.com/kangzhiq/MISA)] |
|
|
|
- <a name="todo"></a> LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning (**CVPR 2025**) [[paper](https://arxiv.org/abs/2503.18985)][[code](https://github.com/scarlet0703/LoRA-Sub-DRS)]

- <a name="todo"></a> KAC: Kolmogorov-Arnold Classifier for Continual Learning (**CVPR 2025**) [[paper](https://arxiv.org/pdf/2503.21076)][[code](https://github.com/Ethanhuhuhu/KAC)]
|
- <a name="todo"></a> HiDe-PET: Continual Learning via Hierarchical Decomposition of Parameter-Efficient Tuning (**TPAMI 2025**) [[code](https://github.com/thu-ml/HiDe-PET)]
|
|
|
- <a name="todo"></a> Adaptive Score Alignment Learning for Continual Perceptual Quality Assessment of 360-Degree Videos in Virtual Reality (**VR-TVCG 2025**) [[paper](https://arxiv.org/abs/2502.19644)][[code](https://github.com/ZhouKanglei/ASAL_CVQA)] |
|
|
|
### 2024 |
|
|
|
- <a name="todo"></a> Mask and Compress: Efficient Skeleton-based Action Recognition in Continual Learning (**ICPR 2024**) [[paper](https://arxiv.org/pdf/2407.01397)][[code](https://github.com/Sperimental3/CHARON)]

- <a name="todo"></a> CLIP with Generative Latent Replay: a Strong Baseline for Incremental Learning (**BMVC 2024**) [[paper](https://arxiv.org/abs/2407.15793)][[code](https://github.com/aimagelab/mammoth)]

- <a name="todo"></a> Learning a Low-Rank Feature Representation: Achieving Better Trade-Off between Stability and Plasticity in Continual Learning (**ICASSP 2024**) [[paper](https://arxiv.org/abs/2312.08740)][[code](https://github.com/Dacaidi/LRFR)]

- <a name="todo"></a> Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning (**AAAI 2024**) [[paper](https://arxiv.org/abs/2312.12722)]
|
- <a name="todo"></a> GACL: Exemplar-Free Generalized Analytic Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/7d7f8049c3a8d96f5824e696ca7a41551b337c51.pdf)][[code](https://github.com/CHEN-YIZHU/GACL)] |
|
- <a name="todo"></a> Continual Audio-Visual Sound Separation (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/fd25d9e8abc814ee3c5d1d374c127ffdda6c023a.pdf)][[code](https://github.com/weiguoPian/ContAV-Sep_NeurIPS2024)] |
|
- <a name="todo"></a> F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/e226708f2029c076b49ce0f8780b0b25c1a15cb8.pdf)][[code](https://github.com/liuyuchen-cz/F-OAL)] |
|
- <a name="todo"></a> Adaptive Visual Scene Understanding: Incremental Scene Graph Generation (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/8e577312ef669ee933f93b7513fcff4d94b2b848.pdf)][[code](https://github.com/ZhangLab-DeepNeuroCogLab/CSEGG)] |
|
- <a name="todo"></a> Task Confusion and Catastrophic Forgetting in Class-Incremental Learning: A Mathematical Framework for Discriminative and Generative Modelings (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/57bd4f3d82cf6a55dece2ea557e36fce58d61778.pdf)] |
|
- <a name="todo"></a> Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/f234665a6b29bf4968da01a5adc0303e595efb5c.pdf)][[code](https://github.com/wxr99/Forgetting-Ignorance-or-Myopia-Revisiting-Key-Challenges-in-Online-Continual-Learning)] |
|
- <a name="todo"></a> Continual Learning in the Frequency Domain (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/b2bc1d3b1cda2ea1944e24b02553a19d2c513437.pdf)][[code](https://github.com/EMLS-ICTCAS/CLFD.git)] |
|
- <a name="todo"></a> Saliency-driven Experience Replay for Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/f3e3bc516755d6efa43fb0c62dea0d705efacfe7.pdf)][[code](https://github.com/perceivelab/SER)] |
|
- <a name="todo"></a> PCoTTA: Continual Test-Time Adaptation for Multi-Task Point Cloud Understanding (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/10d19bfe9a28e15074dbb79450ad0e5bc9dde6e4.pdf)][[code](https://github.com/Jinec98/PCoTTA)] |
|
- <a name="todo"></a> Advancing Cross-domain Discriminability in Continual Learning of Vision-Language Models (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/f13992ea7e554b8fcfa2b120be55eeb89c25643f.pdf)][[code](https://github.com/linghan1997/Regression-based-Analytic-Incremental-Learning)] |
|
- <a name="todo"></a> Continual Learning with Global Alignment (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/0b2a82c75f549856c3b133f08c9abe7349c018d7.pdf)] |
|
- <a name="todo"></a> Disentangling and mitigating the impact of task similarity for continual learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/a615623ba5e9b57a77694d9816984ebb20ebf11f.pdf)] |
|
- <a name="todo"></a> Replay-and-Forget-Free Graph Class-Incremental Learning: A Task Profiling and Prompting Approach (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/db512259110b000f82fd2052e9432dd693af4137.pdf)][[code](https://github.com/mala-lab/TPP)] |
|
- <a name="todo"></a> Make Continual Learning Stronger via C-Flat (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/be179393fb5b55da27facef791300b7cea7f22b0.pdf)][[code](https://github.com/WanNaa/C-Flat)] |
|
- <a name="todo"></a> ViLCo-Bench: VIdeo Language COntinual learning Benchmark (**NeurIPS 2024**) [[paper](https://arxiv.org/pdf/2406.13123)][[code](https://github.com/cruiseresearchgroup/ViLCo)] |
|
- <a name="todo"></a> Not Just Object, But State: Compositional Incremental Learning without Forgetting (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/e422a66571f0dfafe75b5c8ba1f75cb365fda448.pdf)][[code](https://github.com/Yanyi-Zhang/CompILer)] |
|
- <a name="todo"></a> Task-recency bias strikes back: Adapting covariances in Exemplar-Free Class Incremental Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/3b3eb9b8951efe8f03c3881dd21dd13da86a9383.pdf)][[code](https://github.com/grypesc/AdaGauss)] |
|
- <a name="todo"></a> Random Representations Outperform Online Continually Learned Representations (**NeurIPS 2024**) [[paper](https://arxiv.org/abs/2402.08823)][[code](https://github.com/drimpossible/RanDumb)] |
|
- <a name="todo"></a> Mixture of Experts Meets Prompt-Based Continual Learning (**NeurIPS 2024**) [[paper](https://arxiv.org/abs/2405.14124)][[code](https://github.com/Minhchuyentoancbn/MoE_PromptCL)] |
|
- <a name="todo"></a> SAFE: Slow and Fast Parameter-Efficient Tuning for Continual Learning with Pre-Trained Models (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/0d0cbd6d4b593d16bd3e4fb3e1b7c2e737e4a5c5.pdf)][[code](https://github.com/MIFA-Lab/SAFE)] |
|
- <a name="todo"></a> Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/2d2fc4beb4ba2418dd2a4c680959b5708e85b13e.pdf)][[code](https://github.com/ybseo-ac/TAALM)] |
|
- <a name="todo"></a> Persistence Homology Distillation for Semi-supervised Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/5f6f27efbe92894d0e9f59692d4fee8538becfa1.pdf)][[code](https://github.com/fanyan0411/PsHD)] |
|
- <a name="todo"></a> Continual learning with the neural tangent ensemble (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/7ca99de5c6a3ad1c6a158db1bba6a3eb0841e7bc.pdf)] |
|
- <a name="todo"></a> Vector Quantization Prompting for Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/fe56049dfd050804f643de97820660c0ab7ace62.pdf)][[code](https://github.com/jiaolifengmi/VQ-Prompt)] |
|
- <a name="todo"></a> CLAP4CLIP: Continual Learning with Probabilistic Finetuning for Vision-Language Models (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/649fc2bc1d6ab7ff1bb07d921e2180c36c2ccf3b.pdf)][[code](https://github.com/srvCodes/clap4clip)] |
|
- <a name="todo"></a> An Efficient Memory Module for Graph Few-Shot Class-Incremental Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/31dab3747597e8e344a3f1a522e12afb94978737.pdf)][[code](https://github.com/Arvin0313/Mecoin-GFSCIL.git)] |
|
- <a name="todo"></a> Label Delay in Online Continual Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/170ed172ba6d096fafba4f36e54f20fec4c17dcf.pdf)][[code](https://github.com/botcs/label-delay-exp)] |
|
- <a name="todo"></a> Visual Prompt Tuning in Null Space for Continual Learning (**NeurIPS 2024**) [[paper](https://arxiv.org/abs/2406.05658)][[code](https://github.com/zugexiaodui/VPTinNSforCL)] |
|
- <a name="todo"></a> Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/4f1691d6d916f1f457cca7c2a9184a4911758a80.pdf)] |
|
- <a name="todo"></a> A Topology-aware Graph Coarsening Framework for Continual Graph Learning (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/406408d7839e9d5c643715d8429ea93609e08c84.pdf)][[code](https://github.com/hanxiaoxue114/TACO)] |
|
- <a name="todo"></a> What Matters in Graph Class Incremental Learning? An Information Preservation Perspective (**NeurIPS 2024**) [[paper](https://openreview.net/pdf/f16531b97e4d2d6593f7273b8e6bb7292070ff71.pdf)][[code](https://github.com/Jillian555/GSIP)] |
|
- <a name="todo"></a> Happy: A Debiased Learning Framework for Continual Generalized Category Discovery (**NeurIPS 2024**) [[paper](https://arxiv.org/abs/2410.06535)][[code](https://github.com/mashijie1028/Happy-CGCD)] |
|
|
|
- <a name="todo"></a> MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.04398)][[code](https://github.com/ZhouKanglei/MAGR_CAQA)] |
|
- <a name="todo"></a> Early Preparation Pays Off: New Classifier Pre-tuning for Class Incremental Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.14142)][[code](https://github.com/zhengyuan-xie/ECCV24_NeST)]

- <a name="todo"></a> Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.14143)][[code](https://github.com/linlany/RAPF)]

- <a name="todo"></a> Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.11499)][[code](https://github.com/iSEE-Laboratory/BPF)]

- <a name="todo"></a> Confidence Self-Calibration for Multi-Label Class-Incremental Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.12559v2)]

- <a name="todo"></a> Rethinking Few-shot Class-incremental Learning: Learning from Yourself (**ECCV 2024**) [[paper](https://arxiv.org/pdf/2407.07468)][[code](https://github.com/iSEE-Laboratory/Revisting_FSCIL)]

- <a name="todo"></a> Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2409.10956)][[code](https://github.com/KHU-AGI/VIL)]

- <a name="todo"></a> Scene Coordinate Reconstruction: Posing of Image Collections via Incremental Learning of a Relocalizer (**ECCV 2024**) [[paper](https://arxiv.org/abs/2404.14351)][[code](https://github.com/nianticlabs/acezero)]

- <a name="todo"></a> Mitigating Background Shift in Class-Incremental Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.11859)][[code](https://github.com/RoadoneP/ECCV2024_MBS)]

- <a name="todo"></a> Personalized Federated Domain-Incremental Learning based on Adaptive Knowledge Matching (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.05005)]

- <a name="todo"></a> Learning from the Web: Language Drives Weakly-Supervised Incremental Learning for Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.13363)][[code](https://github.com/dota-109/Web-WILSS)]

- <a name="todo"></a> Tendency-driven Mutual Exclusivity for Weakly Supervised Incremental Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2404.11981)]

- <a name="todo"></a> Cs2K: Class-specific and Class-shared Knowledge Guidance for Incremental Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.09047)]

- <a name="todo"></a> DiffClass: Diffusion-Based Class Incremental Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.05016)][[code](https://github.com/cr8br0ze/DiffClass-Code)]

- <a name="todo"></a> PILoRA: Prototype Guided Incremental LoRA for Federated Class-Incremental Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2401.02094)][[code](https://github.com/Ghy0501/PILoRA)]

- <a name="todo"></a> Few-shot Class Incremental Learning with Attention-Aware Self-Adaptive Prompt (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.09857)][[code](https://github.com/DawnLIU35/FSCIL-ASP)]

- <a name="todo"></a> iNeMo: Incremental Neural Mesh Models for Robust Class-Incremental Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.09271)][[code](https://github.com/Fischer-Tom/iNeMo)]

- <a name="todo"></a> Background Adaptation with Residual Modeling for Exemplar-Free Class-Incremental Semantic Segmentation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.09838)][[code](https://github.com/ANDYZAQ/BARM)]

- <a name="todo"></a> Continual Learning for Remote Physiological Measurement: Minimize Forgetting and Simplify Inference (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.13974)][[code](https://github.com/MayYoY/rPPGDIL)]

- <a name="todo"></a> Semantic Residual Prompts for Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.06870)][[code](https://github.com/aimagelab/mammoth)]

- <a name="todo"></a> Category Adaptation Meets Projected Distillation in Generalized Continual Category Discovery (**ECCV 2024**) [[paper](https://arxiv.org/abs/2308.12112)][[code](https://github.com/grypesc/CAMP)]

- <a name="todo"></a> CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.12188)][[code](https://github.com/ErumMushtaq/CroMo-Mixup)]

- <a name="todo"></a> Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.10281)]

- <a name="todo"></a> Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.05342)][[code](https://github.com/lloongx/DIKI)]

- <a name="todo"></a> Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.09367)][[code](https://github.com/z1358/OBAO)]

- <a name="todo"></a> Revisiting Supervision for Continual Representation Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2311.13321)][[code](https://github.com/danielm1405/sl-vs-ssl-cl)]

- <a name="todo"></a> Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.09296)][[code](https://github.com/chu0802/SnD)]

- <a name="todo"></a> PromptFusion: Decoupling Stability and Plasticity for Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2303.07223)][[code](https://github.com/HaoranChen/PromptFusion)]

- <a name="todo"></a> One-stage Prompt-based Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2402.16189)]

- <a name="todo"></a> Preventing Catastrophic Forgetting through Memory Networks in Continuous Detection (**ECCV 2024**) [[paper](https://arxiv.org/abs/2403.14797)][[code](https://github.com/GauravBh1010tt/MD-DETR)]

- <a name="todo"></a> Exemplar-free Continual Representation Learning via Learnable Drift Compensation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.08536)][[code](https://github.com/alviur/ldc)]

- <a name="todo"></a> Open-World Dynamic Prompt and Continual Visual Representation Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2409.05312)]

- <a name="todo"></a> Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2409.01128)][[code](https://github.com/jinglin-liang/DDDR)]

- <a name="todo"></a> PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.19001)][[code](https://visual-ai.github.io/promptccd)]

- <a name="todo"></a> MagMax: Leveraging Model Merging for Seamless Continual Learning (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.06322)][[code](https://github.com/danielm1405/magmax)]

- <a name="todo"></a> Anytime Continual Learning for Open Vocabulary Classification (**ECCV 2024**) [[paper](https://arxiv.org/abs/2409.08518v1)][[code](https://github.com/jessemelpolio/AnytimeCL)]

- <a name="todo"></a> Weighted Ensemble Models Are Strong Continual Learners (**ECCV 2024**) [[paper](https://arxiv.org/abs/2312.08977)][[code](https://github.com/IemProg/CoFiMA)]

- <a name="todo"></a> CLEO: Continual Learning of Evolving Ontologies (**ECCV 2024**) [[paper](https://arxiv.org/abs/2407.08411)][[code](https://github.com/longrongyang/RCS-Prompt)]

- <a name="todo"></a> UNIKD: UNcertainty-Filtered Incremental Knowledge Distillation for Neural Implicit Representation (**ECCV 2024**) [[paper](https://arxiv.org/abs/2212.10950)][[code](https://dreamguo.github.io/projects/UNIKD)]

- <a name="todo"></a> Canonical Shape Projection is All You Need for 3D Few-shot Class Incremental Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/05717.pdf)][[code](https://github.com/alichr/C3PR)]

- <a name="todo"></a> STSP: Spatial-Temporal Subspace Projection for Video Class-incremental Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/04106.pdf)]

- <a name="todo"></a> Non-Exemplar Domain Incremental Learning via Cross-Domain Concept Integration (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06534.pdf)]

- <a name="todo"></a> CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06497.pdf)][[code](https://github.com/JungHunOh/CLOSER_ECCV2024)]

- <a name="todo"></a> On the Approximation Risk of Few-Shot Class-Incremental Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06766.pdf)][[code](https://github.com/xwangrs/Approximation_FSCIL-ECCV2024.git)]

- <a name="todo"></a> Adapt without Forgetting: Distill Proximity from Dual Teachers in Vision-Language Models (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07052.pdf)][[code](https://github.com/myz-ah/AwoForget)]

- <a name="todo"></a> Information Bottleneck Based Data Correction in Continual Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/11862.pdf)]

- <a name="todo"></a> Continual Learning and Unknown Object Discovery in 3D Scenes via Self-Distillation (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/09349.pdf)][[code](https://github.com/aminebdj/OpenDistill3D)]

- <a name="todo"></a> Pick-a-back: Selective Device-to-Device Knowledge Transfer in Federated Continual Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07822.pdf)][[code](https://github.com/jinyi-yoon/Pick-a-back.git)]

- <a name="todo"></a> Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/04599.pdf)]

- <a name="todo"></a> CLIFF: Continual Latent Diffusion for Open-Vocabulary Object Detection (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/07221.pdf)][[code](https://github.com/CUHK-AIM-Group/CLIFF)]

- <a name="todo"></a> RCS-Prompt: Learning Prompt to Rearrange Class Space for Prompt-based Continual Learning (**ECCV 2024**) [[paper](https://www.ecva.net/papers/eccv_2024/papers_ECCV/papers/06307.pdf)][[code](https://github.com/longrongyang/RCS-Prompt)]
|
|
|
- <a name="todo"></a> Online Continuous Generalized Category Discovery (**ECCV 2024**) [[paper](https://arxiv.org/abs/2408.13492)][[code](https://github.com/KHU-AGI/OCGCD)]

- <a name="todo"></a> Class-incremental Learning for Time Series: Benchmark and Evaluation (**KDD 2024**) [[paper](https://dl.acm.org/doi/abs/10.1145/3637528.3671581)][[code](https://github.com/zqiao11/TSCIL)]
|
- <a name="todo"></a> Harnessing Neural Unit Dynamics for Effective and Scalable Class-Incremental Learning (**ICML 2024**) [[paper](https://arxiv.org/abs/2406.02428)]

- <a name="todo"></a> Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning (**ICML 2024**) [[paper](https://openreview.net/pdf?id=aksdU1KOpT)][[code](https://github.com/bwnzheng/MRFA_ICML2024)]

- <a name="todo"></a> Regularizing with Pseudo-Negatives for Continual Self-Supervised Learning (**ICML 2024**) [[paper](https://arxiv.org/abs/2306.05101)]

- <a name="todo"></a> Learning to Continually Learn with the Bayesian Principle (**ICML 2024**) [[paper](https://arxiv.org/abs/2405.18758)][[code](https://github.com/soochan-lee/SB-MCL)]

- <a name="todo"></a> Rethinking Momentum Knowledge Distillation in Online Continual Learning (**ICML 2024**) [[paper](https://arxiv.org/abs/2309.02870)][[code](https://github.com/Nicolas1203/mkd_ocl)]

- <a name="todo"></a> Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning (**ICML 2024**) [[paper](https://arxiv.org/abs/2402.09542)]

- <a name="todo"></a> Bayesian Adaptation of Network Depth and Width for Continual Learning (**ICML 2024**) [[paper](https://openreview.net/pdf?id=c9HddKGiYk)]

- <a name="todo"></a> STELLA: Continual Audio-Video Pre-training with SpatioTemporal Localized Alignment (**ICML 2024**) [[paper](https://arxiv.org/pdf/2310.08204)][[code](https://github.com/G-JWLEE/STELLA_code)]

- <a name="todo"></a> On the Diminishing Returns of Width for Continual Learning (**ICML 2024**) [[paper](https://arxiv.org/abs/2403.06398)][[code](https://github.com/vihan-lakshman/diminishing-returns-wide-continual-learning)]

- <a name="todo"></a> Compositional Few-Shot Class-Incremental Learning (**ICML 2024**) [[paper](https://openreview.net/attachment?id=t4908PyZxs&name=pdf)][[code](https://github.com/Zoilsen/Comp-FSCIL)]

- <a name="todo"></a> Rapid Learning without Catastrophic Forgetting in the Morris Water Maze (**ICML 2024**) [[paper](https://openreview.net/attachment?id=i9C4Kwm56G&name=pdf)][[code](https://github.com/raymondw2/seq-wm)]

- <a name="todo"></a> Understanding Forgetting in Continual Learning with Linear Regression (**ICML 2024**) [[paper](https://openreview.net/attachment?id=89kZWloYQx&name=pdf)]

- <a name="todo"></a> Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization (**ICML 2024**) [[paper](https://openreview.net/attachment?id=olbTrkWo1D&name=pdf)]

- <a name="todo"></a> Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning (**ICML 2024**) [[paper](https://openreview.net/attachment?id=tABvuya05B&name=pdf)]

- <a name="todo"></a> Provable Contrastive Continual Learning (**ICML 2024**) [[paper](https://openreview.net/attachment?id=V3ya8RlbrW&name=pdf)]

- <a name="todo"></a> Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method (**ICML 2024**) [[paper](https://openreview.net/attachment?id=1AAlMSo7Js&name=pdf)][[code](https://github.com/NeurAI-Lab/DARE)]

- <a name="todo"></a> An Effective Dynamic Gradient Calibration Method for Continual Learning (**ICML 2024**) [[paper](https://openreview.net/attachment?id=q14AbM4kdv&name=pdf)]

- <a name="todo"></a> Federated Continual Learning via Prompt-based Dual Knowledge Transfer (**ICML 2024**) [[paper](https://openreview.net/attachment?id=Kqa5JakTjB&name=pdf)][[code](https://github.com/piaohongming/Powder)]

- <a name="todo"></a> COPAL: Continual Pruning in Large Language Generative Models (**ICML 2024**) [[paper](https://openreview.net/attachment?id=Lt8Lk7IQ5b&name=pdf)]

- <a name="todo"></a> One Size Fits All for Semantic Shifts: Adaptive Prompt Tuning for Continual Learning (**ICML 2024**) [[paper](https://openreview.net/attachment?id=WUi1AqhKn5&name=pdf)]
|
|
|
- <a name="todo"></a> Hierarchical Augmentation and Distillation for Class Incremental Audio-Visual Video Recognition (**TPAMI 2024**) [[paper](https://ieeexplore.ieee.org/document/10497880)]

- <a name="todo"></a> AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/html/Liang_AIDE_An_Automatic_Data_Engine_for_Object_Detection_in_Autonomous_CVPR_2024_paper.html)]

- <a name="todo"></a> DELTA: Decoupling Long-Tailed Online Continual Learning (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024W/CLVISION/papers/Raghavan_DELTA_Decoupling_Long-Tailed_Online_Continual_Learning_CVPRW_2024_paper.pdf)][[code](https://gitlab.com/viper-purdue/delta)]

- <a name="todo"></a> Continual Segmentation with Disentangled Objectness Learning and Class Recognition (**CVPR 2024**) [[paper](https://arxiv.org/abs/2403.03477)][[code](https://github.com/jordangong/CoMasTRe)]

- <a name="todo"></a> Interactive Continual Learning: Fast and Slow Thinking (**CVPR 2024**) [[paper](https://arxiv.org/abs/2403.02628)][[code](http://github.com/ICL)]

- <a name="todo"></a> InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning (**CVPR 2024**) [[paper](https://arxiv.org/abs/2404.00228)][[code](https://github.com/liangyanshuo/InfLoRA)]

- <a name="todo"></a> Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer (**CVPR 2024**) [[paper](https://arxiv.org/abs/2403.19979)][[code](https://github.com/HAIV-Lab/SSIAT)]

- <a name="todo"></a> Traceable Federated Continual Learning (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Traceable_Federated_Continual_Learning_CVPR_2024_paper.pdf)][[code](https://github.com/P0werWeirdo/TagFCL)]

- <a name="todo"></a> Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Defense_without_Forgetting_Continual_Adversarial_Defense_with_Anisotropic__Isotropic_CVPR_2024_paper.pdf)]

- <a name="todo"></a> Learning Continual Compatible Representation for Re-indexing Free Lifelong Person Re-identification (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Cui_Learning_Continual_Compatible_Representation_for_Re-indexing_Free_Lifelong_Person_Re-identification_CVPR_2024_paper.pdf)][[code](https://github.com/PKU-ICST-MIPL/C2R)]

- <a name="todo"></a> Towards Backward-Compatible Continual Learning of Image Compression (**CVPR 2024**) [[paper](https://arxiv.org/abs/2402.18862)][[code](https://gitlab.com/viper-purdue/continual-compression)]

- <a name="todo"></a> Class Incremental Learning with Multi-Teacher Distillation (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Wen_Class_Incremental_Learning_with_Multi-Teacher_Distillation_CVPR_2024_paper.pdf)][[code](https://github.com/HaitaoWen/CLearning)]

- <a name="todo"></a> Towards Efficient Replay in Federated Incremental Learning (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Towards_Efficient_Replay_in_Federated_Incremental_Learning_CVPR_2024_paper.pdf)]

- <a name="todo"></a> Dual-consistency Model Inversion for Non-exemplar Class Incremental Learning (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Qiu_Dual-Consistency_Model_Inversion_for_Non-Exemplar_Class_Incremental_Learning_CVPR_2024_paper.pdf)]

- <a name="todo"></a> Dual-Enhanced Coreset Selection with Class-wise Collaboration for Online Blurry Class Incremental Learning (**CVPR 2024**) [[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Luo_Dual-Enhanced_Coreset_Selection_with_Class-wise_Collaboration_for_Online_Blurry_Class_CVPR_2024_paper.pdf)]
|
- <a name="todo"></a> Coherent Temporal Synthesis for Incremental Action Segmentation (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.06102)] |
|
- <a name="todo"></a> Text-Enhanced Data-free Approach for Federated Class-Incremental Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.14101)][[code](https://github.com/tmtuan1307/lander)] |
|
- <a name="todo"></a> NICE: Neurogenesis Inspired Contextual Encoding for Replay-free Class Incremental Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Gurbuz_NICE_Neurogenesis_Inspired_Contextual_Encoding_for_Replay-free_Class_Incremental_Learning_CVPR_2024_paper.pdf)][[code](https://github.com/BurakGurbuz97/NICE)] |
|
- <a name="todo"></a> Long-Tail Class Incremental Learning via Independent Sub-prototype Construction (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Long-Tail_Class_Incremental_Learning_via_Independent_Sub-prototype_Construction_CVPR_2024_paper.pdf)] |
|
- <a name="todo"></a> FCS: Feature Calibration and Separation for Non-Exemplar Class Incremental Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_FCS_Feature_Calibration_and_Separation_for_Non-Exemplar_Class_Incremental_Learning_CVPR_2024_paper.pdf)][[code](https://github.com/zhoujiahuan1991/CVPR2024-FCS)] |
|
- <a name="todo"></a> Incremental Nuclei Segmentation from Histopathological Images via Future-class Awareness and Compatibility-inspired Distillation (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Wang_Incremental_Nuclei_Segmentation_from_Histopathological_Images_via_Future-class_Awareness_and_CVPR_2024_paper.pdf)][[code](https://github.com/why19991/InSeg)] |
|
- <a name="todo"></a> Gradient Reweighting: Towards Imbalanced Class-Incremental Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/He_Gradient_Reweighting_Towards_Imbalanced_Class-Incremental_Learning_CVPR_2024_paper.pdf)][[code](https://github.com/JiangpengHe/imbalanced_cil)]
|
- <a name="todo"></a> OrCo: Towards Better Generalization via Orthogonality and Contrast for Few-Shot Class-Incremental Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ahmed_OrCo_Towards_Better_Generalization_via_Orthogonality_and_Contrast_for_Few-Shot_CVPR_2024_paper.pdf)][[code](https://github.com/noorahmedds/OrCo)]
|
- <a name="todo"></a> SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Kim_SDDGR_Stable_Diffusion-based_Deep_Generative_Replay_for_Class_Incremental_Object_CVPR_2024_paper.pdf)] |
|
- <a name="todo"></a> Generative Multi-modal Models are Good Class Incremental Learners (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.18383)][[code](https://github.com/DoubleClass/GMM)] |
|
- <a name="todo"></a> Task-Adaptive Saliency Guidance for Exemplar-free Class Incremental Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2212.08251)][[code](https://github.com/scok30/tass)] |
|
- <a name="todo"></a> DYSON: Dynamic Feature Space Self-Organization for Online Task-Free Class Incremental Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/He_DYSON_Dynamic_Feature_Space_Self-Organization_for_Online_Task-Free_Class_Incremental_CVPR_2024_paper.pdf)][[code](https://github.com/isCDX2/DYSON)] |
|
- <a name="todo"></a> Enhancing Visual Continual Learning with Language-Guided Supervision (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.16124)] |
|
- <a name="todo"></a> Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.11549)][[code](https://github.com/JiazuoYu/MoE-Adapters4CL)] |
|
- <a name="todo"></a> Adaptive VIO: Deep Visual-Inertial Odometry with Online Continual Learning (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Pan_Adaptive_VIO_Deep_Visual-Inertial_Odometry_with_Online_Continual_Learning_CVPR_2024_paper.pdf)] |
|
- <a name="todo"></a> Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2311.17597)][[code](https://github.com/yeerwen/MedCoSS)] |
|
- <a name="todo"></a> ECLIPSE: Efficient Continual Learning in Panoptic Segmentation with Visual Prompt Tuning (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.20126)][[code](https://github.com/clovaai/ECLIPSE)] |
|
- <a name="todo"></a> Online Task-Free Continual Generative and Discriminative Learning via Dynamic Cluster Memory (**CVPR2024**)[[paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ye_Online_Task-Free_Continual_Generative_and_Discriminative_Learning_via_Dynamic_Cluster_CVPR_2024_paper.pdf)][[code](https://github.com/dtuzi123/DCM)] |
|
- <a name="todo"></a> Learning from One Continuous Video Stream (**CVPR2024**)[[paper](https://arxiv.org/abs/2312.00598)] |
|
- <a name="todo"></a> Improving Plasticity in Online Continual Learning via Collaborative Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2312.00600)][[code](https://github.com/maorong-wang/CCL-DC)] |
|
- <a name="todo"></a> Learning Equi-angular Representations for Online Continual Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2404.01628)][[code](https://github.com/yonseivnl/earlt)] |
|
- <a name="todo"></a> BrainWash: A Poisoning Attack to Forget in Continual Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2311.11995)] |
|
- <a name="todo"></a> Consistent Prompting for Rehearsal-Free Continual Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.08568)][[code](https://github.com/Zhanxin-Gao/CPrompt)] |
|
- <a name="todo"></a> Resurrecting Old Classes with New Data for Exemplar-Free Continual Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2405.19074)][[code](https://github.com/dipamgoswami/ADC)] |
|
- <a name="todo"></a> Convolutional Prompting meets Language Models for Continual Learning (**CVPR2024**)[[paper](https://arxiv.org/pdf/2403.20317)][[code](https://github.com/CVIR/ConvPrompt)] |
|
- <a name="todo"></a> Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning (**CVPR2024**)[[paper](https://arxiv.org/abs/2403.12030)][[code](https://github.com/sun-hailong/CVPR24-Ease)] |
|
- <a name="todo"></a> Pre-trained Vision and Language Transformers Are Few-Shot Incremental Learners (**CVPR2024**)[[paper](https://arxiv.org/abs/2404.02117)][[code](https://github.com/KHU-AGI/PriViLege)] |
|
- <a name="todo"></a> Orchestrate Latent Expertise: Advancing Online Continual Learning with Multi-Level Supervision and Reverse Self-Distillation (**CVPR2024**)[[paper](https://arxiv.org/abs/2404.00417)][[code](https://github.com/AnAppleCore/MOSE)] |
|
|
|
- <a name="todo"></a> Elastic Feature Consolidation For Cold Start Exemplar-Free Incremental Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=7D9X2cFnt1&name=pdf)][[code](https://github.com/simomagi/elastic_feature_consolidation)] |
|
- <a name="todo"></a> Function-space Parameterization of Neural Networks for Sequential Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=2dhxxIKhqz&name=pdf)] |
|
- <a name="todo"></a> Progressive Fourier Neural Representation for Sequential Video Compilation (**ICLR2024**)[[paper](https://openreview.net/attachment?id=rGFrRMBbOq&name=pdf)] |
|
- <a name="todo"></a> Kalman Filter Online Classification from non-Stationary Data (**ICLR2024**)[[paper](https://openreview.net/attachment?id=ZzmKEpze8e&name=pdf)] |
|
- <a name="todo"></a> Continual Momentum Filtering on Parameter Space for Online Test-time Adaptation (**ICLR2024**)[[paper](https://openreview.net/attachment?id=BllUWdpIOA&name=pdf)] |
|
- <a name="todo"></a> TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models (**ICLR2024**)[[paper](https://openreview.net/attachment?id=RRayv1ZPN3&name=pdf)] |
|
- <a name="todo"></a> Class Incremental Learning via Likelihood Ratio Based Task Prediction (**ICLR2024**)[[paper](https://openreview.net/attachment?id=8QfK9Dq4q0&name=pdf)][[code](https://github.com/linhaowei1/TPL)] |
|
- <a name="todo"></a> The Joint Effect of Task Similarity and Overparameterization on Catastrophic Forgetting - An Analytical Model (**ICLR2024**)[[paper](https://openreview.net/attachment?id=u3dHl287oB&name=pdf)] |
|
- <a name="todo"></a> Prediction Error-based Classification for Class-Incremental Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=DJZDgMOLXQ&name=pdf)][[code](https://github.com/michalzajac-ml/pec)] |
|
- <a name="todo"></a> Adapting Large Language Models via Reading Comprehension (**ICLR2024**)[[paper](https://openreview.net/attachment?id=y886UXPEZ0&name=pdf)][[code](https://github.com/microsoft/LMOps/tree/main/adaptllm)] |
|
- <a name="todo"></a> Accurate Forgetting for Heterogeneous Federated Continual Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=ShQrnAsbPI&name=pdf)] |
|
- <a name="todo"></a> Fixed Non-negative Orthogonal Classifier: Inducing Zero-mean Neural Collapse with Feature Dimension Separation (**ICLR2024**)[[paper](https://openreview.net/attachment?id=F4bmOrmUwc&name=pdf)] |
|
- <a name="todo"></a> A Probabilistic Framework for Modular Continual Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=MVe2dnWPCu&name=pdf)] |
|
- <a name="todo"></a> A Unified and General Framework for Continual Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=BE5aK0ETbp&name=pdf)] |
|
- <a name="todo"></a> Continual Learning on a Diet: Learning from Sparsely Labeled Streams Under Constrained Computation (**ICLR2024**)[[paper](https://openreview.net/attachment?id=Xvfz8NHmCj&name=pdf)] |
|
- <a name="todo"></a> CPPO: Continual Learning for Reinforcement Learning with Human Feedback (**ICLR2024**)[[paper](https://openreview.net/attachment?id=86zAUE80pP&name=pdf)] |
|
- <a name="todo"></a> Online Continual Learning for Interactive Instruction Following Agents (**ICLR2024**)[[paper](https://openreview.net/attachment?id=7M0EzjugaN&name=pdf)][[code](https://github.com/snumprlab/cl-alfred)] |
|
- <a name="todo"></a> Scalable Language Model with Generalized Continual Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=mz8owj4DXu&name=pdf)] |
|
- <a name="todo"></a> ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation (**ICLR2024**)[[paper](https://openreview.net/attachment?id=sJ88Wg5Bp5&name=pdf)] |
|
- <a name="todo"></a> Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks (**ICLR2024**)[[paper](https://openreview.net/attachment?id=MeB86edZ1P&name=pdf)][[code](https://github.com/pkuxmq/HLOP-SNN)] |
|
- <a name="todo"></a> TiC-CLIP: Continual Training of CLIP Models (**ICLR2024**)[[paper](https://openreview.net/attachment?id=TLADT8Wrhn&name=pdf)] |
|
- <a name="todo"></a> Continual Learning in the Presence of Spurious Correlations: Analyses and a Simple Baseline (**ICLR2024**)[[paper](https://openreview.net/attachment?id=3Y7r6xueJJ&name=pdf)] |
|
- <a name="todo"></a> Addressing Catastrophic Forgetting and Loss of Plasticity in Neural Networks (**ICLR2024**)[[paper](https://openreview.net/attachment?id=sKPzAXoylB&name=pdf)] |
|
- <a name="todo"></a> Locality Sensitive Sparse Encoding for Learning World Models Online (**ICLR2024**)[[paper](https://openreview.net/attachment?id=i8PjQT3Uig&name=pdf)] |
|
- <a name="todo"></a> Dissecting learning and forgetting in language model finetuning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=tmsqb6WpLz&name=pdf)] |
|
- <a name="todo"></a> Prompt Gradient Projection for Continual Learning (**ICLR2024**)[[paper](https://openreview.net/attachment?id=EH2O3h7sBI&name=pdf)][[code](https://github.com/JingyangQiao/prompt-gradient-projection)] |
|
- <a name="todo"></a> Latent Trajectory Learning for Limited Timestamps under Distribution Shift over Time (**ICLR2024**)[[paper](https://openreview.net/attachment?id=bTMMNT7IdW&name=pdf)] |
|
- <a name="todo"></a> Divide and not forget: Ensemble of selectively trained experts in Continual Learning (**ICLR2024**)[[paper](https://arxiv.org/abs/2401.10191)][[code](https://github.com/grypesc/SEED)] |
|
- <a name="todo"></a> eTag: Class-Incremental Learning via Embedding Distillation and Task-Oriented Generation (**AAAI2024**) [[paper](https://ojs.aaai.org/index.php/AAAI/article/view/29153)][[code](https://github.com/libo-huang/eTag)] |
|
- <a name="todo"></a> Evolving Parameterized Prompt Memory for Continual Learning (**AAAI2024**)[[paper](https://ojs.aaai.org/index.php/AAAI/article/view/29231)][[code](https://github.com/MIV-XJTU/EvoPrompt)] |
|
- <a name="todo"></a> Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding (**AAAI2024**)[[paper](https://arxiv.org/abs/2401.09067)] |
|
- <a name="todo"></a> Fine-Grained Knowledge Selection and Restoration for Non-Exemplar Class Incremental Learning (**AAAI2024**)[[paper](https://arxiv.org/abs/2312.12722)] |
|
- <a name="todo"></a> Cross-Class Feature Augmentation for Class Incremental Learning (**AAAI2024**)[[paper](https://arxiv.org/abs/2304.01899)]
|
- <a name="todo"></a> Learning Task-Aware Language-Image Representation for Class-Incremental Object Detection (**AAAI2024**)[[paper](https://ojs.aaai.org/index.php/AAAI/article/view/28537/29047)] |
|
- <a name="todo"></a> MIND: Multi-Task Incremental Network Distillation (**AAAI2024**)[[paper](https://arxiv.org/abs/2312.02916)][[code](https://github.com/Lsabetta/MIND)] |
|
- <a name="todo"></a> Adapt Your Teacher: Improving Knowledge Distillation for Exemplar-free Continual Learning (**WACV2024**)[[paper](https://arxiv.org/abs/2308.09544)][[code](https://github.com/fszatkowski/cl-teacher-adaptation)] |
|
- <a name="todo"></a> Plasticity-Optimized Complementary Networks for Unsupervised Continual Learning (**WACV2024**)[[paper](https://arxiv.org/abs/2309.06086)]
|
- <a name="todo"></a> Online Class-Incremental Learning For Real-World Food Image Classification (**WACV2024**)[[paper](https://openaccess.thecvf.com/content/WACV2024/papers/Raghavan_Online_Class-Incremental_Learning_for_Real-World_Food_Image_Classification_WACV_2024_paper.pdf)] |
|
|
|
|
|
### 2023

- <a name="todo"></a> RaSP: Relation-aware Semantic Prior for Weakly Supervised Incremental Segmentation (**CoLLAs 2023**) [[paper](https://proceedings.mlr.press/v232/roy23a.html)] [[code](https://github.com/naver/rasp)]
|
- <a name="todo"></a> SIESTA: Efficient Online Continual Learning with Sleep (**TMLR 2023**)[[paper](https://arxiv.org/abs/2303.10725)] |
|
- <a name="todo"></a> Sub-network Discovery and Soft-masking for Continual Learning of Mixed Tasks (**EMNLP 2023**)[[paper](https://arxiv.org/abs/2310.09436)] |
|
- <a name="todo"></a> Incorporating neuro-inspired adaptability for continual learning in artificial intelligence (**Nature Machine Intelligence 2023**) [[paper](https://www.nature.com/articles/s42256-023-00747-w)]

- <a name="todo"></a> Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork (**NeurIPS 2023**) [[paper](https://proceedings.neurips.cc/paper_files/paper/2023/file/d7b3cef7c31b94a4a533db83d01a8882-Paper-Conference.pdf)] [[code](https://github.com/shanxiaojun/DSN)]

- <a name="todo"></a> Loss Decoupling for Task-Agnostic Continual Learning (**NeurIPS 2023**) [[paper](https://openreview.net/pdf?id=9Oi3YxIBSa)]

- <a name="todo"></a> Bilevel Coreset Selection in Continual Learning: A New Formulation and Algorithm (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=2dtU9ZbgSN)]

- <a name="todo"></a> Fairness Continual Learning Approach to Semantic Scene Understanding in Open-World Environments (**NeurIPS 2023**)[[paper](https://arxiv.org/abs/2305.15700)]

- <a name="todo"></a> An Efficient Dataset Condensation Plugin and Its Application to Continual Learning (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=Murj6wcjRw)]

- <a name="todo"></a> Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=Ph65E1bE6A)]

- <a name="todo"></a> Prediction and Control in Continual Reinforcement Learning (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=KakzVASqul)]

- <a name="todo"></a> On the Stability-Plasticity Dilemma in Continual Meta-Learning: Theory and Algorithm (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=DNHGKeOhLl)]

- <a name="todo"></a> Saving 100x Storage: Prototype Replay for Reconstructing Training Sample Distribution in Class-Incremental Semantic Segmentation (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=Ct0zPIe3xs)]

- <a name="todo"></a> A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2311.07784.pdf)]

- <a name="todo"></a> Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2312.05229.pdf)]

- <a name="todo"></a> A Unified Approach to Domain Incremental Learning with Memory: Theory and Algorithm (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2310.12244.pdf)][[code](https://github.com/Wang-ML-Lab/unified-continual-learning)]
|
- <a name="todo"></a> Minimax Forward and Backward Learning of Evolving Tasks with Performance Guarantees (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2310.15974.pdf)][[code](https://github.com/MachineLearningBCAM/IMRCs-for-incremental-learning-NeurIPS-2023)] |
|
- <a name="todo"></a> Recasting Continual Learning as Sequence Modeling (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2310.11952.pdf)] |
|
- <a name="todo"></a> Augmented Memory Replay-based Continual Learning Approaches for Network Intrusion Detection (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=yGLokEhdh9)] |
|
- <a name="todo"></a> Does Continual Learning Meet Compositionality? New Benchmarks and An Evaluation Framework (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=38bZuqQOhC)] |
|
- <a name="todo"></a> CL-NeRF: Continual Learning of Neural Radiance Fields for Evolving Scene Representation (**NeurIPS 2023**)[[paper](https://openreview.net/pdf?id=uZjpSBTPik)] |
|
- <a name="todo"></a> TriRE: A Multi-Mechanism Learning Paradigm for Continual Knowledge Retention and Promotion (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2310.08217.pdf)] |
|
- <a name="todo"></a> Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2305.10120.pdf)] |
|
- <a name="todo"></a> A Definition of Continual Reinforcement Learning (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2307.11046.pdf)] |
|
- <a name="todo"></a> RanPAC: Random Projections and Pre-trained Models for Continual Learning (**NeurIPS 2023**)[[paper](https://arxiv.org/pdf/2307.02251.pdf)] |
|
- <a name="todo"></a> Hierarchical Decomposition of Prompt-Based Continual Learning: Rethinking Obscured Sub-optimality (**NeurIPS 2023**)[[paper](https://arxiv.org/abs/2310.07234)] |
|
- <a name="todo"></a> FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning (**NeurIPS 2023**)[[paper](https://arxiv.org/abs/2309.14062)] |
|
- <a name="todo"></a> The Ideal Continual Learner: An Agent That Never Forgets (**ICML2023**) [[paper](https://arxiv.org/abs/2305.00316)] |
|
- <a name="todo"></a> Continual Learners are Incremental Model Generalizers (**ICML2023**)[[paper](http://arxiv.org/abs/2306.12026)] |
|
- <a name="todo"></a> Learnability and Algorithm for Continual Learning (**ICML2023**)[[paper](https://arxiv.org/pdf/2306.12646.pdf)][[code](https://github.com/k-gyuhak/CLOOD)] |
|
- <a name="todo"></a> Parameter-Level Soft-Masking for Continual Learning (**ICML2023**)[[paper](https://arxiv.org/pdf/2306.14775.pdf)] |
|
- <a name="todo"></a> Continual Learning in Linear Classification on Separable Data (**ICML2023**)[[paper](https://arxiv.org/pdf/2306.03534.pdf)] |
|
- <a name="todo"></a> DualHSIC: HSIC-Bottleneck and Alignment for Continual Learning (**ICML2023**)[[paper](https://arxiv.org/pdf/2305.00380.pdf)] |
|
- <a name="todo"></a> BiRT: Bio-inspired Replay in Vision Transformers for Continual Learning (**ICML2023**)[[paper](https://arxiv.org/pdf/2305.04769.pdf)] |
|
- <a name="todo"></a> DDGR: Continual Learning with Deep Diffusion-based Generative Replay (**ICML2023**)[[paper](https://openreview.net/pdf?id=RlqgQXZx6r)] |
|
- <a name="todo"></a> Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal (**ICML2023**)[[paper](http://proceedings.mlr.press/v202/marconato23a/marconato23a.pdf)] |
|
- <a name="todo"></a> Theory on Forgetting and Generalization of Continual Learning (**ICML2023**)[[paper](http://proceedings.mlr.press/v202/lin23f/lin23f.pdf)] |
|
- <a name="todo"></a> Poisoning Generative Replay in Continual Learning to Promote Forgetting (**ICML2023**)[[paper](https://proceedings.mlr.press/v202/kang23c/kang23c.pdf)] |
|
- <a name="todo"></a> Continual Vision-Language Representation Learning with Off-Diagonal Information (**ICML2023**)[[paper](https://arxiv.org/abs/2305.07437)] |
|
- <a name="todo"></a> Prototype-Sample Relation Distillation: Towards Replay-Free Continual Learning (**ICML2023**)[[paper](https://arxiv.org/abs/2303.14771)] |
|
- <a name="todo"></a> Does Continual Learning Equally Forget All Parameters? (**ICML2023**)[[paper](https://arxiv.org/abs/2304.04158)] |
|
- <a name="todo"></a> Growing a Brain with Sparsity-Inducing Generation for Continual Learning (**ICCV 2023**)[[paper](https://openaccess.thecvf.com/content/ICCV2023/papers/Jin_Growing_a_Brain_with_Sparsity-Inducing_Generation_for_Continual_Learning_ICCV_2023_paper.pdf)][[code](https://github.com/Jin0316/GrowBrain)]
|
- <a name="todo"></a> Self-regulating Prompts: Foundational Model Adaptation without Forgetting (**ICCV 2023**)[[paper](https://arxiv.org/abs/2307.06948)][[code](https://github.com/muzairkhattak/PromptSRC)] |
|
- <a name="todo"></a> Prototype Reminiscence and Augmented Asymmetric Knowledge Aggregation for Non-Exemplar Class-Incremental Learning (**ICCV 2023**)[[paper](https://openaccess.thecvf.com/content/ICCV2023/html/Shi_Prototype_Reminiscence_and_Augmented_Asymmetric_Knowledge_Aggregation_for_Non-Exemplar_Class-Incremental_ICCV_2023_paper.html)][[code](https://github.com/ShiWuxuan/PRAKA)] |
|
- <a name="todo"></a> Tangent Model Composition for Ensembling and Continual Fine-tuning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2307.08114)][[code](https://github.com/tianyu139/tangent-model-composition)] |
|
- <a name="todo"></a> CBA: Improving Online Continual Learning via Continual Bias Adaptor (**ICCV 2023**)[[paper](https://browse.arxiv.org/pdf/2308.06925.pdf)] |
|
- <a name="todo"></a> CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation (**ICCV 2023**)[[paper](https://browse.arxiv.org/pdf/2308.07146.pdf)][[code](https://github.com/KevinLight831/CTP)] |
|
- <a name="todo"></a> NAPA-VQ: Neighborhood Aware Prototype Augmentation with Vector Quantization for Continual Learning (**ICCV 2023**)[[paper](https://browse.arxiv.org/pdf/2308.09297.pdf)][[code](https://github.com/TamashaM/NAPA-VQ.git)] |
|
- <a name="todo"></a> Online Continual Learning on Hierarchical Label Expansion (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.14374)] |
|
- <a name="todo"></a> Class-Incremental Grouping Network for Continual Audio-Visual Learning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2309.05281)][[code](https://github.com/stoneMo/CIGN)] |
|
- <a name="todo"></a> Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right? (**ICCV 2023**)[[paper](https://arxiv.org/abs/2305.09275)][[code](https://github.com/drimpossible/EvalOCL)] |
|
- <a name="todo"></a> When Prompt-based Incremental Learning Does Not Meet Strong Pretraining (**ICCV 2023**)[[paper](https://openaccess.thecvf.com/content/ICCV2023/papers/Tang_When_Prompt-based_Incremental_Learning_Does_Not_Meet_Strong_Pretraining_ICCV_2023_paper.pdf)] |
|
- <a name="todo"></a> Online Class Incremental Learning on Stochastic Blurry Task Boundary via Mask and Visual Prompt Tuning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.09303)][[code](https://github.com/moonjunyyy/si-blurry)] |
|
- <a name="todo"></a> Dynamic Residual Classifier for Class Incremental Learning (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2308.13305.pdf)] |
|
- <a name="todo"></a> First Session Adaptation: A Strong Replay-Free Baseline for Class-Incremental Learning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2303.13199)] |
|
- <a name="todo"></a> Masked Autoencoders are Efficient Class Incremental Learners (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.12510)] |
|
- <a name="todo"></a> Introducing Language Guidance in Prompt-based Continual Learning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.15827)] |
|
- <a name="todo"></a> CLNeRF: Continual Learning Meets NeRFs (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.14816)] |
|
- <a name="todo"></a> Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2303.06628.pdf)][[code](https://github.com/Thunderbeee/ZSCL)] |
|
- <a name="todo"></a> LFS-GAN: Lifelong Few-Shot Image Generation (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.11917)] |
|
- <a name="todo"></a> TARGET: Federated Class-Continual Learning via Exemplar-Free Distillation (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2303.06937.pdf)] |
|
- <a name="todo"></a> Learning to Learn: How to Continuously Teach Humans and Machines (**ICCV 2023**)[[paper](https://arxiv.org/abs/2211.15470)] |
|
- <a name="todo"></a> Audio-Visual Class-Incremental Learning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.11073)][[code](https://github.com/weiguoPian/AV-CIL_ICCV2023)] |
|
- <a name="todo"></a> MetaGCD: Learning to Continually Learn in Generalized Category Discovery (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.11063)] |
|
- <a name="todo"></a> Exemplar-Free Continual Transformer with Convolutions (**ICCV 2023**)[[paper](https://arxiv.org/abs/2308.11357)][[code](https://github.com/CVIR/contracon)] |
|
- <a name="todo"></a> A Unified Continual Learning Framework with General Parameter-Efficient Tuning (**ICCV 2023**)[[paper](https://arxiv.org/abs/2303.10070)] |
|
- <a name="todo"></a> Incremental Generalized Category Discovery (**ICCV 2023**)[[paper](https://arxiv.org/abs/2304.14310)] |
|
- <a name="todo"></a> Heterogeneous Forgetting Compensation for Class-Incremental Learning (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2308.03374.pdf)][[code](https://github.com/JiahuaDong/HFC)] |
|
- <a name="todo"></a> Augmented Box Replay: Overcoming Foreground Shift for Incremental Object Detection (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2307.12427.pdf)][[code](https://github.com/YuyangSunshine/ABR_IOD)] |
|
- <a name="todo"></a> MRN: Multiplexed Routing Network for Incremental Multilingual Text Recognition (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2305.14758.pdf)][[code](https://github.com/simplify23/MRN)] |
|
- <a name="todo"></a> CLR: Channel-wise Lightweight Reprogramming for Continual Learning (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2307.11386.pdf)][[code](https://github.com/gyhandy/Channel-wise-Lightweight-Reprogramming)] |
|
- <a name="todo"></a> ICICLE: Interpretable Class Incremental Continual Learning (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2303.07811.pdf)] |
|
- <a name="todo"></a> Proxy Anchor-based Unsupervised Learning for Continuous Generalized Category Discovery (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2307.10943.pdf)] |
|
- <a name="todo"></a> SLCA: Slow Learner with Classifier Alignment for Continual Learning on a Pre-trained Model (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2303.05118.pdf)][[code](https://github.com/GengDavid/SLCA)] |
|
- <a name="todo"></a> Online Prototype Learning for Online Continual Learning (**ICCV 2023**)[[paper](https://arxiv.org/pdf/2308.00301.pdf)][[code](https://github.com/weilllllls/OnPro)] |
|
- <a name="todo"></a> Analyzing and Reducing the Performance Gap in Cross-Lingual Transfer with Fine-tuning Slow and Fast (**ACL2023**)[[paper](https://arxiv.org/abs/2305.11449)] |
|
- <a name="todo"></a> Class-Incremental Learning based on Label Generation (**ACL2023**)[[paper](https://arxiv.org/abs/2306.12619)] |
|
- <a name="todo"></a> Computationally Budgeted Continual Learning: What Does Matter? (**CVPR2023**)[[paper](https://arxiv.org/abs/2303.11165)][[code](https://github.com/drimpossible/BudgetCL)] |
|
- <a name="todo"></a> Real-Time Evaluation in Online Continual Learning: A New Hope (**CVPR2023**)[[paper](https://arxiv.org/abs/2302.01047)] |
|
- <a name="todo"></a> Dealing With Cross-Task Class Discrimination in Online Continual Learning (**CVPR2023**)[[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Guo_Dealing_With_Cross-Task_Class_Discrimination_in_Online_Continual_Learning_CVPR_2023_paper.html)][[code](https://github.com/gydpku/GSA)] |
|
- <a name="todo"></a> Decoupling Learning and Remembering: A Bilevel Memory Framework With Knowledge Projection for Task-Incremental Learning (**CVPR2023**)[[paper](https://openaccess.thecvf.com/content/CVPR2023/html/Sun_Decoupling_Learning_and_Remembering_A_Bilevel_Memory_Framework_With_Knowledge_CVPR_2023_paper.html)][[code](https://github.com/SunWenJu123/BMKP)] |
|
- <a name="todo"></a> GKEAL: Gaussian Kernel Embedded Analytic Learning for Few-shot Class Incremental Task (**CVPR2023**)[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhuang_GKEAL_Gaussian_Kernel_Embedded_Analytic_Learning_for_Few-Shot_Class_Incremental_CVPR_2023_paper.pdf)] |
|
- <a name="todo"></a> EcoTTA: Memory-Efficient Continual Test-time Adaptation via Self-distilled Regularization (**CVPR2023**)[[paper](https://arxiv.org/abs/2303.01904)] |
|
- <a name="todo"></a> Endpoints Weight Fusion for Class Incremental Semantic Segmentation (**CVPR2023**)[[paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Xiao_Endpoints_Weight_Fusion_for_Class_Incremental_Semantic_Segmentation_CVPR_2023_paper.pdf)] |
|
- <a name="todo"></a> On the Stability-Plasticity Dilemma of Class-Incremental Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.01663.pdf)] |
|
- <a name="todo"></a> Regularizing Second-Order Influences for Continual Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.10177.pdf)][[code](https://github.com/feifeiobama/InfluenceCL)] |
|
- <a name="todo"></a> Rebalancing Batch Normalization for Exemplar-based Class-Incremental Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2201.12559.pdf)] |
|
- <a name="todo"></a> Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.05288.pdf)] |
|
- <a name="todo"></a> A Probabilistic Framework for Lifelong Test-Time Adaptation (**CVPR2023**)[[paper](https://arxiv.org/pdf/2212.09713.pdf)][[code](https://github.com/dhanajitb/petal)] |
|
- <a name="todo"></a> Continual Semantic Segmentation with Automatic Memory Sample Selection (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.05015.pdf)] |
|
- <a name="todo"></a> Exploring Data Geometry for Continual Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.03931.pdf)] |
|
- <a name="todo"></a> PCR: Proxy-based Contrastive Replay for Online Class-Incremental Continual Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.04408.pdf)][[code](https://github.com/FelixHuiweiLin/PCR)] |
|
- <a name="todo"></a> Learning with Fantasy: Semantic-Aware Virtual Contrastive Constraint for Few-Shot Class-Incremental Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.00426.pdf)][[code](https://github.com/zysong0113/SAVC)] |
|
- <a name="todo"></a> Foundation Model Drives Weakly Incremental Learning for Semantic Segmentation (**CVPR2023**)[[paper](https://arxiv.org/pdf/2302.14250.pdf)] |
|
- <a name="todo"></a> Continual Detection Transformer for Incremental Object Detection (**CVPR2023**)[[paper](https://arxiv.org/pdf/2304.03110.pdf)][[code](https://github.com/yaoyao-liu/CL-DETR)] |
|
- <a name="todo"></a> PIVOT: Prompting for Video Continual Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2212.04842.pdf)] |
|
- <a name="todo"></a> CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2211.13218.pdf)][[code](https://github.com/GT-RIPL/CODA-Prompt)] |
|
- <a name="todo"></a> Principles of Forgetting in Domain-Incremental Semantic Segmentation in Adverse Weather Conditions (**CVPR2023**)[[paper](https://arxiv.org/pdf/2303.14115.pdf)] |
|
- <a name="todo"></a> Class-Incremental Exemplar Compression for Class-Incremental Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2303.14042.pdf)][[code](https://github.com/xfflzl/CIM-CIL)] |
|
- <a name="todo"></a> Dense Network Expansion for Class Incremental Learning (**CVPR2023**)[[paper](https://arxiv.org/pdf/2303.12696.pdf)] |
|
- <a name="todo"></a> Online Bias Correction for Task-Free Continual Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=18XzeuYZh_)] |
|
- <a name="todo"></a> Sparse Distributed Memory is a Continual Learner (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=JknGeelZJpHP)] |
|
- <a name="todo"></a> Continual Learning of Language Models (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=m_GDIItaI3o)] |
|
- <a name="todo"></a> Progressive Prompts: Continual Learning for Language Models without Forgetting (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=UJTgQBc91_)] |
|
- <a name="todo"></a> Is Forgetting Less a Good Inductive Bias for Forward Transfer? (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=dL35lx-mTEs)] |
|
- <a name="todo"></a> Online Boundary-Free Continual Learning by Scheduled Data Prior (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=qco4ekz2Epm)] |
|
- <a name="todo"></a>Incremental Learning of Structured Memory via Closed-Loop Transcription (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=XrgjF5-M3xi)] |
|
- <a name="todo"></a>Better Generative Replay for Continual Federated Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=cRxYWKiTan)] |
|
- <a name="todo"></a>3EF: Class-Incremental Learning via Efficient Energy-Based Expansion and Fusion (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=iP77_axu0h3)] |
|
- <a name="todo"></a>Progressive Voronoi Diagram Subdivision Enables Accurate Data-free Class-Incremental Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=zJXg_Wmob03)] |
|
- <a name="todo"></a>Learning without Prejudices: Continual Unbiased Learning via Benign and Malignant Forgetting (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=gfPUokHsW-)] |
|
- <a name="todo"></a>Building a Subspace of Policies for Scalable Continual Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=UKr0MwZM6fL)] |
|
- <a name="todo"></a>A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=S07feAlQHgM)] |
|
- <a name="todo"></a>Continual evaluation for lifelong learning: Identifying the stability gap (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=Zy350cRstc6)] |
|
- <a name="todo"></a>Continual Unsupervised Disentangling of Self-Organizing Representations (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=ih0uFRFhaZZ)] |
|
- <a name="todo"></a>Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=kPLzOfPfA2l)] |
|
- <a name="todo"></a>Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class-Incremental Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=y5W8tpojhtJ)] |
|
- <a name="todo"></a>On the Soft-Subnetwork for Few-Shot Class Incremental Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=z57WK5lGeHd)] |
|
- <a name="todo"></a>Task-Aware Information Routing from Common Representation Space in Lifelong Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=-M0TNnyWFT5)] |
|
- <a name="todo"></a>Error Sensitivity Modulation based Experience Replay: Mitigating Abrupt Representation Drift in Continual Learning (**ICLR2023**)[[paper]( https://openreview.net/pdf?id=zlbci7019Z3)] |
|
- <a name="todo"></a> Neural Weight Search for Scalable Task Incremental Learning (**WACV2023**)[[paper]( https://arxiv.org/abs/2211.13823)] |
|
- <a name="todo"></a> Attribution-aware Weight Transfer: A Warm-Start Initialization for Class-Incremental Semantic Segmentation (**WACV2023**)[[paper]( https://arxiv.org/abs/2210.07207)] |
|
- <a name="todo"></a> FeTrIL: Feature Translation for Exemplar-Free Class-Incremental Learning (**WACV2023**)[[paper]( https://arxiv.org/abs/2211.13131)] |
|
- <a name="todo"></a> Do Pre-trained Models Benefit Equally in Continual Learning? (**WACV2023**)[[paper]( https://arxiv.org/abs/2210.15701)] [[code](https://github.com/eric11220/pretrained-models-in-CL)] |
|
- <a name="todo"></a> Sparse Coding in a Dual Memory System for Lifelong Learning (**AAAI2023**)[[paper]( https://arxiv.org/abs/2301.05058)] [[code](https://github.com/NeurAI-Lab/SCoMMER)] |
|
|
|
### 2022 |
|
- <a name="todo"></a> Online Continual Learning through Mutual Information Maximization (**ICML2022**)[[paper](https://proceedings.mlr.press/v162/guo22g/guo22g.pdf)] |
|
- <a name="todo"></a> Prototype-guided continual adaptation for class-incremental unsupervised domain adaptation (**ECCV2022**)[[paper]( https://arxiv.org/pdf/2207.10856.pdf)] [[code](https://github.com/Hongbin98/ProCA)] |
|
- <a name="todo"></a> Balanced softmax cross-entropy for incremental learning with and without memory (**CVIU**)[[paper](https://www.sciencedirect.com/science/article/pii/S1077314222001606)] |
|
- <a name="todo"></a> Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection (**COLING2022**) [[paper](https://arxiv.org/abs/2204.07275)] [[code]( https://github.com/VT-NLP/Incremental_Prompting)] |
|
- <a name="todo"></a> Improving Task-free Continual Learning by Distributionally Robust Memory Evolution (**ICML2022**)[[paper](https://proceedings.mlr.press/v162/wang22v/wang22v.pdf)] |
|
- <a name="todo"></a> Forget-free Continual Learning with Winning Subnetworks (**ICML2022**)[[paper](https://proceedings.mlr.press/v162/kang22b/kang22b.pdf)] |
|
- <a name="todo"></a> NISPA: Neuro-Inspired Stability-Plasticity Adaptation for Continual Learning in Sparse Networks (**ICML2022**)[[paper](https://proceedings.mlr.press/v162/gurbuz22a/gurbuz22a.pdf)] |
|
- <a name="todo"></a> Continual Learning via Sequential Function-Space Variational Inference (**ICML2022**)[[paper](https://proceedings.mlr.press/v162/rudner22a/rudner22a.pdf)] |
|
- <a name="todo"></a> A Theoretical Study on Solving Continual Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2211.02633)] [[code](https://github.com/k-gyuhak/WPTP)] |
|
- <a name="todo"></a> ACIL: Analytic Class-Incremental Learning with Absolute Memorization and Privacy Protection (**NeurIPS2022**) [[paper](https://proceedings.neurips.cc/paper_files/paper/2022/file/4b74a42fc81fc7ee252f6bcb6e26c8be-Paper-Conference.pdf)] |
|
- <a name="todo"></a> Beyond Not-Forgetting: Continual Learning with Backward Knowledge Transfer (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2211.00789)] |
|
- <a name="todo"></a> Memory Efficient Continual Learning with Transformers (**NeurIPS2022**) [[paper](https://assets.amazon.science/44/6c/6d3f91ca4aa7a18149d30fa2c8a4/memory-efficient-continual-learning-with-transformers.pdf)] |
|
- <a name="todo"></a> Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2210.04524)] [[code](https://github.com/zoilsen/clom)] |
|
- <a name="todo"></a> Disentangling Transfer in Continual Reinforcement Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2209.13900)] |
|
- <a name="todo"></a> Task-Free Continual Learning via Online Discrepancy Distance Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2210.06579)] |
|
- <a name="todo"></a> A simple but strong baseline for online continual learning: Repeated Augmented Rehearsal (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2209.13917)] |
|
- <a name="todo"></a> S-Prompts Learning with Pre-trained Transformers: An Occam’s Razor for Domain Incremental Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2207.12819)] |
|
- <a name="todo"></a> Lifelong Neural Predictive Coding: Learning Cumulatively Online without Forgetting (**NeurIPS2022**) [[paper](https://arxiv.org/abs/1905.10696)] |
|
- <a name="todo"></a> Few-Shot Continual Active Learning by a Robot (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2210.04137)] |
|
- <a name="todo"></a> Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions(**NeurIPS2022**) [[paper](https://arxiv.org/abs/2203.14383)] |
|
- <a name="todo"></a> SparCL: Sparse Continual Learning on the Edge(**NeurIPS2022**) [[paper](https://arxiv.org/abs/2209.09476)] |
|
- <a name="todo"></a> CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks (**NeurIPS2022**) [[paper](https://openreview.net/forum?id=FhqzyGoTSH)] [[code](https://github.com/GLAMOR-USC/CLiMB)] |
|
- <a name="todo"></a> Continual Learning In Environments With Polynomial Mixing Times (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2112.07066)] [[code](https://github.com/sharathraparthy/polynomial-mixing-times)] |
|
- <a name="todo"></a> Exploring Example Influence in Continual Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2209.12241)] [[code](https://github.com/sssunqing/example_influence_cl)] |
|
- <a name="todo"></a> ALIFE: Adaptive Logit Regularizer and Feature Replay for Incremental Semantic Segmentation (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2210.06816)] |
|
- <a name="todo"></a> On the Effectiveness of Lipschitz-Driven Rehearsal in Continual Learning (**NeurIPS2022**) [[paper](https://arxiv.org/abs/2210.06443)] [[code](https://github.com/aimagelab/lider)] |
|
- <a name="todo"></a> On Reinforcement Learning and Distribution Matching for Fine-Tuning Language Models with no Catastrophic Forgetting (**NeurIPS2022**)[[paper](https://arxiv.org/abs/2206.00761)] |
|
- <a name="todo"></a> CGLB: Benchmark Tasks for Continual Graph Learning (**NeurIPS2022**)[[paper](https://openreview.net/forum?id=5wNiiIDynDF)] [[code](https://github.com/QueuQ/CGLB)] |
|
- <a name="todo"></a> How Well Do Unsupervised Learning Algorithms Model Human Real-time and Life-long Learning? (**NeurIPS2022**)[[paper](https://openreview.net/forum?id=c0l2YolqD2T)] |
|
- <a name="todo"></a> CoSCL: Cooperation of Small Continual Learners is Stronger than a Big One (**ECCV2022**)[[paper](https://arxiv.org/abs/2207.06543)] [[code](https://github.com/lywang3081/CoSCL)] |
|
- <a name="todo"></a>Generative Negative Text Replay for Continual Vision-Language Pretraining (**ECCV2022**) [[paper](https://arxiv.org/abs/2210.17322)] |
|
- <a name="todo"></a> DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2204.04799)] [[code](https://github.com/google-research/l2p)] |
|
- <a name="todo"></a> The Challenges of Continuous Self-Supervised Learning (**ECCV2022**)[[paper](https://arxiv.org/abs/2203.12710)] |
|
- <a name="todo"></a> Helpful or Harmful: Inter-Task Association in Continual Learning (**ECCV2022**)[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136710518.pdf)] |
|
- <a name="todo"></a> incDFM: Incremental Deep Feature Modeling for Continual Novelty Detection (**ECCV2022**)[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850581.pdf)] |
|
- <a name="todo"></a> S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning (**ECCV2022**)[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850427.pdf)] |
|
- <a name="todo"></a> Online Task-free Continual Learning with Dynamic Sparse Distributed Memory (**ECCV2022**)[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136850721.pdf)][[code](https://github.com/Julien-pour/Dynamic-Sparse-Distributed-Memory)] |
|
- <a name="todo"></a> Balancing between Forgetting and Acquisition in Incremental Subpopulation Learning (**ECCV2022**)[[paper](https://www.ecva.net/papers/eccv_2022/papers_ECCV/papers/136860354.pdf)] |
|
- <a name="todo"></a> Class-Incremental Learning with Cross-Space Clustering and Controlled Transfer (**ECCV2022**) [[paper](https://arxiv.org/abs/2208.03767)] [[code](https://github.com/ashok-arjun/CSCCT)] |
|
- <a name="todo"></a> FOSTER: Feature Boosting and Compression for Class-Incremental Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2204.04662)] [[code](https://github.com/G-U-N/ECCV22-FOSTER)] |
|
- <a name="todo"></a> Meta-Learning with Less Forgetting on Large-Scale Non-Stationary Task Distributions (**ECCV2022**) [[paper](https://arxiv.org/abs/2209.01501)] |
|
- <a name="todo"></a> R-DFCIL: Relation-Guided Representation Learning for Data-Free Class Incremental Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2203.13104)] [[code](https://github.com/jianzhangcs/r-dfcil)] |
|
- <a name="todo"></a> DLCFT: Deep Linear Continual Fine-Tuning for General Incremental Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2208.08112)] |
|
- <a name="todo"></a> Learning with Recoverable Forgetting (**ECCV2022**) [[paper](https://arxiv.org/abs/2207.08224)] |
|
- <a name="todo"></a> Prototype-Guided Continual Adaptation for Class-Incremental Unsupervised Domain Adaptation (**ECCV2022**) [[paper](https://arxiv.org/abs/2207.10856)] [[code](https://github.com/hongbin98/proca)] |
|
- <a name="todo"></a> Balancing Stability and Plasticity through Advanced Null Space in Continual Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2207.12061)] |
|
- <a name="todo"></a>Long-Tailed Class Incremental Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2210.00266)] |
|
- <a name="todo"></a>Anti-Retroactive Interference for Lifelong Learning (**ECCV2022**) [[paper](https://arxiv.org/abs/2208.12967)] |
|
- <a name="todo"></a>Novel Class Discovery without Forgetting (**ECCV2022**) [[paper](https://arxiv.org/abs/2207.10659)] |
|
- <a name="todo"></a>Class-incremental Novel Class Discovery (**ECCV2022**) [[paper](https://arxiv.org/abs/2207.08605)] |
|
- <a name="todo"></a>Few-Shot Class Incremental Learning From an Open-Set Perspective(**ECCV2022**)[[paper](https://arxiv.org/pdf/2208.00147.pdf)] |
|
- <a name="todo"></a>Incremental Task Learning with Incremental Rank Updates(**ECCV2022**)[[paper](https://arxiv.org/pdf/2207.09074.pdf)] |
|
- <a name="todo"></a>Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay(**ECCV2022**)[[paper](https://arxiv.org/pdf/2207.11213.pdf)] |
|
- <a name="todo"></a>Online Continual Learning with Contrastive Vision Transformer (**ECCV2022**)[[paper](https://arxiv.org/pdf/2207.13516.pdf)] |
|
- <a name="todo"></a>Transfer without Forgetting (**ECCV2022**) [[paper](https://arxiv.org/abs/2206.00388)][[code](https://github.com/mbosc/twf)] |
|
- <a name="todo"></a> Continual Training of Language Models for Few-Shot Learning (**EMNLP2022**) [[paper](https://arxiv.org/abs/2210.05549)] [[code](https://github.com/UIC-Liu-Lab/CPT)] |
|
- <a name="todo"></a> Uncertainty-aware Contrastive Distillation for Incremental Semantic Segmentation (**TPAMI2022**) [[paper](https://arxiv.org/abs/2203.14098)] |
|
- <a name="todo"></a> MgSvF: Multi-Grained Slow vs. Fast Framework for Few-Shot Class-Incremental Learning (**TPAMI2022**) [[paper](https://ieeexplore.ieee.org/abstract/document/9645290)] |
|
- <a name="todo"></a>Class-Incremental Continual Learning into the eXtended DER-verse (**TPAMI2022**) [[paper](https://arxiv.org/abs/2201.00766)] [[code](https://github.com/aimagelab/mammoth)] |
|
- <a name="todo"></a>Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks (**TPAMI2022**) [[paper](https://arxiv.org/abs/2203.17030)] [[code](https://github.com/zhoudw-zdw/TPAMI-Limit)] |
|
- <a name="todo"></a>Continual Semi-Supervised Learning through Contrastive Interpolation Consistency (**PRL2022**) [[paper](https://arxiv.org/abs/2108.06552)][[code](https://github.com/aimagelab/CSSL)] |
|
- <a name="todo"></a>GCR: Gradient Coreset Based Replay Buffer Selection for Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2111.11210)] |
|
- <a name="todo"></a>Learning Bayesian Sparse Networks With Full Experience Replay for Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2202.10203)] |
|
- <a name="todo"></a>Continual Learning With Lifelong Vision Transformer (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Wang_Continual_Learning_With_Lifelong_Vision_Transformer_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a>Towards Better Plasticity-Stability Trade-Off in Incremental Learning: A Simple Linear Connector (**CVPR2022**) [[paper](https://arxiv.org/abs/2110.07905)] |
|
- <a name="todo"></a>Doodle It Yourself: Class Incremental Learning by Drawing a Few Sketches (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.14843)] |
|
- <a name="todo"></a>Continual Learning for Visual Search with Backward Consistent Feature Embedding (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Wan_Continual_Learning_for_Visual_Search_With_Backward_Consistent_Feature_Embedding_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a>Online Continual Learning on a Contaminated Data Stream with Blurry Task Boundaries (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.15355)] |
|
- <a name="todo"></a>Not Just Selection, but Exploration: Online Class-Incremental Continual Learning via Dual View Consistency (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Gu_Not_Just_Selection_but_Exploration_Online_Class-Incremental_Continual_Learning_via_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a>Bring Evanescent Representations to Life in Lifelong Class Incremental Learning (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Toldo_Bring_Evanescent_Representations_to_Life_in_Lifelong_Class_Incremental_Learning_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a>Lifelong Graph Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2009.00647)] |
|
- <a name="todo"></a>Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation (**CVPR2022**) [[paper](https://arxiv.org/abs/2112.06632)] |
|
- <a name="todo"></a>vCLIMB: A Novel Video Class Incremental Learning Benchmark (**CVPR2022**) [[paper](https://arxiv.org/abs/2201.09381)] |
|
- <a name="todo"></a>Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation(**CVPR2022**) [[paper](https://arxiv.org/abs/2204.00895)] |
|
- <a name="todo"></a>Few-Shot Incremental Learning for Label-to-Image Translation (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_Few-Shot_Incremental_Learning_for_Label-to-Image_Translation_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a> MetaFSCIL: A Meta-Learning Approach for Few-Shot Class Incremental Learning (**CVPR2022**) [[paper](https://openaccess.thecvf.com/content/CVPR2022/papers/Chi_MetaFSCIL_A_Meta-Learning_Approach_for_Few-Shot_Class_Incremental_Learning_CVPR_2022_paper.pdf)] |
|
- <a name="todo"></a> Incremental Learning in Semantic Segmentation from Image Labels (**CVPR2022**) [[paper](https://arxiv.org/abs/2112.01882)] |
|
- <a name="todo"></a> Self-Supervised Models are Continual Learners (**CVPR2022**) [[paper](https://arxiv.org/abs/2112.04215)] [[code](https://github.com/DonkeyShot21/cassle)] |
|
- <a name="todo"></a> Learning to Imagine: Diversify Memory for Incremental Learning using Unlabeled Data (**CVPR2022**) [[paper](https://arxiv.org/abs/2204.08932)] |
|
- <a name="todo"></a> General Incremental Learning with Domain-aware Categorical Representations (**CVPR2022**) [[paper](https://arxiv.org/abs/2204.04078)] |
|
- <a name="todo"></a> Constrained Few-shot Class-incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.16588)] |
|
- <a name="todo"></a> Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation (**CVPR2022**) [[paper](https://arxiv.org/abs/2204.02136)] |
|
- <a name="todo"></a> Class-Incremental Learning with Strong Pre-trained Models (**CVPR2022**) [[paper](https://arxiv.org/abs/2204.03634)] |
|
- <a name="todo"></a> Energy-based Latent Aligner for Incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.14952)] [[code](https://github.com/JosephKJ/ELI)] |
|
- <a name="todo"></a> Meta-attention for ViT-backed Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.11684)] [[code](https://github.com/zju-vipa/MEAT-TIL)] |
|
- <a name="todo"></a> Learning to Prompt for Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2112.08654)] [[code](https://github.com/google-research/l2p)] |
|
- <a name="todo"></a> On Generalizing Beyond Domains in Cross-Domain Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.03970)] |
|
- <a name="todo"></a> Probing Representation Forgetting in Supervised and Unsupervised Continual Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.13381)] |
|
- <a name="todo"></a> Incremental Transformer Structure Enhanced Image Inpainting with Masking Positional Encoding (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.00867)] [[code](https://github.com/DQiaole/ZITS_inpainting)] |
|
- <a name="todo"></a> Mimicking the Oracle: An Initial Phase Decorrelation Approach for Class Incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2112.04731)] [[code](https://github.com/Yujun-Shi/CwD)] |
|
- <a name="todo"></a> Forward Compatible Few-Shot Class-Incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.06953)] [[code](https://github.com/zhoudw-zdw/CVPR22-Fact)] |
|
- <a name="todo"></a> Self-Sustaining Representation Expansion for Non-Exemplar Class-Incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.06359)] |
|
- <a name="todo"></a> DyTox: Transformers for Continual Learning with DYnamic TOken eXpansion (**CVPR2022**) [[paper](https://arxiv.org/abs/2111.11326)] |
|
- <a name="todo"></a> Federated Class-Incremental Learning (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.11473)] [[code](https://github.com/conditionWang/FCIL)] |
|
- <a name="todo"></a> Representation Compensation Networks for Continual Semantic Segmentation (**CVPR2022**) [[paper](https://arxiv.org/abs/2203.05402)] |
|
- <a name="todo"></a> A Multi-Head Model for Continual Learning via Out-of-Distribution Replay (**CoLLAs2022**) [[paper](https://arxiv.org/abs/2208.09734)] [[code](https://github.com/k-gyuhak/MORE)] |
|
- <a name="todo"></a> Continual Attentive Fusion for Incremental Learning in Semantic Segmentation (**TMM2022**) [[paper](https://arxiv.org/abs/2202.00432)] |
|
- <a name="todo"></a> Self-training for class-incremental semantic segmentation (**TNNLS2022**) [[paper](https://arxiv.org/abs/2012.03362)] |
|
- <a name="todo"></a> Effects of Auxiliary Knowledge on Continual Learning (**ICPR2022**) [[paper](https://arxiv.org/abs/2206.02577)] |
|
- <a name="todo"></a>Continual Sequence Generation with Adaptive Compositional Modules (**ACL2022**) [[paper](https://arxiv.org/pdf/2203.10652.pdf)] |
|
- <a name="todo"></a> Learngene: From Open-World to Your Learning Task (**AAAI2022**) [[paper](https://arxiv.org/pdf/2106.06788.pdf)] [[code](https://github.com/BruceQFWang/learngene)] |
|
- <a name="todo"></a> Rethinking the Representational Continuity: Towards Unsupervised Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=9Hrka5PA7LW)] |
|
- <a name="todo"></a> Continual Learning with Filter Atom Swapping (**ICLR2022**) [[paper](https://openreview.net/pdf?id=metRpM4Zrcb)] |
|
- <a name="todo"></a> Continual Learning with Recursive Gradient Optimization (**ICLR2022**) [[paper](https://openreview.net/pdf?id=7YDLgf9_zgm)] |
|
- <a name="todo"></a> TRGP: Trust Region Gradient Projection for Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=iEvAf8i6JjO)] |
|
- <a name="todo"></a> Looking Back on Learned Experiences For Class/task Incremental Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=RxplU3vmBx)] |
|
- <a name="todo"></a> Continual Normalization: Rethinking Batch Normalization for Online Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=vwLLQ-HwqhZ)] |
|
- <a name="todo"></a> Model Zoo: A Growing Brain That Learns Continually (**ICLR2022**) [[paper](https://openreview.net/pdf?id=WfvgGBcgbE7)] |
|
- <a name="todo"></a> Learning curves for continual learning in neural networks: Self-knowledge transfer and forgetting (**ICLR2022**) [[paper](https://openreview.net/pdf?id=tFgdrQbbaa)] |
|
- <a name="todo"></a> Memory Replay with Data Compression for Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=a7H7OucbWaU)] |
|
- <a name="todo"></a> Learning Fast, Learning Slow: A General Continual Learning Method based on Complementary Learning System (**ICLR2022**) [[paper](https://openreview.net/pdf?id=uxxFrDwrE7Y)] |
|
- <a name="todo"></a> Online Coreset Selection for Rehearsal-based Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=f9D-5WNG4Nv)] |
|
- <a name="todo"></a> Pretrained Language Model in Continual Learning: A Comparative Study (**ICLR2022**) [[paper](https://openreview.net/pdf?id=figzpGMrdD)] |
|
- <a name="todo"></a> Online Continual Learning on Class Incremental Blurry Task Configuration with Anytime Inference (**ICLR2022**) [[paper](https://openreview.net/pdf?id=nrGGfMbY_qK)] |
|
- <a name="todo"></a> New Insights on Reducing Abrupt Representation Change in Online Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=N8MaByOzUfb)] |
|
- <a name="todo"></a> Towards Continual Knowledge Learning of Language Models (**ICLR2022**) [[paper](https://openreview.net/pdf?id=vfsRB5MImo9)] |
|
- <a name="todo"></a> CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability (**ICLR2022**) [[paper](https://openreview.net/pdf?id=rHMaBYbkkRJ)] |
|
- <a name="todo"></a> CoMPS: Continual Meta Policy Search (**ICLR2022**) [[paper](https://openreview.net/pdf?id=PVJ6j87gOHz)] |
|
- <a name="todo"></a> Information-theoretic Online Memory Selection for Continual Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=IpctgL7khPp)] |
|
- <a name="todo"></a> Subspace Regularizers for Few-Shot Class Incremental Learning (**ICLR2022**) [[paper](https://openreview.net/pdf?id=boJy41J-tnQ)] |
|
- <a name="todo"></a> LFPT5: A Unified Framework for Lifelong Few-shot Language Learning Based on Prompt Tuning of T5 (**ICLR2022**) [[paper](https://openreview.net/pdf?id=HCRVf71PMF)] |
|
- <a name="todo"></a> Effect of scale on catastrophic forgetting in neural networks (**ICLR2022**) [[paper]( https://openreview.net/pdf?id=GhVS8_yPeEa)] |
|
- <a name="todo"></a> Dataset Knowledge Transfer for Class-Incremental Learning without Memory (**WACV2022**) [[paper](https://arxiv.org/pdf/2110.08421.pdf)] |
|
- <a name="todo"></a> Knowledge Capture and Replay for Continual Learning (**WACV2022**) [[paper](https://openaccess.thecvf.com/content/WACV2022/papers/Gopalakrishnan_Knowledge_Capture_and_Replay_for_Continual_Learning_WACV_2022_paper.pdf)] |
|
- <a name="todo"></a> Online Continual Learning via Candidates Voting (**WACV2022**) [[paper](https://openaccess.thecvf.com/content/WACV2022/papers/He_Online_Continual_Learning_via_Candidates_Voting_WACV_2022_paper.pdf)] |
|
- <a name="todo"></a> lpSpikeCon: Enabling Low-Precision Spiking Neural Network Processing for Efficient Unsupervised Continual Learning on Autonomous Agents (**IJCNN2022**) [[paper](https://doi.org/10.1109/IJCNN55064.2022.9892948)] |
|
- <a name="todo"></a> Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition (**Journal of Imaging 2022**) [[paper](https://www.mdpi.com/2313-433X/8/4/93)] |
|
|
|
### 2021 |
|
- <a name="todo"></a> Incremental Object Detection via Meta-Learning (**TPAMI 2021**) [[paper](https://arxiv.org/abs/2003.08798)] [[code](https://github.com/JosephKJ/iOD)] |
|
- <a name="todo"></a> Triple-Memory Networks: A Brain-Inspired Method for Continual Learning (**TNNLS 2021**) [[paper](https://ieeexplore.ieee.org/document/9540230)] |
|
- <a name="todo"></a> Memory efficient class-incremental learning for image classification (**TNNLS 2021**) [[paper](https://ieeexplore.ieee.org/abstract/document/9422177)] |
|
- <a name="todo"></a> A Procedural World Generation Framework for Systematic Evaluation of Continual Learning (**NeurIPS2021**) [[paper](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/d645920e395fedad7bbbed0eca3fe2e0-Abstract-round1.html)] |
|
- <a name="todo"></a> Class-Incremental Learning via Dual Augmentation (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/file/77ee3bc58ce560b86c2b59363281e914-Paper.pdf)] |
|
- <a name="todo"></a> SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning (**NeurIPS2021**) [[paper](https://proceedings.neurips.cc/paper/2021/file/5a9542c773018268fc6271f7afeea969-Paper.pdf)] |
|
- <a name="todo"></a> RMM: Reinforced Memory Management for Class-Incremental Learning (**NeurIPS2021**) [[paper](https://proceedings.neurips.cc/paper/2021/hash/1cbcaa5abbb6b70f378a3a03d0c26386-Abstract.html)] |
|
- <a name="todo"></a> Overcoming Catastrophic Forgetting in Incremental Few-Shot Learning by Finding Flat Minima (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=ALvt7nXa2q)] |
|
- <a name="todo"></a> Lifelong Domain Adaptation via Consolidated Internal Distribution (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=lpW-UP8VKcg)] |
|
- <a name="todo"></a> AFEC: Active Forgetting of Negative Transfer in Continual Learning (**NeurIPS2021**) [[paper](https://openreview.net/pdf/72a18fad6fce88ef0286e9c7582229cf1c8d9f93.pdf)] |
|
- <a name="todo"></a> Natural continual learning: success is a journey, not (just) a destination (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=W9250bXDgpK)] |
|
- <a name="todo"></a> Gradient-based Editing of Memory Examples for Online Task-free Continual Learning (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/hash/f45a1078feb35de77d26b3f7a52ef502-Abstract.html)] |
|
- <a name="todo"></a> Optimizing Reusable Knowledge for Continual Learning via Metalearning (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=hHTctAv9Lvh)] |
|
- <a name="todo"></a> Formalizing the Generalization-Forgetting Trade-off in Continual Learning (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=u1XV9BPAB9)] |
|
- <a name="todo"></a> Learning where to learn: Gradient sparsity in meta and continual learning (**NeurIPS2021**) [[paper](https://arxiv.org/abs/2110.14402)] |
|
- <a name="todo"></a> Flattening Sharpness for Dynamic Gradient Projection Memory Benefits Continual Learning (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=q1eCa1kMfDd)] |
|
- <a name="todo"></a> Posterior Meta-Replay for Continual Learning (**NeurIPS2021**) [[paper](https://arxiv.org/abs/2103.01133)] |
|
- <a name="todo"></a> Continual Auxiliary Task Learning (**NeurIPS2021**) [[paper](https://openreview.net/forum?id=EpL9IFAMa3)] |
|
- <a name="todo"></a> Mitigating Forgetting in Online Continual Learning with Neuron Calibration (**NeurIPS2021**) [[paper](https://openreview.net/pdf/cc3ebd7a4834a4551e0b1f825969f9f51fd06415.pdf)] |
|
- <a name="todo"></a> BNS: Building Network Structures Dynamically for Continual Learning (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/hash/ac64504cc249b070772848642cffe6ff-Abstract.html)] |
|
- <a name="todo"></a> DualNet: Continual Learning, Fast and Slow (**NeurIPS2021**) [[paper](https://openreview.net/pdf?id=eQ7Kh-QeWnO)] |
|
- <a name="todo"></a> BooVAE: Boosting Approach for Continual Learning of VAE (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/hash/952285b9b7e7a1be5aa7849f32ffff05-Abstract.html)] |
|
- <a name="todo"></a> Generative vs. Discriminative: Rethinking The Meta-Continual Learning (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/hash/b4e267d84075f66ebd967d95331fcc03-Abstract.html)] |
|
- <a name="todo"></a> Achieving Forgetting Prevention and Knowledge Transfer in Continual Learning (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/hash/bcd0049c35799cdf57d06eaf2eb3cff6-Abstract.html)] |
|
- <a name="todo"></a> Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection (**NeurIPS2021**) [[paper](https://papers.nips.cc/paper/2021/file/ffc58105bf6f8a91aba0fa2d99e6f106-Paper.pdf)] [[code](https://github.com/dongnana777/Bridging-Non-Co-occurrence)] |
|
- <a name="todo"></a> SS-IL: Separated Softmax for Incremental Learning (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Ahn_SS-IL_Separated_Softmax_for_Incremental_Learning_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Striking a Balance between Stability and Plasticity for Class-Incremental Learning (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Wu_Striking_a_Balance_Between_Stability_and_Plasticity_for_Class-Incremental_Learning_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Synthesized Feature based Few-Shot Class-Incremental Learning on a Mixture of Subspaces (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Cheraghian_Synthesized_Feature_Based_Few-Shot_Class-Incremental_Learning_on_a_Mixture_of_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Class-Incremental Learning for Action Recognition in Videos (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Park_Class-Incremental_Learning_for_Action_Recognition_in_Videos_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/De_Lange_Continual_Prototype_Evolution_Learning_Online_From_Non-Stationary_Data_Streams_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Rehearsal Revealed: The Limits and Merits of Revisiting Samples in Continual Learning (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2104.07446)] |
|
- <a name="todo"></a> Co2L: Contrastive Continual Learning (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Cha_Co2L_Contrastive_Continual_Learning_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Wanderlust: Online Continual Object Detection in the Real World (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Wang_Wanderlust_Online_Continual_Object_Detection_in_the_Real_World_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Continual Learning on Noisy Data Streams via Self-Purified Replay (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Kim_Continual_Learning_on_Noisy_Data_Streams_via_Self-Purified_Replay_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Else-Net: Elastic Semantic Network for Continual Action Recognition from Skeleton Data (**ICCV, 2021**) [[paper](https://openaccess.thecvf.com/content/ICCV2021/papers/Li_Else-Net_Elastic_Semantic_Network_for_Continual_Action_Recognition_From_Skeleton_ICCV_2021_paper.pdf)] |
|
- <a name="todo"></a> Detection and Continual Learning of Novel Face Presentation Attacks (**ICCV, 2021**) [[paper](https://arxiv.org/pdf/2108.12081.pdf)] |
|
- <a name="todo"></a> Online Continual Learning with Natural Distribution Shifts: An Empirical Study with Visual Data (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2108.09020)] |
|
- <a name="todo"></a> Continual Learning for Image-Based Camera Localization (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2108.09112)] |
|
- <a name="todo"></a> Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2108.08165)] |
|
- <a name="todo"></a> Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2106.09701)] |
|
- <a name="todo"></a> RECALL: Replay-based Continual Learning in Semantic Segmentation (**ICCV, 2021**) [[paper](https://arxiv.org/pdf/2108.03673.pdf)] |
|
- <a name="todo"></a> Few-Shot and Continual Learning with Attentive Independent Mechanisms (**ICCV, 2021**) [[paper](https://arxiv.org/abs/2107.14053)] |
|
- <a name="todo"></a> Learning with Selective Forgetting (**IJCAI, 2021**) [[paper](https://www.ijcai.org/proceedings/2021/0137.pdf)] |
|
- <a name="todo"></a> Kernel Continual Learning (**ICML, 2021**) [[paper](https://proceedings.mlr.press/v139/derakhshani21a.html)] |
|
- <a name="todo"></a> Variational Auto-Regressive Gaussian Processes for Continual Learning (**ICML, 2021**) [[paper](https://proceedings.mlr.press/v139/kapoor21b.html)] |
|
- <a name="todo"></a> Bayesian Structural Adaptation for Continual Learning (**ICML, 2021**) [[paper](https://proceedings.mlr.press/v139/kumar21a.html)] |
|
- <a name="todo"></a> Continual Learning in the Teacher-Student Setup: Impact of Task Similarity (**ICML, 2021**) [[paper](https://proceedings.mlr.press/v139/lee21e.html)] |
|
- <a name="todo"></a> Continuous Coordination As a Realistic Scenario for Lifelong Learning (**ICML, 2021**) [[paper](https://proceedings.mlr.press/v139/nekoei21a.html)] |
|
- <a name="todo"></a> Federated Continual Learning with Weighted Inter-client Transfer (**ICML, 2021**) [[paper](http://proceedings.mlr.press/v139/yoon21b/yoon21b.pdf)] |
|
- <a name="todo"></a> Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks (**NAACL, 2021**) [[paper](https://www.aclweb.org/anthology/2021.naacl-main.378.pdf)] |
|
- <a name="todo"></a> Continual Learning for Text Classification with Information Disentanglement Based Regularization (**NAACL, 2021**) [[paper](https://www.aclweb.org/anthology/2021.naacl-main.218.pdf)] |
|
- <a name="todo"></a> CLASSIC: Continual and Contrastive Learning of Aspect Sentiment Classification Tasks (**EMNLP, 2021**) [[paper](https://aclanthology.org/2021.emnlp-main.550/)][[code](https://github.com/ZixuanKe/PyContinual)] |
|
- <a name="todo"></a> Co-Transport for Class-Incremental Learning (**ACM MM, 2021**) [[paper](https://arxiv.org/pdf/2107.12654.pdf)] |
|
- <a name="todo"></a> Towards Open World Object Detection (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Joseph_Towards_Open_World_Object_Detection_CVPR_2021_paper.pdf)] [[code](https://github.com/JosephKJ/OWOD)] [[video](https://www.youtube.com/watch?v=aB2ZFAR-OZg)] |
|
- <a name="todo"></a> Prototype Augmentation and Self-Supervision for Incremental Learning (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Prototype_Augmentation_and_Self-Supervision_for_Incremental_Learning_CVPR_2021_paper.pdf)] [[code](https://github.com/Impression2805/CVPR21_PASS)] |
|
- <a name="todo"></a> ORDisCo: Effective and Efficient Usage of Incremental Unlabeled Data for Semi-supervised Continual Learning (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Wang_ORDisCo_Effective_and_Efficient_Usage_of_Incremental_Unlabeled_Data_for_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Incremental Learning via Rate Reduction (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Wu_Incremental_Learning_via_Rate_Reduction_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> IIRC: Incremental Implicitly-Refined Classification (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Abdelsalam_IIRC_Incremental_Implicitly-Refined_Classification_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Continual Adaptation of Visual Representations via Domain Randomization and Meta-learning (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Volpi_Continual_Adaptation_of_Visual_Representations_via_Domain_Randomization_and_Meta-Learning_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Image De-raining via Continual Learning (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhou_Image_De-Raining_via_Continual_Learning_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Continual Learning via Bit-Level Information Preserving (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Shi_Continual_Learning_via_Bit-Level_Information_Preserving_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Hyper-LifelongGAN: Scalable Lifelong Learning for Image Conditioned Generation (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhai_Hyper-LifelongGAN_Scalable_Lifelong_Learning_for_Image_Conditioned_Generation_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Lifelong Person Re-Identification via Adaptive Knowledge Accumulation (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Pu_Lifelong_Person_Re-Identification_via_Adaptive_Knowledge_Accumulation_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Distilling Causal Effect of Data in Class-Incremental Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.01737)] |
|
- <a name="todo"></a> Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning (**CVPR, 2021**) [[paper](https://openaccess.thecvf.com/content/CVPR2021/papers/Zhu_Self-Promoted_Prototype_Refinement_for_Few-Shot_Class-Incremental_Learning_CVPR_2021_paper.pdf)] |
|
- <a name="todo"></a> Layerwise Optimization by Gradient Decomposition for Continual Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2105.07561)] |
|
- <a name="todo"></a> Adaptive Aggregation Networks for Class-Incremental Learning (**CVPR, 2021**) [[paper](https://arxiv.org/pdf/2010.05063.pdf)] |
|
- <a name="todo"></a> Incremental Few-Shot Instance Segmentation (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2105.05312)] |
|
- <a name="todo"></a> Efficient Feature Transformations for Discriminative and Generative Continual Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.13558)] |
|
- <a name="todo"></a> On Learning the Geodesic Path for Incremental Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2104.08572)] |
|
- <a name="todo"></a> Few-Shot Incremental Learning with Continually Evolved Classifiers (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2104.03047)] |
|
- <a name="todo"></a> Rectification-based Knowledge Retention for Continual Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.16597)] |
|
- <a name="todo"></a> DER: Dynamically Expandable Representation for Class Incremental Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.16788)] |
|
- <a name="todo"></a> Rainbow Memory: Continual Learning with a Memory of Diverse Samples (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.17230)] |
|
- <a name="todo"></a> Training Networks in Null Space of Feature Covariance for Continual Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.07113)] |
 |
- <a name="todo"></a> Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.04059)] |
 |
- <a name="todo"></a> PLOP: Learning without Forgetting for Continual Semantic Segmentation (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2011.11390)] |
 |
- <a name="todo"></a> Continual Semantic Segmentation via Repulsion-Attraction of Sparse and Disentangled Latent Representations (**CVPR, 2021**) [[paper](https://arxiv.org/abs/2103.06342)] |
|
- <a name="todo"></a> Online Class-Incremental Continual Learning with Adversarial Shapley Value (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2009.00093)] [[code](https://github.com/RaptorMai/online-continual-learning)] |
 |
- <a name="todo"></a> Lifelong and Continual Learning Dialogue Systems: Learning during Conversation (**AAAI, 2021**) [[paper](https://www.cs.uic.edu/~liub/publications/LINC_paper_AAAI_2021_camera_ready.pdf)] |
 |
- <a name="todo"></a> Continual Learning for Named Entity Recognition (**AAAI, 2021**) [[paper](https://www.amazon.science/publications/continual-learning-for-named-entity-recognition)] |
 |
- <a name="todo"></a> Using Hindsight to Anchor Past Knowledge in Continual Learning (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2002.08165)] |
 |
- <a name="todo"></a> Split-and-Bridge: Adaptable Class Incremental Learning within a Single Neural Network (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2107.01349)] [[code](https://github.com/bigdata-inha/Split-and-Bridge)] |
 |
- <a name="todo"></a> Curriculum-Meta Learning for Order-Robust Continual Relation Extraction (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2101.01926)] |
 |
- <a name="todo"></a> Continual Learning by Using Information of Each Class Holistically (**AAAI, 2021**) [[paper](https://www.cs.uic.edu/~liub/publications/AAAI2021_PCL.pdf)] |
 |
- <a name="todo"></a> Gradient Regularized Contrastive Learning for Continual Domain Adaptation (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2007.12942)] |
 |
- <a name="todo"></a> Unsupervised Model Adaptation for Continual Semantic Segmentation (**AAAI, 2021**) [[paper](https://arxiv.org/abs/2009.12518)] |
 |
- <a name="todo"></a> A Continual Learning Framework for Uncertainty-Aware Interactive Image Segmentation (**AAAI, 2021**) [[paper](https://www.aaai.org/AAAI21Papers/AAAI-2989.ZhengE.pdf)] |
 |
- <a name="todo"></a> Do Not Forget to Attend to Uncertainty While Mitigating Catastrophic Forgetting (**WACV, 2021**) [[paper](https://openaccess.thecvf.com/content/WACV2021/html/Kurmi_Do_Not_Forget_to_Attend_to_Uncertainty_While_Mitigating_Catastrophic_WACV_2021_paper.html)] |
|
- <a name="todo"></a> SpikeDyn: A Framework for Energy-Efficient Spiking Neural Networks with Continual and Unsupervised Learning Capabilities in Dynamic Environments (**DAC2021**) [[paper](https://doi.org/10.1109/DAC18074.2021.9586281)] |
|
|
|
### 2020 |
|
- <a name="todo"></a> Rethinking Experience Replay: a Bag of Tricks for Continual Learning (**ICPR, 2020**) [[paper](https://arxiv.org/abs/2010.05595)] [[code](https://github.com/hastings24/rethinking_er)] |
 |
- <a name="todo"></a> Continual Learning for Natural Language Generation in Task-oriented Dialog Systems (**EMNLP, 2020**) [[paper](https://arxiv.org/abs/2010.00910)] |
 |
- <a name="todo"></a> Distill and Replay for Continual Language Learning (**COLING, 2020**) [[paper](https://www.aclweb.org/anthology/2020.coling-main.318.pdf)] |
|
- <a name="todo"></a> Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks (**NeurIPS2020**) [[paper](https://proceedings.neurips.cc/paper/2020/file/d7488039246a405baf6a7cbc3613a56f-Paper.pdf)] [[code](https://github.com/ZixuanKe/CAT)] |
|
- <a name="todo"></a> Meta-Consolidation for Continual Learning (**NeurIPS2020**) [[paper](https://arxiv.org/abs/2010.00352?context=cs)] |
|
- <a name="todo"></a> Understanding the Role of Training Regimes in Continual Learning (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2006.06958.pdf)] |
|
- <a name="todo"></a> Continual Learning with Node-Importance based Adaptive Group Sparse Regularization (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2003.13726.pdf)] |
|
- <a name="todo"></a> Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2003.05856.pdf)] |
|
- <a name="todo"></a> Coresets via Bilevel Optimization for Continual Learning and Streaming (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2006.03875.pdf)] |
|
- <a name="todo"></a> RATT: Recurrent Attention to Transient Tasks for Continual Image Captioning (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2007.06271.pdf)] |
|
- <a name="todo"></a> Continual Deep Learning by Functional Regularisation of Memorable Past (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2004.14070.pdf)] |
|
- <a name="todo"></a> Dark Experience for General Continual Learning: a Strong, Simple Baseline (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2004.07211.pdf)] [[code](https://github.com/aimagelab/mammoth)] |
|
- <a name="todo"></a> GAN Memory with No Forgetting (**NeurIPS2020**) [[paper](https://arxiv.org/pdf/2006.07543.pdf)] |
|
- <a name="todo"></a> Calibrating CNNs for Lifelong Learning (**NeurIPS2020**) [[paper](http://people.ee.duke.edu/~lcarin/Final_Calibration_Incremental_Learning_NeurIPS_2020.pdf)] |
|
- <a name="todo"></a> Mitigating Forgetting in Online Continual Learning via Instance-Aware Parameterization (**NeurIPS2020**) [[paper](https://papers.nips.cc/paper/2020/file/ca4b5656b7e193e6bb9064c672ac8dce-Paper.pdf)] |
|
- <a name="todo"></a> ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation (**RecSys, 2020**) [[paper](https://arxiv.org/abs/2007.12000)] |
|
- <a name="todo"></a> Initial Classifier Weights Replay for Memoryless Class Incremental Learning (**BMVC2020**) [[paper](https://arxiv.org/pdf/2008.13710.pdf)] |
|
- <a name="todo"></a> Adversarial Continual Learning (**ECCV2020**) [[paper](https://arxiv.org/abs/2003.09553)] [[code](https://github.com/facebookresearch/Adversarial-Continual-Learning)] |
|
- <a name="todo"></a> REMIND Your Neural Network to Prevent Catastrophic Forgetting (**ECCV2020**) [[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123530460.pdf)] [[code](https://github.com/tyler-hayes/REMIND)] |
|
- <a name="todo"></a> Incremental Meta-Learning via Indirect Discriminant Alignment (**ECCV2020**) [[paper](https://arxiv.org/abs/2002.04162)] |
|
- <a name="todo"></a> Memory-Efficient Incremental Learning Through Feature Adaptation (**ECCV2020**) [[paper](https://arxiv.org/abs/2004.00713)] |
|
- <a name="todo"></a> PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning (**ECCV2020**) [[paper](https://arxiv.org/abs/2004.13513)] [[code](https://github.com/arthurdouillard/incremental_learning.pytorch)] |
|
- <a name="todo"></a> Reparameterizing Convolutions for Incremental Multi-Task Learning Without Task Interference (**ECCV2020**) [[paper](https://arxiv.org/abs/2007.12540)] |
|
- <a name="todo"></a> Learning latent representations across multiple data domains using Lifelong VAEGAN (**ECCV2020**) [[paper](https://arxiv.org/abs/2007.10221)] |
|
- <a name="todo"></a> Online Continual Learning under Extreme Memory Constraints (**ECCV2020**) [[paper](https://arxiv.org/abs/2008.01510)] |
|
- <a name="todo"></a> Class-Incremental Domain Adaptation (**ECCV2020**) [[paper](https://arxiv.org/abs/2008.01389)] |
|
- <a name="todo"></a> More Classifiers, Less Forgetting: A Generic Multi-classifier Paradigm for Incremental Learning (**ECCV2020**) [[paper](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710698.pdf)] |
|
- <a name="todo"></a> Piggyback GAN: Efficient Lifelong Learning for Image Conditioned Generation (**ECCV2020**) [[paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660392.pdf)] |
|
- <a name="todo"></a> GDumb: A Simple Approach that Questions Our Progress in Continual Learning (**ECCV2020**) [[paper](http://www.robots.ox.ac.uk/~tvg/publications/2020/gdumb.pdf)] |
|
- <a name="todo"></a> Imbalanced Continual Learning with Partitioning Reservoir Sampling (**ECCV2020**) [[paper](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123580409.pdf)] |
|
- <a name="todo"></a> Topology-Preserving Class-Incremental Learning (**ECCV2020**) [[paper](http://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123640256.pdf)] |
|
- <a name="todo"></a> GraphSAIL: Graph Structure Aware Incremental Learning for Recommender Systems (**CIKM2020**) [[paper](https://arxiv.org/abs/2008.13517)] |
|
- <a name="todo"></a> OvA-INN: Continual Learning with Invertible Neural Networks (**IJCNN2020**) [[paper](https://arxiv.org/abs/2006.13772)] |
|
- <a name="todo"></a> XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning (**ICML2020**) [[paper](https://arxiv.org/pdf/2003.08561.pdf)] |
|
- <a name="todo"></a> Optimal Continual Learning has Perfect Memory and is NP-hard (**ICML2020**) [[paper](https://arxiv.org/pdf/2006.05188.pdf)] |
|
- <a name="todo"></a> Neural Topic Modeling with Continual Lifelong Learning (**ICML2020**) [[paper](https://arxiv.org/pdf/2006.10909.pdf)] |
|
- <a name="todo"></a> Continual Learning with Knowledge Transfer for Sentiment Classification (**ECML-PKDD2020**) [[paper](https://www.cs.uic.edu/~liub/publications/ECML-PKDD-2020.pdf)] [[code](https://github.com/ZixuanKe/LifelongSentClass)] |
|
- <a name="todo"></a> Semantic Drift Compensation for Class-Incremental Learning (**CVPR2020**) [[paper](https://arxiv.org/pdf/2004.00440.pdf)] [[code](https://github.com/yulu0724/SDC-IL)] |
|
- <a name="todo"></a> Few-Shot Class-Incremental Learning (**CVPR2020**) [[paper](https://arxiv.org/pdf/2004.10956.pdf)] |
|
- <a name="todo"></a> Modeling the Background for Incremental Learning in Semantic Segmentation (**CVPR2020**) [[paper](https://arxiv.org/pdf/2002.00718.pdf)] |
|
- <a name="todo"></a> Incremental Few-Shot Object Detection (**CVPR2020**) [[paper](https://arxiv.org/pdf/2003.04668.pdf)] |
|
- <a name="todo"></a> Incremental Learning In Online Scenario (**CVPR2020**) [[paper](https://arxiv.org/pdf/2003.13191.pdf)] |
|
- <a name="todo"></a> Maintaining Discrimination and Fairness in Class Incremental Learning (**CVPR2020**) [[paper](https://arxiv.org/pdf/1911.07053.pdf)] |
|
- <a name="todo"></a> Conditional Channel Gated Networks for Task-Aware Continual Learning (**CVPR2020**) [[paper](https://arxiv.org/pdf/2004.00070.pdf)] |
|
- <a name="todo"></a> Continual Learning with Extended Kronecker-factored Approximate Curvature (**CVPR2020**) [[paper](https://arxiv.org/abs/2004.07507)] |
|
- <a name="todo"></a> iTAML: An Incremental Task-Agnostic Meta-learning Approach (**CVPR2020**) [[paper](https://arxiv.org/pdf/2003.11652.pdf)] [[code](https://github.com/brjathu/iTAML)] |
|
- <a name="todo"></a> Mnemonics Training: Multi-Class Incremental Learning without Forgetting (**CVPR2020**) [[paper](https://arxiv.org/pdf/2002.10211.pdf)] [[code](https://github.com/yaoyao-liu/mnemonics)] |
|
- <a name="todo"></a> ScaIL: Classifier Weights Scaling for Class Incremental Learning (**WACV2020**) [[paper](https://arxiv.org/abs/2001.05755)] |
|
- <a name="todo"></a> Accepted papers (**ICLR2020**) [[paper](https://docs.google.com/presentation/d/17s5Y8N9dypH-59tuwKaCp80NYBxTmtT6V-zOFlsH-SA/edit?usp=sharing)] |
|
- <a name="todo"></a> Brain-inspired replay for continual learning with artificial neural networks (**Nature Communications 2020**) [[paper](https://www.nature.com/articles/s41467-020-17866-2)] [[code](https://github.com/GMvandeVen/brain-inspired-replay)] |
|
- <a name="todo"></a> Learning to Continually Learn (**ECAI 2020**) [[paper](https://arxiv.org/abs/2002.09571)] [[code](https://github.com/uvm-neurobotics-lab/ANML)] |
|
### 2019 |
|
- <a name="todo"></a> Compacting, Picking and Growing for Unforgetting Continual Learning (**NeurIPS2019**)[[paper](https://papers.nips.cc/paper/9518-compacting-picking-and-growing-for-unforgetting-continual-learning.pdf)][[code](https://github.com/ivclab/CPG)] |
|
- <a name="todo"></a> Increasingly Packing Multiple Facial-Informatics Modules in A Unified Deep-Learning Model via Lifelong Learning (**ICMR2019**) [[paper](https://dl.acm.org/doi/10.1145/3323873.3325053)][[code](https://github.com/ivclab/PAE)] |
|
- <a name="todo"></a> Towards Training Recurrent Neural Networks for Lifelong Learning (**Neural Computation 2019**) [[paper](https://arxiv.org/pdf/1811.07017.pdf)] |
|
- <a name="todo"></a> Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay (**IJCAI2019**) [[paper](https://www.ijcai.org/Proceedings/2019/0463.pdf)] |
|
- <a name="todo"></a> IL2M: Class Incremental Learning With Dual Memory (**ICCV2019**) [[paper](http://openaccess.thecvf.com/content_ICCV_2019/papers/Belouadah_IL2M_Class_Incremental_Learning_With_Dual_Memory_ICCV_2019_paper.pdf)] |
 |
- <a name="todo"></a> Incremental Learning Using Conditional Adversarial Networks (**ICCV2019**) [[paper](http://openaccess.thecvf.com/content_ICCV_2019/html/Xiang_Incremental_Learning_Using_Conditional_Adversarial_Networks_ICCV_2019_paper.html)] |
|
- <a name="todo"></a> Adaptive Deep Models for Incremental Learning: Considering Capacity Scalability and Sustainability (**KDD2019**) [[paper](http://www.lamda.nju.edu.cn/yangy/KDD19.pdf)] |
|
- <a name="todo"></a> Random Path Selection for Incremental Learning (**NeurIPS2019**) [[paper](https://arxiv.org/pdf/1906.01120.pdf)] |
|
- <a name="todo"></a> Online Continual Learning with Maximal Interfered Retrieval (**NeurIPS2019**) [[paper](http://papers.neurips.cc/paper/9357-online-continual-learning-with-maximal-interfered-retrieval)] |
|
- <a name="todo"></a> Meta-Learning Representations for Continual Learning (**NeurIPS2019**) [[paper](http://papers.nips.cc/paper/8458-meta-learning-representations-for-continual-learning.pdf)] [[code](https://github.com/Khurramjaved96/mrcl)] |
|
- <a name="todo"></a> Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild (**ICCV2019**) [[paper](https://arxiv.org/pdf/1903.12648.pdf)] |
|
- <a name="todo"></a> Continual Learning by Asymmetric Loss Approximation with Single-Side Overestimation (**ICCV2019**) [[paper](https://arxiv.org/pdf/1908.02984.pdf)] |
|
- <a name="todo"></a> Lifelong GAN: Continual Learning for Conditional Image Generation (**ICCV2019**) [[paper](https://arxiv.org/pdf/1907.10107.pdf)] |
|
- <a name="todo"></a> Continual learning of context-dependent processing in neural networks (**Nature Machine Intelligence 2019**) [[paper](https://rdcu.be/bOaa3)] [[code](https://github.com/beijixiong3510/OWM)] |
|
- <a name="todo"></a> Large Scale Incremental Learning (**CVPR2019**) [[paper](https://arxiv.org/abs/1905.13260)] [[code](https://github.com/wuyuebupt/LargeScaleIncrementalLearning)] |
|
- <a name="todo"></a> Learning a Unified Classifier Incrementally via Rebalancing (**CVPR2019**) [[paper](http://openaccess.thecvf.com/content_CVPR_2019/papers/Hou_Learning_a_Unified_Classifier_Incrementally_via_Rebalancing_CVPR_2019_paper.pdf)] [[code](https://github.com/hshustc/CVPR19_Incremental_Learning)] |
|
- <a name="todo"></a> Learning Without Memorizing (**CVPR2019**) [[paper](https://arxiv.org/pdf/1811.08051.pdf)] |
|
- <a name="todo"></a> Learning to Remember: A Synaptic Plasticity Driven Framework for Continual Learning (**CVPR2019**) [[paper](https://arxiv.org/abs/1904.03137)] |
|
- <a name="todo"></a> Task-Free Continual Learning (**CVPR2019**) [[paper](https://arxiv.org/pdf/1812.03596.pdf)] |
|
- <a name="todo"></a> Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting (**ICML2019**) [[paper](https://arxiv.org/abs/1904.00310)] |
|
- <a name="todo"></a> Efficient Lifelong Learning with A-GEM (**ICLR2019**) [[paper](https://openreview.net/forum?id=Hkf2_sC5FX)] [[code](https://github.com/facebookresearch/agem)] |
|
- <a name="todo"></a> Learning to Learn without Forgetting By Maximizing Transfer and Minimizing Interference (**ICLR2019**) [[paper](https://openreview.net/forum?id=B1gTShAct7)] [[code](https://github.com/mattriemer/mer)] |
|
- <a name="todo"></a> Overcoming Catastrophic Forgetting via Model Adaptation (**ICLR2019**) [[paper](https://openreview.net/forum?id=ryGvcoA5YX)] |
|
- <a name="todo"></a> A comprehensive, application-oriented study of catastrophic forgetting in DNNs (**ICLR2019**) [[paper](https://openreview.net/forum?id=BkloRs0qK7)] |