1982
Stephen Grossberg. How does a brain build a cognitive code? In Studies of mind and brain, pages 1–52. Springer, 1982.
1986
J. C. Schlimmer and D. H. Fisher. A case study of incremental concept induction. In AAAI, 1986.
1989
M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 24:109–165, 1989.
1990
R. Ratcliff. Connectionist models of recognition memory: Constraints imposed by learning and forgetting functions. Psychological Review, 97(2):285–308, 1990.
1995
S. Thrun. A lifelong learning perspective for mobile robot control. In V. Graefe (ed.), Intelligent Robots and Systems. Elsevier, 1995.
Anthony V. Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connect. Sci., 7:123–146, 1995.
S. Thrun and T. M. Mitchell. Lifelong robot learning. Robotics and Autonomous Systems, 15:25–46, 1995.
1996
S. Thrun. Is learning the n-th thing any easier than learning the first? In NIPS, 1996.
1997
Mark B Ring. Child: A first step towards continual learning. Machine Learning, 28(1):77–104, 1997.
1998
Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181–209. Springer, 1998.
Mark B. Ring. Child: A first step towards continual learning. In Learning to Learn, 1998.
1999
R. M. French. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4):128–135, 1999.
2000
G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In NIPS, 2000.
2001
R. Polikar, L. Upda, S. S. Upda, and V. Honavar. Learn++: An incremental learning algorithm for supervised neural networks. IEEE Trans. Systems, Man, and Cybernetics, Part C, 31(4):497–508, 2001.
2002
D. L. Silver and R. E. Mercer. The task rehearsal method of life-long learning: Overcoming impoverished data. In Conference of the Canadian Society for Computational Studies of Intelligence, pages 90–101. Springer, 2002.
2005
O.-M. Moe-Helgesen and H. Stranden. Catastrophic forgetting in neural networks. Technical report, Norwegian University of Science and Technology (NTNU), 2005.
2012
Guanyu Zhou, Kihyuk Sohn, and Honglak Lee. Online incremental feature learning with denoising autoencoders. In Artificial Intelligence and Statistics (AISTATS), 2012.
Abhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
2013
I. Kuzborskij, F. Orabona, and B. Caputo. From n to n + 1: Multiclass transfer incremental learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
D. L. Silver, Q. Yang, and L. Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pages 49–55. Citeseer, 2013.
Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In Proceedings of the International Conference on Machine Learning (ICML), 2013.
T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. PAMI, 35(11):2624–2637, 2013.
Rupesh K Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. In Advances in neural information processing systems, pages 2310–2318, 2013.
2014
M. Ristin, M. Guillaumin, J. Gall, and L. Van Gool. Incremental learning of NCM forests for large-scale image classification. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. In International Conference on Learning Representations (ICLR), 2014.
T. Xiao, J. Zhang, K. Yang, Y. Peng, and Z. Zhang. Error-driven incremental learning in deep convolutional neural network for large-scale image classification. In International Conference on Multimedia (ACM MM), 2014.
Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson. CNN features off-the-shelf: An astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813, 2014.
Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.
Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320–3328, 2014.
Zhiyuan Chen and Bing Liu. 2014. Topic Modeling using Topics from Many Domains, Lifelong Learning and Big Data. In ICML.
2015
A. Pentina and C. H. Lampert. Lifelong learning with non-iid tasks. In Advances in Neural Information Processing Systems, pages 1540–1548, 2015.
Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL. 750–756.
2016
√ Zhizhong Li, et al. Learning without forgetting. ECCV. 2016
√ Andrei A. Rusu, et al. Progressive neural networks. arXiv:1606.04671. 2016.
Sang-Woo Lee, Chung-Yeon Lee, Dong Hyun Kwak, Jiwon Kim, Jeonghee Kim, and Byoung-Tak Zhang. Dual-memory deep learning architectures for lifelong learning of everyday human behaviors. In Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1669–1675, 2016.
A. Gepperth and C. Karaoguz, “A bio-inspired incremental learning architecture for applied perceptual problems,” Cognitive Computation, vol. 8, no. 5, pp. 924–934, 2016.
Kieran Milan, Joel Veness, James Kirkpatrick, Michael Bowling, Anna Koop, and Demis Hassabis. The forget-me-not process. In NeurIPS, 2016.
David Isele, Mohammad Rostami, and Eric Eaton. Using task features for zero-shot knowledge transfer in lifelong learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI’16, pp. 1620–1626. AAAI Press, 2016. ISBN 978-1-57735-770-4.
Heechul Jung, Jeongwoo Ju, Minju Jung, and Junmo Kim. Less-forgetting learning in deep neural networks. arXiv preprint arXiv:1607.00122, 2016.
2017
√ Amal Rannen, et al. Encoder based lifelong learning. ICCV. 2017
√ Friedemann Zenke, et al. Continual Learning Through Synaptic Intelligence. ICML. 2017
√ David Lopez-Paz, et al. Gradient episodic memory for continual learning. NIPS. 2017.
Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2990–2999, 2017.
R. Aljundi, P. Chakravarty, and T. Tuytelaars, “Expert gate: Lifelong learning with a network of experts,” in CVPR, 2017, pp. 3366–3375.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, pp. 506–516, 2017.
Yuchun Fang, Zhengyan Ma, Zhaoxiang Zhang, Xu-Yao Zhang, and Xiang Bai. Dynamic multi-task learning with convolutional neural network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2017.
Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Growing a brain: Fine-tuning by increasing model capacity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2471–2480, 2017.
2018
√ Rahaf Aljundi, et al. Memory aware synapses: Learning what (not) to forget. ECCV. 2018.
√ Jaehong Yoon, et al. Lifelong learning with dynamically expandable networks. ICLR. 2018.
√ Joan Serra, et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task. ICML. 2018.
√ Ju Xu, et al. Reinforced continual learning. NeurIPS. 2018
Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. ICLR, 2018.
Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In Thirty-second AAAI conference on artificial intelligence, 2018.
Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.
Yen-Chang Hsu, Yen-Cheng Liu, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488, 2018.
Z. Chen and B. Liu, “Lifelong machine learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 12, no. 3, pp. 1–207, 2018.
S. Farquhar and Y. Gal, “Towards robust evaluations of continual learning,” arXiv preprint arXiv:1805.09733, 2018.
Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In ICML, 2018.
Christos Kaplanis, Murray Shanahan, and Claudia Clopath. Continual reinforcement learning with complex synapses. In ICML, 2018.
Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. In International Conference on Learning Representations, 2018.
Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, and Eric Eaton. Multi-agent distributed lifelong learning for collective knowledge acquisition. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2018.
Arun Mallya and Svetlana Lazebnik. Piggyback: Adding multiple tasks to a single, fixed network by learning to mask. arXiv preprint arXiv:1801.06519, 2018.
Sylvestre-Alvise Rebuffi, Hakan Bilen, and Andrea Vedaldi. Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8119–8127, 2018.
Massimiliano Mancini, et al. Adding new tasks to a single network with weight transformations using binary masks. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Rahaf Aljundi, Marcus Rohrbach, and Tinne Tuytelaars. Selfless sequential learning. In International Conference on Learning Representations (ICLR), 2018.
Ronald Kemker and Christopher Kanan. Fearnet: Brain-inspired model for incremental learning. In International Conference on Learning Representations (ICLR), 2018.
Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In Proceedings of the European Conference on Computer Vision (ECCV), pages 532–547, 2018.
Amir Rosenfeld and John K Tsotsos. 2018. Incremental learning through deep adaptation. IEEE transactions on pattern analysis and machine intelligence (2018).
Xu He and Herbert Jaeger. 2018. Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation. In ICLR.
Chenshen Wu, Luis Herranz, Xialei Liu, Joost van de Weijer, Bogdan Raducanu, et al. 2018. Memory replay GANs: Learning to generate new categories without forgetting. In NeurIPS.
Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured laplace approximations for overcoming catastrophic forgetting. In Advances in Neural Information Processing Systems, pp. 3738–3748, 2018.
Nicolas Y. Masse, Gregory D. Grant, and David J. Freedman. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the National Academy of Sciences of the United States of America, 115(44), 2018.
Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375, 2018.
David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pp. 3302–3309, 2018.
Alessandro Achille, Tom Eccles, Loic Matthey, Chris Burgess, Nicholas Watters, Alexander Lerchner, and Irina Higgins. Life-long disentangled representation learning with cross-domain latent homologies. In Advances in Neural Information Processing Systems 31 (NeurIPS-18), pp. 9873–9883, 2018.
Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, and Swarat Chaudhuri. Houdini: Lifelong learning as program synthesis. In Advances in Neural Information Processing Systems 31 (NeurIPS-18), pp. 8687–8698, 2018.
Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233–248, 2018.
Chen He, Ruiping Wang, Shiguang Shan, and Xilin Chen. Exemplar-supported generative reproduction for class incremental learning. In British Machine Vision Conference, 2018.
2019
√ Arslan Chaudhry, et al. Efficient lifelong learning with A-GEM. ICLR. 2019.
√ David Rolnick, et al. Experience replay for continual learning. NeurIPS. 2019
Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jahnichen, and Moin Nabi. Learning to remember: A synaptic plasticity driven framework for continual learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “Continual learning with tiny episodic memories,” arXiv preprint arXiv:1902.10486, 2019.
Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. arXiv preprint arXiv:1909.08383, 2019.
Cuong V Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan, and Stefano Soatto. Toward understanding catastrophic forgetting in continual learning. arXiv preprint arXiv:1908.01091, 2019.
Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. arXiv preprint arXiv:1910.07104, 2019.
G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, “Continual lifelong learning with neural networks: A review,” Neural Networks, 2019.
Benedikt Pfülb and Alexander Gepperth. A comprehensive, application-oriented study of catastrophic forgetting in DNNs. arXiv preprint arXiv:1905.08101, 2019.
Christos Kaplanis, Murray Shanahan, and Claudia Clopath. Policy consolidation for continual reinforcement learning. In ICML, 2019.
Rahaf Aljundi, et al. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems, 2019.
Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.
Ching-Yi Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen. Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems (NeurIPS), 2019.
Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. Uncertainty-based continual learning with adaptive regularization. In Advances in Neural Information Processing Systems (NeurIPS), pages 4394–4404, 2019.
★Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. Advances in Neural Information Processing Systems (NeurIPS) Workshop, 2019.
Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2019. Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation. In ICLR.
Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. 2019. Learning without Memorizing. In CVPR.
Khurram Javed and Martha White. 2019. Meta-Learning Representations for Continual Learning. In NeurIPS-2019.
Jathushan Rajasegaran, Munawar Hayat, Salman Khan, Fahad Shahbaz Khan, and Ling Shao. 2019. Random Path Selection for Incremental Learning. In NeurIPS.
Mohammad Rostami, Soheil Kolouri, and Praveen K. Pilly. 2019. Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay. In IJCAI.
Gido M. van de Ven and Andreas S. Tolias. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734, 2019.
Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, and Katherine A. Heller. Reconciling meta-learning and continual learning with online mixtures of tasks. In NeurIPS, 2019.
★Dushyant Rao, Francesco Visin, Andrei Rusu, Razvan Pascanu, Yee Whye Teh, and Raia Hadsell. Continual unsupervised representation learning. In Advances in Neural Information Processing Systems, pp. 7645–7655, 2019.
★Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Online continual learning with no task boundaries. arXiv preprint arXiv:1903.08671, 2019.
★Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh, and Razvan Pascanu. Task agnostic continual learning via meta learning. arXiv preprint arXiv:1906.05201, 2019.
Eden Belouadah and Adrian Popescu. IL2M: Class incremental learning with dual memory. In The IEEE International Conference on Computer Vision (ICCV), October 2019.
Yue Wu, et al. Large scale incremental learning. CVPR. 2019.
2020
Arslan Chaudhry, Albert Gordo, Puneet Kumar Dokania, Philip H. S. Torr, and David Lopez-Paz. Using hindsight to anchor past knowledge in continual learning. arXiv preprint arXiv:2002.08165, 2020.
Gobinda Saha, et al. Structured compression and sharing of representational space for continual learning. arXiv preprint arXiv:2001.08650, 2020.
Dong Yin, Mehrdad Farajtabar, and Ang Li. SOLA: Continual learning with second-order loss approximation. arXiv preprint arXiv:2006.10974, 2020.
★ Seyed-Iman Mirzadeh, Mehrdad Farajtabar, and Hassan Ghasemzadeh. Dropout as an implicit gating mechanism for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 232–233, 2020.
Matthew Wallingford, Aditya Kusupati, Keivan Alizadeh-Vahid, Aaron Walsman, Aniruddha Kembhavi, and Ali Farhadi. In the wild: From ML models to pragmatic ML systems. arXiv preprint arXiv:2007.02519, 2020.
Michalis K Titsias, Jonathan Schwarz, Alexander G de G Matthews, Razvan Pascanu, and Yee Whye Teh. Functional regularisation for continual learning with gaussian processes. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.
Yunhui Guo, Mingrui Liu, Tianbao Yang, and T. Rosing. Improved schemes for episodic memory-based lifelong learning. In Advances in Neural Information Processing Systems 33, 2020.
Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Mubarak Shah. 2020. iTAML: An Incremental Task-Agnostic Meta-learning Approach. In CVPR. 13588–13597.
Johannes von Oswald, Christian Henning, João Sacramento, and Benjamin F Grewe. 2020. Continual learning with hypernetworks. In ICLR.
★ Mitchell Wortsman, Vivek Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, Jason Yosinski, and Ali Farhadi. Supermasks in superposition. arXiv preprint arXiv:2006.14769, 2020.
Tyler L Hayes and Christopher Kanan. Lifelong machine learning with deep streaming linear discriminant analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 220–221, 2020.
Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D Bagdanov, Shangling Jui, and Joost van de Weijer. Generative feature replay for class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 226–227, 2020.
Ghada Sokar, et al. SpaceNet: Make free space for continual learning. Neurocomputing. 2020.
Sayna Ebrahimi, et al. Adversarial continual learning. ECCV. 2020.
Xiaoyu Tao, et al. Few-shot class-incremental learning. CVPR. 2020.
2021
√ Gobinda Saha, et al. Gradient Projection Memory for Continual Learning. ICLR 2021
√ Seyed Iman Mirzadeh, et al. Linear Mode Connectivity in Multitask and Continual Learning. ICLR 2021
√ Jorge A Mendez, et al. Lifelong Learning of Compositional Structures. ICLR. 2021
Kevin Lu, et al. Reset-Free Lifelong Learning with Skill-Space Planning. ICLR. 2021
- Reinforcement-learning focused; it studies continual reinforcement learning.
Kuilin Chen, et al. Incremental few-shot learning via vector quantization in deep embedded space. ICLR. 2021
- In this study, we propose a nonparametric method in deep embedded space to tackle incremental few-shot learning problems. The knowledge about the learned tasks is compressed into a small number of quantized reference vectors. The proposed method learns new tasks sequentially by adding more reference vectors to the model using few-shot samples in each novel task. (A minimal sketch of the reference-vector idea is shown below.)
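A minimal Python sketch of the quantized-reference-vector idea, assuming embeddings are already produced by some fixed backbone. The class name, the naive chunk-and-average quantization, and the nearest-neighbour rule are illustrative assumptions, not the paper's actual method.

```python
# Illustrative sketch only: per-class quantized reference vectors in an
# embedded space, grown incrementally as new few-shot tasks arrive.
import numpy as np

class ReferenceVectorMemory:
    def __init__(self):
        self.refs = []    # list of (dim,) reference vectors
        self.labels = []  # class label for each reference vector

    def add_task(self, embeddings, labels, refs_per_class=3):
        """Compress a new task's few-shot embeddings into a handful of
        reference vectors per class (naive chunk-and-average quantization)."""
        labels = np.asarray(labels)
        for c in np.unique(labels):
            class_emb = embeddings[labels == c]
            k = min(refs_per_class, len(class_emb))
            for chunk in np.array_split(class_emb, k):
                self.refs.append(chunk.mean(axis=0))
                self.labels.append(c)

    def predict(self, embeddings):
        """Classify each query by the label of its nearest reference vector."""
        refs = np.stack(self.refs)  # (n_refs, dim)
        d = ((embeddings[:, None, :] - refs[None, :, :]) ** 2).sum(-1)
        return np.asarray(self.labels)[d.argmin(axis=1)]
```

Because old reference vectors are never overwritten, earlier classes remain predictable, and memory grows only by a few vectors per new class.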
Vinay Venkatesh Ramasesh, et al. Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics. ICLR. 2021
- "Catastrophic forgetting" is a recurring challenge in developing versatile deep learning models. Despite its ubiquity, there is limited understanding of its connection to a neural network's (hidden) representations and task semantics. In this paper, we address this important knowledge gap. Through quantitative analysis of neural representations, we find that deeper layers are disproportionately the source of forgetting, and that sequential training erases the representation subspaces of earlier tasks. Methods that mitigate forgetting consolidate these deeper layers but differ in their subtler effects: some increase feature reuse, while others store task representations orthogonally, preventing interference. These insights also enable the development of an analytic argument and empirical picture relating forgetting to task semantic similarity, where we find that maximal forgetting occurs for task sequences with intermediate similarity. (A sketch of this kind of per-layer analysis is shown below.)
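The paper's layer-wise analysis can be approximated with linear CKA (Kornblith et al., 2019), comparing a layer's activations on task-1 data before and after training on task 2. The activation dictionaries below are assumed inputs, not part of any released code.

```python
# Sketch: quantify per-layer representation drift with linear CKA.
# Low CKA at deeper layers would match the finding that deeper layers
# change, and therefore forget, the most.
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between activation matrices
    X, Y of shape (n_examples, n_features)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return hsic / (norm_x * norm_y)

# acts_before[name] / acts_after[name]: task-1 activations of layer `name`,
# captured before and after training on task 2 (assumed precomputed):
# for name in acts_before:
#     print(name, linear_cka(acts_before[name], acts_after[name]))
```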
Binh Tang, et al. Graph-Based Continual Learning. ICLR. 2021
- Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting. (A rough sketch of a graph-augmented memory is shown below.)
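A rough sketch of a graph-augmented episodic memory, under the simplifying assumptions that edges are seeded from feature similarity rather than learned, and that replay returns the sampled slots together with their similarity sub-graph so a loss term can penalize drift in those pairwise relations. All names here are made up for illustration.

```python
# Illustrative only: episodic memory slots plus a pairwise-similarity graph.
import numpy as np

class GraphEpisodicMemory:
    def __init__(self, n_slots, dim):
        self.x = np.zeros((n_slots, dim))        # stored samples
        self.y = np.zeros(n_slots, dtype=int)    # stored labels
        self.adj = np.zeros((n_slots, n_slots))  # pairwise similarities
        self.used = 0

    def write(self, x, y):
        """Ring-buffer write; edges seeded from cosine similarity (a
        learnable graph would refine them with the training objective)."""
        i = self.used % len(self.x)
        self.x[i], self.y[i] = x, y
        norms = np.linalg.norm(self.x, axis=1) * np.linalg.norm(x) + 1e-8
        sims = (self.x @ x) / norms
        self.adj[i, :] = sims
        self.adj[:, i] = sims
        self.used += 1

    def replay_batch(self, k):
        """Sample slots and return their similarity sub-graph, which a
        rehearsal loss can use to preserve pairwise relations."""
        n = min(self.used, len(self.x))
        idx = np.random.choice(n, size=min(k, n), replace=False)
        return self.x[idx], self.y[idx], self.adj[np.ix_(idx, idx)]
```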
Benjamin Ehret, et al. Continual learning in recurrent neural networks. ICLR. 2021
Noel Loo, et al. Generalized Variational Continual Learning. ICLR. 2021
Quang Pham, et al. Contextual Transformation Networks for Online Continual Learning. ICLR. 2021
Tom Veniat, et al. Efficient Continual Learning with Modular Networks and Task-Driven Priors. ICLR. 2021
Sungmin Cha, et al. CPR: Classifier-Projection Regularization for Continual Learning. ICLR. 2021
Ali Ayub, et al. EEC: Learning to Encode and Regenerate Images for Continual Learning. ICLR. 2021