Reading List: Papers on Continual Learning

1982

Stephen Grossberg. How does a brain build a cognitive code? In Studies of mind and brain, pages 1–52. Springer, 1982.

1986

J. C. Schlimmer and D. H. Fisher. A case study of incremental concept induction. In AAAI, 1986.

1989

M. McCloskey and N. J. Cohen. Catastrophic interference in connectionist networks: The sequential learning problem. Psychology of learning and motivation, 24:109–165, 1989.

1990

R. Ratcliff. Connectionist models of recognition memory: constraints imposed by learning and forgetting functions. Psychological Review, 97(2):285–308, 1990.

1995

S. Thrun. A lifelong learning perspective for mobile robot control. In V. Graefe (ed.), Intelligent Robots and Systems. Elsevier, 1995.

Anthony V. Robins. Catastrophic forgetting, rehearsal and pseudorehearsal. Connect. Sci., 7:123–146, 1995.

Thrun, S. and Mitchell, T. Lifelong robot learning. Robotics and Autonomous Systems, 15:25–46, 1995.

1996

S. Thrun. Is learning the n-th thing any easier than learning the first? In NIPS, 1996.

1997

Mark B Ring. CHILD: A first step towards continual learning. Machine Learning, 28(1):77–104, 1997.

1998

Sebastian Thrun. Lifelong learning algorithms. In Learning to learn, pp. 181–209. Springer, 1998.

Mark B. Ring. CHILD: A first step towards continual learning. In Learning to Learn, 1998.

1999

French, R.M.: Catastrophic forgetting in connectionist networks. Trends in cognitive sciences 3(4), 128–135 (1999)

2000

G. Cauwenberghs and T. Poggio. Incremental and decremental support vector machine learning. In NIPS, 2000.

2001

R. Polikar, L. Upda, S. S. Upda, and V. Honavar. Learn++: An incremental learning algorithm for supervised neural networks. IEEE Trans. Systems, Man, and Cybernetics, Part C, 31(4):497–508, 2001.

2002

D. L. Silver and R. E. Mercer, “The task rehearsal method of life-long learning: Overcoming impoverished data,” in Conference of the Canadian Society for Computational Studies of Intelligence. Springer, 2002, pp. 90–101.

2005

O.-M. Moe-Helgesen and H. Stranden. Catastrophic forgetting in neural networks. Technical report, Norwegian University of Science and Technology (NTNU), 2005.

2012

Zhou, Guanyu, Kihyuk Sohn, and Honglak Lee. “Online incremental feature learning with denoising autoencoders.” Artificial intelligence and statistics. 2012.

Abhishek Kumar and Hal Daume III. Learning task grouping and overlap in multi-task learning. In Proceedings of the International Conference on Machine Learning (ICML), 2012.

2013

I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio, “An empirical investigation of catastrophic forgetting in gradient-based neural networks,” arXiv preprint arXiv:1312.6211, 2013.

I. Kuzborskij, F. Orabona, and B. Caputo. From n to n + 1: Multiclass transfer incremental learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

D. L. Silver, Q. Yang, and L. Li. Lifelong machine learning systems: Beyond learning algorithms. In AAAI Spring Symposium: Lifelong Machine Learning, pages 49–55. Citeseer, 2013.

Srivastava, Rupesh K, Masci, Jonathan, Kazerounian, Sohrob, Gomez, Faustino, and Schmidhuber, Juergen. Compete to Compute. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 2310–2318. Curran Associates, Inc., 2013.

Paul Ruvolo and Eric Eaton. ELLA: An efficient lifelong learning algorithm. In Proceedings of the International Conference on Machine Learning (ICML), 2013.

T. Mensink, J. Verbeek, F. Perronnin, and G. Csurka. Distance-based image classification: Generalizing to new classes at near-zero cost. PAMI, 35(11):2624–2637, 2013.

2014

M. Ristin, M. Guillaumin, J. Gall, and L. Van Gool. Incremental learning of NCM forests for large-scale image classification. In Conference on Computer Vision and Pattern Recognition (CVPR), 2014.

I. J. Goodfellow, M. Mirza, D. Xiao, A. Courville, and Y. Bengio. An empirical investigation of catastrophic forgetting in gradient-based neural networks. In International Conference on Learning Representations (ICLR), 2014.

T. Xiao, J. Zhang, K. Yang, Y. Peng, and Z. Zhang. Error-driven incremental learning in deep convolutional neural network for large-scale image classification. In International Conference on Multimedia (ACM MM), 2014.

Razavian, Ali Sharif, Azizpour, Hossein, Sullivan, Josephine, and Carlsson, Stefan. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 806–813, 2014.

Donahue, Jeff, Jia, Yangqing, Vinyals, Oriol, Hoffman, Judy, Zhang, Ning, Tzeng, Eric, and Darrell, Trevor. DeCAF: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning (ICML), 2014.

Yosinski, Jason, Clune, Jeff, Bengio, Yoshua, and Lipson, Hod. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320–3328, 2014.

Zhiyuan Chen and Bing Liu. 2014. Topic Modeling using Topics from Many Domains, Lifelong Learning and Big Data. In ICML.

2015

A. Pentina and C. H. Lampert. Lifelong learning with non-iid tasks. In Advances in Neural Information Processing Systems, pages 1540–1548, 2015.

Zhiyuan Chen, Nianzu Ma, and Bing Liu. 2015. Lifelong learning for sentiment classification. In ACL. 750–756.

2016

Zhizhong Li, et al. “Learning without forgetting.” ECCV. 2016

Andrei A. Rusu, et al. “Progressive neural networks.” arXiv:1606.04671. 2016.

Sang-Woo Lee, Chung-Yeon Lee, Dong Hyun Kwak, Jiwon Kim, Jeonghee Kim, and Byoung-Tak Zhang. Dual-memory deep learning architectures for lifelong learning of everyday human behaviors. In Twenty-Fifth International Joint Conference on Artificial Intelligence, pages 1669–1675, 2016.

A. Gepperth and C. Karaoguz, “A bio-inspired incremental learning architecture for applied perceptual problems,” Cognitive Computation, vol. 8, no. 5, pp. 924–934, 2016.

Kieran Milan, Joel Veness, James Kirkpatrick, Michael Bowling, Anna Koop, and Demis Hassabis. The forget-me-not process. In NeurIPS, 2016.

David Isele, Mohammad Rostami, and Eric Eaton. Using task features for zero-shot knowledge transfer in lifelong learning. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI’16, pp. 1620–1626. AAAI Press, 2016. ISBN 978-1-57735-770-4.

Jung, Heechul, Ju, Jeongwoo, Jung, Minju, and Kim, Junmo. Less-forgetting Learning in Deep Neural Networks. arXiv:1607.00122 [cs], July 2016. arXiv: 1607.00122.

2017

Sylvestre-Alvise Rebuffi, et al. iCaRL: Incremental classifier and representation learning. CVPR. 2017

Amal Rannen, et al. Encoder based lifelong learning. ICCV. 2017

Konstantin Shmelkov, et al. Incremental learning of object detectors without catastrophic forgetting. ICCV. 2017

Friedemann Zenke, et al. Continual Learning Through Synaptic Intelligence. ICML. 2017

David Lopez-Paz, et al.  “Gradient episodic memory for continual learning.” NIPS. 2017.

Sang-Woo Lee, et al. “Overcoming catastrophic forgetting by incremental moment matching.” NIPS. 2017.

James Kirkpatrick, et al. “Overcoming catastrophic forgetting in neural networks.” Proceedings of the National Academy of Sciences of the United States of America. 2017

Chrisantha Fernando, et al. “PathNet: Evolution channels gradient descent in super neural networks”. arXiv:1701.08734. 2017.

Hanul Shin, Jung Kwon Lee, Jaehong Kim, and Jiwon Kim. Continual learning with deep generative replay. In Advances in Neural Information Processing Systems, pp. 2990–2999, 2017.

R. Aljundi, P. Chakravarty, and T. Tuytelaars, “Expert gate: Lifelong learning with a network of experts,” in CVPR, 2017, pp. 3366–3375.

Rebuffi, S.-A., Bilen, H., and Vedaldi, A. Learning multiple visual domains with residual adapters. In Advances in Neural Information Processing Systems, pp. 506–516, 2017

Yuchun Fang, Zhengyan Ma, Zhaoxiang Zhang, Xu-Yao Zhang, and Xiang Bai. Dynamic multi-task learning with convolutional neural network. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2017.

Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Growing a brain: Fine-tuning by increasing model capacity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2471–2480, 2017.

2018

Arun Mallya, et al. “PackNet: Adding multiple tasks to a single network by iterative pruning.” CVPR. 2018.

Rahaf Aljundi, et al. Memory aware synapses: Learning what (not) to forget. ECCV. 2018.

Jaehong Yoon, et al. Lifelong learning with dynamically expandable networks. ICLR. 2018.

Joan Serra, et al. “Overcoming Catastrophic Forgetting with Hard Attention to the Task”. ICML. 2018.

Ju Xu, et al. Reinforced continual learning. NIPS. 2018

Cuong V Nguyen, Yingzhen Li, Thang D Bui, and Richard E Turner. Variational continual learning. ICLR, 2018.

Ronald Kemker, Marc McClure, Angelina Abitino, Tyler L Hayes, and Christopher Kanan. Measuring catastrophic forgetting in neural networks. In Thirty-second AAAI conference on artificial intelligence, 2018.

Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, and Geoffrey J Gordon. An empirical study of example forgetting during deep neural network learning. arXiv preprint arXiv:1812.05159, 2018.

Yen-Chang Hsu, Yen-Cheng Liu, and Zsolt Kira. Re-evaluating continual learning scenarios: A categorization and case for strong baselines. arXiv preprint arXiv:1810.12488, 2018.

Z. Chen and B. Liu, “Lifelong machine learning,” Synthesis Lectures on Artificial Intelligence and Machine Learning, vol. 12, no. 3, pp. 1–207, 2018.

S. Farquhar and Y. Gal, “Towards robust evaluations of continual learning,” arXiv preprint arXiv:1805.09733, 2018.

Jonathan Schwarz, Jelena Luketina, Wojciech M Czarnecki, Agnieszka Grabska-Barwinska, Yee Whye Teh, Razvan Pascanu, and Raia Hadsell. Progress & compress: A scalable framework for continual learning. In ICML, 2018.

Christos Kaplanis, Murray Shanahan, and Claudia Clopath. Continual reinforcement learning with complex synapses. In ICML, 2018.

David Isele and Akansel Cosgun. Selective experience replay for lifelong learning. In AAAI, 2018.

Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. Routing networks: Adaptive selection of non-linear functions for multi-task learning. In International Conference on Learning Representations, 2018.

Arslan Chaudhry, Puneet K Dokania, Thalaiyasingam Ajanthan, and Philip HS Torr. Riemannian walk for incremental learning: Understanding forgetting and intransigence. In ECCV, 2018.

Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, and Eric Eaton. Multi-agent distributed lifelong learning for collective knowledge acquisition. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. International Foundation for Autonomous Agents and Multiagent Systems, 2018.

Mallya, A. and Lazebnik, S. Piggyback: Adding multiple tasks to a single, fixed network by learning to mask. arXiv preprint arXiv:1801.06519, 2018.

Rebuffi, S.-A., Bilen, H., and Vedaldi, A. Efficient parametrization of multi-domain deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8119–8127, 2018.

Mancini, Massimiliano, et al. “Adding new tasks to a single network with weight transformations using binary masks.” Proceedings of the European Conference on Computer Vision (ECCV). 2018.

Rahaf Aljundi, Marcus Rohrbach, and Tinne Tuytelaars. Selfless sequential learning. In International Conference on Learning Representations (ICLR), 2018.

Ronald Kemker and Christopher Kanan. FearNet: Brain-inspired model for incremental learning. In International Conference on Learning Representations (ICLR), 2018.

Amir Rosenfeld and John K Tsotsos. 2018. Incremental learning through deep adaptation. IEEE transactions on pattern analysis and machine intelligence (2018).

Xu He and Herbert Jaeger. 2018. Overcoming Catastrophic Interference using Conceptor-Aided Backpropagation. In ICLR.

Nicolas Y. Masse, Gregory D. Grant, and David J. Freedman. 2018. Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proc. Natl. Acad. Sci. U.S.A. (2018).

Chenshen Wu, Luis Herranz, Xialei Liu, Joost van de Weijer, Bogdan Raducanu, et al. 2018. Memory replay GANs: Learning to generate new categories without forgetting. In NIPS.

Hippolyt Ritter, Aleksandar Botev, and David Barber. Online structured Laplace approximations for overcoming catastrophic forgetting. In Advances in Neural Information Processing Systems, pp. 3738–3748, 2018.

Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4367–4375, 2018.

Alessandro Achille, Tom Eccles, Loic Matthey, Chris Burgess, Nicholas Watters, Alexander Lerchner, and Irina Higgins. Life-long disentangled representation learning with cross-domain latent homologies. In Advances in Neural Information Processing Systems 31 (NeurIPS-18), pp. 9873–9883, 2018.

Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, and Swarat Chaudhuri. Houdini: Lifelong learning as program synthesis. In Advances in Neural Information Processing Systems 31 (NeurIPS-18), pp. 8687–8698, 2018.

Francisco M. Castro, Manuel J. Marín-Jiménez, Nicolás Guil, Cordelia Schmid, and Karteek Alahari. End-to-end incremental learning. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 233–248, 2018.

Chen He, Ruiping Wang, Shiguang Shan, and Xilin Chen. Exemplar-supported generative reproduction for class incremental learning. In British Machine Vision Conference, 2018.

2019

Arslan Chaudhry, et al. Efficient lifelong learning with A-GEM. ICLR. 2019.

Xilai Li, et al. Learn to Grow: A Continual Structure Learning Framework for Overcoming Catastrophic Forgetting. ICML. 2019

David Rolnick, et al. Experience replay for continual learning. NIPS. 2019

Oleksiy Ostapenko, Mihai Puscas, Tassilo Klein, Patrick Jahnichen, and Moin Nabi. Learning to remember: A synaptic plasticity driven framework for continual learning. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P. K. Dokania, P. H. Torr, and M. Ranzato, “Continual learning with tiny episodic memories,” arXiv preprint arXiv:1902.10486, 2019.

Arslan Chaudhry, Albert Gordo, Puneet Kumar Dokania, Philip H. S. Torr, and David Lopez-Paz. Using hindsight to anchor past knowledge in continual learning. ArXiv, abs/2002.08165, 2019.

Michalis K Titsias, Jonathan Schwarz, Alexander G de G Matthews, Razvan Pascanu, and Yee Whye Teh. Functional regularisation for continual learning using gaussian processes. arXiv preprint arXiv:1901.11356, 2019.

Matthias De Lange, Rahaf Aljundi, Marc Masana, Sarah Parisot, Xu Jia, Aleš Leonardis, Gregory G. Slabaugh, and Tinne Tuytelaars. A continual learning survey: Defying forgetting in classification tasks. ArXiv, abs/1909.08383, 2019.

Cuong V Nguyen, Alessandro Achille, Michael Lam, Tal Hassner, Vijay Mahadevan, and Stefano Soatto. Toward understanding catastrophic forgetting in continual learning. arXiv preprint arXiv:1908.01091, 2019.

Mehrdad Farajtabar, Navid Azizan, Alex Mott, and Ang Li. Orthogonal gradient descent for continual learning. ArXiv, abs/1910.07104, 2019.

G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, “Continual lifelong learning with neural networks: A review,” Neural Networks, 2019.

Pfülb, Benedikt, and Alexander Gepperth. “A comprehensive, application-oriented study of catastrophic forgetting in dnns.” arXiv preprint arXiv:1905.08101 (2019).

Christos Kaplanis, Murray Shanahan, and Claudia Clopath. Policy consolidation for continual reinforcement learning. In ICML, 2019.

Aljundi, Rahaf, et al. “Gradient based sample selection for online continual learning.” Advances in Neural Information Processing Systems. 2019.

Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Ching-Yi Hung, Cheng-Hao Tu, Cheng-En Wu, Chien-Hung Chen, Yi-Ming Chan, and Chu-Song Chen. Compacting, picking and growing for unforgetting continual learning. In Advances in Neural Information Processing Systems (NIPS), 2019.

Hongjoon Ahn, Sungmin Cha, Donggyu Lee, and Taesup Moon. Uncertainty-based continual learning with adaptive regularization. In Advances in Neural Information Processing Systems (NeurIPS), pages 4394–4404, 2019.

Siavash Golkar, Michael Kagan, and Kyunghyun Cho. Continual learning via neural pruning. Advances in Neural Information Processing Systems (NeurIPS) Workshop, 2019.

German Ignacio Parisi, Ronald Kemker, Jose L. Part, Christopher Kanan, and Stefan Wermter. 2019. Continual lifelong learning with neural networks: A review. Neural Networks (2019).

Wenpeng Hu, Zhou Lin, Bing Liu, Chongyang Tao, Zhengwei Tao, Jinwen Ma, Dongyan Zhao, and Rui Yan. 2019. Overcoming Catastrophic Forgetting for Continual Learning via Model Adaptation. In ICLR.

Prithviraj Dhar, Rajat Vikram Singh, Kuan-Chuan Peng, Ziyan Wu, and Rama Chellappa. 2019. Learning without Memorizing. In CVPR.

Khurram Javed and Martha White. 2019. Meta-Learning Representations for Continual Learning. In NeurIPS-2019.

Jathushan Rajasegaran, Munawar Hayat, Salman Khan, Fahad Shahbaz, and Khan Ling Shao. 2019. Random Path Selection for Incremental Learning. In NeurIPS.

Mohammad Rostami, Soheil Kolouri, and Praveen K. Pilly. 2019. Complementary Learning for Overcoming Catastrophic Forgetting Using Experience Replay. In IJCAI.

van de Ven, Gido M., and Andreas S. Tolias. “Three scenarios for continual learning.” arXiv preprint arXiv:1904.07734 (2019).

Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, and Katherine A. Heller. Reconciling meta-learning and continual learning with online mixtures of tasks. In NeurIPS, 2019.

Dushyant Rao, Francesco Visin, Andrei Rusu, Razvan Pascanu, Yee Whye Teh, and Raia Hadsell. Continual unsupervised representation learning. In Advances in Neural Information Processing Systems, pp. 7645–7655, 2019.

Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Online continual learning with no task boundaries. arXiv preprint arXiv:1903.08671, 2019.

Xu He, Jakub Sygnowski, Alexandre Galashov, Andrei A Rusu, Yee Whye Teh, and Razvan Pascanu. Task agnostic continual learning via meta learning. arXiv preprint arXiv:1906.05201, 2019.

Eden Belouadah and Adrian Popescu. IL2M: Class incremental learning with dual memory. In The IEEE International Conference on Computer Vision (ICCV), October 2019.

Yue Wu, et al. Large scale incremental learning. CVPR. 2019.

2020

Davide Abati, et al. “Conditional Channel Gated Networks for Task-Aware Continual Learning.” CVPR. 2020.

Jaehong Yoon, et al. Scalable and order-robust continual learning with additive parameter decomposition. ICLR. 2020.

Sangwon Jung, et al. Continual Learning with Node-Importance based Adaptive Group Sparse Regularization. NIPS. 2020.

Zixuan Ke, et al. “Continual Learning of a Mixed Sequence of Similar and Dissimilar Tasks.” NIPS. 2020.

Saha, Gobinda, et al. “Structured Compression and Sharing of Representational Space for Continual Learning.” arXiv preprint arXiv:2001.08650 (2020).

Dong Yin, Mehrdad Farajtabar, and Ang Li. SOLA: Continual learning with second-order loss approximation. arXiv preprint arXiv:2006.10974, 2020.

Seyed-Iman Mirzadeh, Mehrdad Farajtabar, and Hassan Ghasemzadeh. Dropout as an implicit gating mechanism for continual learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 232–233, 2020.

Matthew Wallingford, Aditya Kusupati, Keivan Alizadeh-Vahid, Aaron Walsman, Aniruddha Kembhavi, and Ali Farhadi. In the wild: From ML models to pragmatic ML systems. ArXiv, abs/2007.02519, 2020.

Michalis K Titsias, Jonathan Schwarz, Alexander G de G Matthews, Razvan Pascanu, and Yee Whye Teh. Functional regularisation for continual learning with gaussian processes. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Yunhui Guo, Mingrui Liu, Tianbao Yang, and T. Rosing. Improved schemes for episodic memory-based lifelong learning. In Advances in Neural Information Processing Systems 33, 2020.

Jathushan Rajasegaran, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, and Mubarak Shah. 2020. iTAML: An Incremental Task-Agnostic Meta-learning Approach. In CVPR. 13588–13597.

Johannes von Oswald, Christian Henning, João Sacramento, and Benjamin F Grewe. 2020. Continual learning with hypernetworks. In ICLR.

Mitchell Wortsman, V. Ramanujan, Rosanne Liu, Aniruddha Kembhavi, Mohammad Rastegari, J. Yosinski, and Ali Farhadi. Supermasks in superposition. ArXiv, abs/2006.14769, 2020.

Tyler L Hayes and Christopher Kanan. Lifelong machine learning with deep streaming linear discriminant analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 220–221, 2020.

Xialei Liu, Chenshen Wu, Mikel Menta, Luis Herranz, Bogdan Raducanu, Andrew D Bagdanov, Shangling Jui, and Joost van de Weijer. Generative feature replay for class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 226–227, 2020.

Ghada Sokar, et al. SpaceNet: Make free space for continual learning. Neurocomputing. 2020.

Sayna Ebrahimi, et al. Adversarial continual learning. ECCV. 2020.

Xiaoyu Tao, et al. Few-shot class-incremental learning. CVPR. 2020.

2021

Gobinda Saha, et al. Gradient Projection Memory for Continual Learning. ICLR 2021

Seyed Iman Mirzadeh, et al. Linear Mode Connectivity in Multitask and Continual Learning. ICLR 2021

Jorge A Mendez, et al. Lifelong Learning of Compositional Structures. ICLR. 2021

Tianlong Chen, et al. Long Live the Lottery: The Existence of Winning Tickets in Lifelong Learning. ICLR. 2021

Kevin Lu, et al. Reset-Free Lifelong Learning with Skill-Space Planning. ICLR. 2021

  • Reinforcement-learning related: this paper studies continual reinforcement learning.

Sayna Ebrahimi, et al. Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting. ICLR. 2021

Kuilin Chen, et al. Incremental few-shot learning via vector quantization in deep embedded space. ICLR. 2021

  • In this study, we propose a nonparametric method in deep embedded space to tackle incremental few-shot learning problems. The knowledge about the learned tasks is compressed into a small number of quantized reference vectors. The proposed method learns new tasks sequentially by adding more reference vectors to the model using few-shot samples in each novel task.

Vinay Venkatesh Ramasesh, et al. Anatomy of Catastrophic Forgetting: Hidden Representations and Task Semantics. ICLR. 2021

  • Catastrophic forgetting is a recurring challenge in developing general-purpose deep learning models. Despite its ubiquity, our understanding of how it relates to a neural network's (hidden) representations and to task semantics is limited. In this paper, we address this important knowledge gap. Through quantitative analysis of neural representations, we find that deeper layers are disproportionately responsible for forgetting, and that sequential training erases the representation subspaces of earlier tasks. Methods that mitigate forgetting stabilize these deeper layers, but differ in their finer effects: some increase feature reuse, while others store task representations orthogonally, preventing interference. These insights also enable the development of an analytic argument and empirical picture relating forgetting to task semantic similarity, where we find that maximal forgetting occurs for task sequences with intermediate similarity.

Binh Tang, et al. Graph-Based Continual Learning. ICLR. 2021

  • Rehearsal approaches alleviate the problem by maintaining and replaying a small episodic memory of previous samples, often implemented as an array of independent memory slots. In this work, we propose to augment such an array with a learnable random graph that captures pairwise similarities between its samples, and use it not only to learn new tasks but also to guard against forgetting.

Benjamin Ehret, et al. Continual learning in recurrent neural networks. ICLR. 2021

Generalized Variational Continual Learning. ICLR. 2021

Contextual Transformation Networks for Online Continual Learning. ICLR. 2021

Efficient Continual Learning with Modular Networks and Task-Driven Priors. ICLR. 2021

CPR: Classifier-Projection Regularization for Continual Learning. ICLR. 2021

EEC: Learning to Encode and Regenerate Images for Continual Learning. ICLR. 2021

SVN Quick Start

References:

  1. SVN tutorial - 菜鸟教程 (Runoob)
  2. Notes on the svn authz configuration file
  3. Detailed tutorial on hosting multiple repositories with an SVN server on Ubuntu 16.04
  4. Installing and using SVN on Ubuntu
  5. Master SVN commands in 3 minutes: common SVN commands

Introduction to SVN

Subversion (SVN) is an open-source version control system: it manages data that changes over time. The data lives in a central repository, which behaves much like an ordinary file server except that it remembers every change ever made to the files. That lets you restore files to older versions or browse their change history.

Basic SVN concepts

  • Repository: the central place where the source code is stored
  • Checkout: when you do not yet have the source, you check out a copy from the repository
  • Commit: after modifying the code, you commit your changes back to the repository
  • Update: once you have a working copy, updating syncs it with the repository, so your copy picks up the latest changes

Day-to-day development therefore looks like this (assuming you checked out some days ago and have been working since): update (get the latest code) → make your changes and debug them → commit (everyone can now see your changes).

What if two developers modify the same file? SVN can merge the two sets of changes: it manages source code line by line, so as long as the two developers did not touch the same line, SVN merges both edits automatically. If they did change the same line, SVN marks the file as conflicted and the conflict must be resolved by hand.
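A minimal sketch of that daily cycle, including what a conflict looks like (the file name and commit message are hypothetical):

# get the latest changes before starting work
svn update
# ... edit files, build, test ...
# if someone else committed to the same lines first, update marks the file
# as conflicted; conflicted files show up in svn status with a leading "C"
svn status
# keep your local version (other choices: --accept=theirs-full, or edit by hand)
svn resolve --accept=working main.c
# publish your change
svn commit -m "Fix off-by-one in parser"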

Installing SVN on Ubuntu

apt install subversion

Verify the installation with svn --version.

SVN in Practice

(Server side) Creating the repositories

A repository is the central place that stores all of the developers' work. It holds not only the files themselves but also the full history of changes to every file.

The create operation sets up a new repository. In most cases it is performed only once. When you create a repository, the version control system asks for some identifying information, such as where to create it and what to name it.

Creating the repositories with the svn command-line tools

# create the repository directories
mkdir -p /opt/svn/example_repo_1
mkdir -p /opt/svn/example_repo_2
# create the repositories with svnadmin
svnadmin create /opt/svn/example_repo_1
svnadmin create /opt/svn/example_repo_2

To simplify administration, keep the passwords and permission settings for all repositories in one shared set of files. The steps are as follows.

Copy the authz and passwd files from the conf folder of example_repo_1 to the svn root directory (/opt/svn/).
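The copy itself, using the layout created above:

cp /opt/svn/example_repo_1/conf/authz /opt/svn/
cp /opt/svn/example_repo_1/conf/passwd /opt/svn/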

The svnserve.conf service configuration file

The service configuration file lives in each repository at conf/svnserve.conf and consists of a single [general] section.

In each repository directory, edit conf/svnserve.conf, changing

# anon-access = read
# auth-access = write
# password-db = passwd
# authz-db = authz

to:

anon-access = none
auth-access = write
password-db = ../../passwd
authz-db = ../../authz

  • anon-access: access for unauthenticated (anonymous) users. One of "write", "read", or "none": "write" is read-write, "read" is read-only, "none" is no access. Default: read
  • auth-access: access for authenticated users. Same values as anon-access. Default: write
  • authz-db: the permissions file, which enables path-based access control. Unless an absolute path is given, the location is relative to the conf directory. Default: authz
  • realm: the repository's authentication realm, i.e. the realm name shown at login. If two repositories have the same realm, it is recommended that they share the same username/password file. Default: a UUID (Universally Unique Identifier).

The passwd file (usernames and passwords)

The username/password file is named by the password-db option in svnserve.conf and defaults to passwd in the conf directory; above, however, we pointed the service configuration at the passwd file in the svn root instead. The file consists of a single [users] section.

Each line of the [users] section has the form <username> = <password>; note the space on each side of the =.

Edit the passwd file in the svn root:

[users]
admin = admin
writer = writer
reader = reader
test = test

The authz permissions file

The permissions file is named by the authz-db option in svnserve.conf and defaults to authz in the conf directory; above, however, we pointed the service configuration at the authz file in the svn root instead. The file consists of one [groups] section plus any number of repository-path sections.

Lines in the [groups] section have the form <group> = <user list>:

[groups]
g_admin = admin
g_writer = writer
g_reader = reader
g_test = test

Repository-path sections are named [<repository>:<path>]:

# under the svn root, only g_admin has read-write access;
# all other groups have no access at all
[/]
@g_admin = rw
* =

# at the root of example_repo_1, g_admin and g_writer have read-write
# access; everyone else is read-only
[example_repo_1:/]
@g_admin = rw
@g_writer = rw
* = r

# at the root of example_repo_2, g_admin has read-write access;
# everyone else is read-only
[example_repo_2:/]
@g_admin = rw
* = r

# in the write directory of example_repo_2, g_admin and g_writer have
# read-write access, g_reader is read-only, and all other groups
# (here: g_test) have no access
[example_repo_2:/write]
@g_admin = rw
@g_writer = rw
@g_reader = r
* =

Start the SVN service:

# start the service with svnserve
svnserve --listen-port=8899 -d -r /opt/svn/

  • --listen-port: the port to listen on; without it, svnserve defaults to port 3690
  • -d runs svnserve as a daemon, and -r sets the root directory against which repository URLs are resolved

Check that the service started with ps -ef | grep svn or netstat -antp | grep svnserve.

To stop the service, use pkill svnserve.

(Client side) Checking out

The checkout operation creates a working copy from the repository. A working copy is a developer's private workspace, where changes are made before being committed back to the repository.

Checking out clones the repository to the local machine:

cd ~
mkdir svn_client
cd svn_client

Then check out:

svn checkout svn://localhost:8899/example_repo_1 --username=admin
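From here, a first commit could look like the following (the file name and message are made up; svn prompts for the password defined in passwd):

cd example_repo_1
echo "hello svn" > readme.txt
svn add readme.txt
svn commit -m "Add readme"
svn update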

Switching Ubuntu package mirrors to mirrors in China

References:

  1. Configuring mirror sources on Ubuntu 18.04

Switching the Ubuntu 16.04 mirrors to a mirror in China

Back up the current source list

sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup

Replace the source list

sudo vi /etc/apt/sources.list

Delete the existing contents, paste in the Alibaba Cloud list below, then save and close.

Alibaba Cloud mirror (阿里源)

deb http://mirrors.aliyun.com/ubuntu/ xenial main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main

deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main

deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe

deb http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security universe

Refresh the package index

sudo apt-get update
sudo apt-get -f install
sudo apt-get upgrade

In order, these three commands:

  • refresh the package index from the new mirror
  • fix broken packages: try to remove packages in a bad state and reinstall correct versions
  • upgrade the installed packages
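Instead of editing sources.list by hand, a sed one-liner can rewrite the stock entries in place. This is a sketch that assumes the default Ubuntu host names (archive.ubuntu.com, cn.archive.ubuntu.com, security.ubuntu.com); check the result before updating:

# rewrite the default mirror hosts to mirrors.aliyun.com (keeps a .bak backup)
sudo sed -i.bak -E 's@https?://([a-z]+\.)?(archive|security)\.ubuntu\.com@http://mirrors.aliyun.com@g' /etc/apt/sources.list
sudo apt-get update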

Other common mirrors

Xidian University mirror (西电源)

deb http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial main restricted universe multiverse
#deb-src http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial main restricted universe multiverse

deb http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-security main restricted universe multiverse
#deb-src http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-security main restricted universe multiverse

deb http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-updates main restricted universe multiverse
#deb-src http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-updates main restricted universe multiverse

#deb http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-backports main restricted universe multiverse
#deb-src http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-backports main restricted universe multiverse

#deb http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-proposed main restricted universe multiverse
#deb-src http://linux.xidian.edu.cn/mirrors/ubuntu/ xenial-proposed main restricted universe multiverse

Tsinghua University mirror (清华源)

deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ xenial-security main restricted universe multiverse

NetEase mirror (网易源)

deb http://mirrors.163.com/ubuntu/ xenial main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ xenial-security main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ xenial-updates main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb http://mirrors.163.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ xenial main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ xenial-security main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ xenial-updates main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ xenial-proposed main restricted universe multiverse
deb-src http://mirrors.163.com/ubuntu/ xenial-backports main restricted universe multiverse

Switching the Ubuntu 18.04 mirrors to a mirror in China

Back up the current source list

sudo cp /etc/apt/sources.list /etc/apt/sources.list.backup

Replace the source list

sudo vi /etc/apt/sources.list

Delete the existing contents, paste in the Tsinghua list below, then save and close.

Tsinghua University mirror (清华镜像源)

deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse
deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-security main restricted universe multiverse

# pre-release sources; enabling them is not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ bionic-proposed main restricted universe multiverse

Refresh the package index

sudo apt-get update
sudo apt-get -f install
sudo apt-get upgrade

Using overseas public container images conveniently and securely in the AWS China regions

References:

  1. Using overseas public container images conveniently and securely in the AWS China regions - AWS Team - Amazon AWS official blog

Step 1: Launch an overseas cloud server

This walkthrough uses an AWS EC2 instance in us-east-2 running Ubuntu 18.04.

apt update
apt upgrade

Step 2: Install Node.js

For this part, see the section "Installing Node.js on Ubuntu 18.04" below.

Step 3: Install the AWS CLI

The AWS CLI is written in Python, so once Python and pip are installed a single command does it:

# apt install python3-pip
pip3 install awscli

Confirm the installation by running aws --version.

Step 4: Create an IAM user

Reference: AWS CLI basics

Note: this step uses the Access Key and Secret Access Key of an IAM user in the AWS global (overseas) partition.

If you do not have an IAM user yet

Log in as the root user, open the IAM service, and go to the IAM dashboard to add a user, ticking the required options in steps 1 and 2 of the wizard (shown as screenshots in the original post).

Finally, remember to save the CSV file that records the access keys and the sign-in URL.

If you are already logged in as an IAM user

Create an access key from the IAM console (screenshot omitted).

Note: if you created an access key earlier and still have it, there is no need to create another one.

$ aws configure
AWS Access Key ID [None]: <your-accesskeyID>
AWS Secret Access Key [None]: <your-secretAccessKey>
Default region name [None]: us-east-2 # use the region of your own IAM user
Default output format [None]: json

Step 5: Build the CDK project

Clone the GitHub project and build it:

$ git clone https://github.com/aws-samples/amazon-ecr-replication-for-pub-container-images.git
$ cd amazon-ecr-replication-for-pub-container-images
$ npm install

Deploy the solution to the AWS overseas region. The parameters targetRegion, targetRegionAK, and targetRegionSK are, respectively, the AWS China region and the access key and secret access key of the AWS China-region user. The key is stored in AWS Secrets Manager so that CodeBuild can log in to ECR in the China region.

# If the overseas IAM user has no CDKToolkit stack yet, run the command below
# first, otherwise the deploy fails (the error message also says to run it)
# $ npx cdk bootstrap aws://unknown-account/unknown-region
$ npx cdk deploy --parameters targetRegion=cn-northwest-1 --parameters targetRegionAK=<your-accesskeyID> --parameters targetRegionSK=<your-secretAccessKey>

After a successful deploy, the CloudFormation console of the overseas account shows the newly created stack, and the CodeCommit console shows the pub-images-mirror repository.
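The same check can be done from the CLI instead of the console (region as in Step 1):

aws cloudformation list-stacks --region us-east-2 --stack-status-filter CREATE_COMPLETE
aws codecommit list-repositories --region us-east-2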

Step 6: Update pub-images-mirror/images.txt in the overseas account

In the CodeCommit console, open images.txt in the pub-images-mirror repository and click Edit.

Add the images you want replicated back to China to images.txt. The China-region IAM user will then see the pulled images in their ECR repositories.
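Presumably images.txt holds one public image reference per line; the exact format is defined by the aws-samples repository, so the entries below are only an illustrative guess:

# hypothetical contents of images.txt
nginx:1.19
k8s.gcr.io/pause:3.2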

Opening one of those repositories shows the mirrored image.

At this point the service is fully set up.

Connecting to a Linux instance from Windows with PuTTY and a key pair

References:

  1. Connecting to your Linux instance from Windows using PuTTY

Converting your private key with PuTTYgen

PuTTY does not natively support the private key format (.pem) generated for SSH key pairs. PuTTY provides a tool named PuTTYgen, which converts keys into the format PuTTY requires. You must convert your private key (.pem file) into this format (.ppk file) as follows before you can use PuTTY to connect to your instance.

To convert your private key

  1. From the Start menu, choose All Programs > PuTTY > PuTTYgen.
  2. Under Type of key to generate, choose RSA. If you are using an older version of PuTTYgen, choose SSH-2 RSA.
  3. Choose Load. By default, PuTTYgen displays only files with the .ppk extension. To locate your .pem file, select the option to display files of all types.
  4. Select the .pem file for the key pair that you specified when you launched the instance, then choose Open. PuTTYgen displays a notice that the .pem file was successfully imported. Choose OK.
  5. To save the key in the format that PuTTY can use, choose Save private key. PuTTYgen displays a warning about saving the key without a passphrase. Choose Yes.


Note

A passphrase on a private key is an extra layer of protection. Even if your private key is discovered, it cannot be used without the passphrase. The downside of using a passphrase is that it makes automation harder, because human intervention is needed to log on to an instance or to copy files to one.


  6. Give the key the same name you used for the key pair (for example, my-key-pair) and choose Save. PuTTY automatically adds the .ppk file extension.

Your private key is now in the correct format for use with PuTTY. You can connect to your instance using PuTTY's SSH client.
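As an aside, if a Linux machine is available, the command-line puttygen from the putty-tools package performs the same conversion; a sketch assuming the key file is named my-key-pair.pem:

# install the command-line PuTTY tools
sudo apt install putty-tools
# convert the .pem private key to PuTTY's .ppk format
puttygen my-key-pair.pem -O private -o my-key-pair.ppk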



Connecting to your Linux instance

Use the following procedure to connect to your Linux instance with PuTTY. You will need the .ppk file created from your private key.

To connect to your instance with PuTTY

  1. Start PuTTY (from the Start menu, choose All Programs > PuTTY > PuTTY).

  2. In the Category pane, choose Session and complete the following fields:

    a. In the Host Name box, do one of the following:

    • (Public DNS) To connect using your instance's public DNS name, enter my-instance-user-name@my-instance-public-dns-name.
    • (IPv6) Alternatively, if your instance has an IPv6 address, enter my-instance-user-name@my-instance-IPv6-address to connect over IPv6.
    • (In practice, connecting by the instance's IPv4 address works just as well.)

    b. Ensure that the Port value is 22.

    c. Under Connection type, select SSH.

  3. (Optional) You can configure PuTTY to automatically send "keepalive" data at regular intervals to keep the session active, which is useful to avoid being disconnected because of inactivity. In the Category pane, choose Connection, then enter the desired interval in the Seconds between keepalives field. For example, if your session disconnects after 10 minutes of inactivity, enter 180 to have PuTTY send keepalive data every 3 minutes.

  4. In the Category pane, expand Connection, then expand SSH, and choose Auth. Do the following:

    a. Choose Browse.

    b. Select the .ppk file generated for your key pair, then choose Open.

    c. (Optional) If you plan to start this session again later, you can save the session information for future use: under Category, choose Session, enter a name under Saved Sessions, and choose Save.

    d. Choose Open.

  5. If this is the first time you have connected to this instance, PuTTY displays a security alert asking whether you trust the host you are connecting to. Choose Yes. A window opens and you are connected to your instance.


Note

If you specified a passphrase when you converted your private key to the PuTTY format, you must provide that passphrase when you log in to the instance.


Installing Node.js on Ubuntu 18.04

References:

  1. The right way to install Node.js on Ubuntu

Step 1

If Node is already installed, it is best to uninstall it first.

Removing Node may require root privileges.

$ sudo apt remove nodejs

Remove any previous global node_modules packages.

# check that this directory actually exists at this location before running
$ sudo rm -rf /usr/lib/node_modules

Step 2: Install NVM

NVM stands for Node Version Manager.

See the NVM GitHub repository.

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.0/install.sh | bash
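The installer appends initialization lines to your shell profile; to use nvm in the current shell without reopening it, source them (these are the standard lines from the NVM README):

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# confirm that nvm is on the PATH
nvm --version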

Step 3: Install Node

# install the latest version
$ nvm install node
# install a specific version
$ nvm install 12.14.1 # or 10.10.0, 8.9.1, etc

Check the installed Node version with node --version.
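With more than one version installed, nvm can list and switch between them:

$ nvm ls                      # list installed versions
$ nvm use 12.14.1             # switch the current shell to 12.14.1
$ nvm alias default 12.14.1   # make it the default for new shells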