Overview

Research on machine learning and its real-world deployment continue to advance rapidly. Nevertheless, machine learning has yet to reach the level of intelligence that humans possess. For machine learning to acquire such high-level intelligence, it must be able to learn an unspecified stream of tasks continually. Today, however, generalization in machine learning is hindered by a phenomenon called catastrophic forgetting: every time a model learns a new task, its test accuracy on previously learned tasks degrades. Continual learning is the field that aims to prevent catastrophic forgetting and, beyond that, to achieve ongoing learning by reusing knowledge from past tasks; it has been gaining momentum in recent years. This seminar surveys the methods proposed so far to prevent catastrophic forgetting from the perspective of continual learning, and discusses how these methods compare and where their limitations lie.
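To make the phenomenon concrete, the short sketch below (not part of the seminar materials) trains one small PyTorch classifier on a toy task A, then continues training it on a task B whose label assignment is reversed, and reports how accuracy on task A collapses. The data, network, and hyperparameters are arbitrary illustrative choices.

    # Minimal illustration of catastrophic forgetting on two toy tasks.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def make_task(shift):
        # Two-class Gaussian blobs; the sign of `shift` controls which side each class sits on.
        x0 = torch.randn(200, 2) + torch.tensor([shift, 0.0])
        x1 = torch.randn(200, 2) + torch.tensor([-shift, 0.0])
        x = torch.cat([x0, x1])
        y = torch.cat([torch.zeros(200, dtype=torch.long),
                       torch.ones(200, dtype=torch.long)])
        return x, y

    def train(model, x, y, epochs=200):
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    def accuracy(model, x, y):
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))

    xa, ya = make_task(shift=2.0)    # task A
    xb, yb = make_task(shift=-2.0)   # task B: the class assignment is reversed

    train(model, xa, ya)
    acc_a_before = accuracy(model, xa, ya)

    train(model, xb, yb)             # continue training on task B only
    acc_a_after = accuracy(model, xa, ya)

    # Accuracy on task A drops sharply once the model has adapted to task B.
    print(f"task A accuracy: {acc_a_before:.2f} -> {acc_a_after:.2f}")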

Objectives

  • Give the goals and a definition of continual learning.
  • Explain a taxonomy of methods for preventing catastrophic forgetting and describe representative methods (a sketch of one such method follows this list).
  • Compare the methods covered and understand their limitations.
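One representative family appearing in the references is regularization-based methods such as Elastic Weight Consolidation (EWC) [12]. The sketch below, assuming a toy PyTorch setup with placeholder data and hyperparameters, shows only the core idea: after finishing an old task, estimate how important each parameter was (here via a crude diagonal Fisher approximation) and add a quadratic penalty that discourages later changes to the important parameters.

    # Sketch of an EWC-style quadratic penalty [12]; names and hyperparameters
    # are illustrative placeholders, not a reference implementation.
    import torch
    import torch.nn as nn

    def diagonal_fisher(model, x, y, loss_fn):
        # Per-parameter importance as squared gradients of the old-task loss
        # (a crude diagonal Fisher approximation).
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}

    def ewc_penalty(model, old_params, fisher, lam=100.0):
        # Quadratic penalty anchoring parameters important for the old task.
        penalty = 0.0
        for n, p in model.named_parameters():
            penalty = penalty + (fisher[n] * (p - old_params[n]) ** 2).sum()
        return 0.5 * lam * penalty

    # Usage: after training on task A, snapshot parameters and importances,
    # then add the penalty to the loss while training on task B.
    model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
    loss_fn = nn.CrossEntropyLoss()
    xa, ya = torch.randn(100, 2), torch.randint(0, 2, (100,))
    xb, yb = torch.randn(100, 2), torch.randint(0, 2, (100,))

    fisher = diagonal_fisher(model, xa, ya, loss_fn)
    old_params = {n: p.detach().clone() for n, p in model.named_parameters()}

    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(xb), yb) + ewc_penalty(model, old_params, fisher)
        loss.backward()
        opt.step()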

Date and Time

Venue: Online (Zoom)
Date and time: March 14, 2021, 10:00–11:00

References

[1] R. Aljundi et al. Expert Gate: Lifelong Learning with a Network of Experts. CVPR, 2017. [arXiv]
[2] R. Aljundi et al. Memory Aware Synapses: Learning What (not) to Forget. ECCV, 2018. [arXiv]
[3] A. Chaudhry et al. Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence. ECCV, 2018. [arXiv]
[4] A. Chaudhry et al. Efficient Lifelong Learning with A-GEM. ICLR, 2019. [arXiv]
[5] A. Chaudhry et al. On tiny episodic memories in continual learning. 2019. [arXiv]
[6] M. Delange et al. A continual learning survey: Defying forgetting in classification tasks. PAMI, 2021. [arXiv]
[7] S. Farquhar and Y. Gal. Towards Robust Evaluations of Continual Learning. 2018. [arXiv]
[8] C. Fernando et al. PathNet: Evolution channels gradient descent in super neural networks. 2017. [arXiv]
[9] I. Goodfellow et al. An empirical investigation of catastrophic forgetting in gradient-based neural networks. ICLR, 2014. [arXiv]
[10] D. Isele and A. Cosgun. Selective experience replay for lifelong learning. AAAI, 2018. [arXiv]
[11] H. Jung et al. Less-Forgetful Learning for Domain Expansion in Deep Neural Networks. AAAI, 2018. [arXiv]
[12] J. Kirkpatrick et al. Overcoming catastrophic forgetting in neural networks. PNAS, 2017. [arXiv]
[13] J. Knoblauch et al. Optimal Continual Learning has Perfect Memory and is NP-hard. ICML, 2020. [arXiv]
[14] S. Lee et al. Overcoming catastrophic forgetting by incremental moment matching. NeurIPS, 2017. [arXiv]
[15] S. Levine et al. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. 2020. [arXiv]
[16] Z. Li and D. Hoiem. Learning without Forgetting. PAMI, 2018. [arXiv]
[17] X. Liu et al. Rotate your Networks: Better Weight Consolidation and Less Catastrophic Forgetting. ICPR, 2018. [arXiv]
[18] D. Lopez-Paz and M. Ranzato. Gradient episodic memory for continual learning. NeurIPS, 2017. [arXiv]
[19] A. Mallya and S. Lazebnik. PackNet: Adding Multiple Tasks to a Single Network by Iterative Pruning. CVPR, 2018. [arXiv]
[20] A. Mallya et al. Piggyback: Adapting a Single Network to Multiple Tasks by Learning to Mask Weights. ECCV, 2018. [arXiv]
[21] Y. Netzer et al. Reading Digits in Natural Images with Unsupervised Feature Learning. NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. [pdf]
[22] G. Parisi et al. Continual lifelong learning with neural networks: A review. Neural Networks, 2019. [arXiv]
[23] A. Pentina et al. A PAC-Bayesian Bound for Lifelong Learning. ICML, 2014. [arXiv]
[24] J. Ramapuram et al. Lifelong generative modeling. Neurocomputing, 2020. [arXiv]
[25] S. Rebuffi et al. iCaRL: Incremental classifier and representation learning. CVPR, 2017. [arXiv]
[26] D. Rolnick et al. Experience Replay for Continual Learning. NeurIPS, 2019. [arXiv]
[27] A. Rusu et al. Progressive Neural Networks. 2016. [arXiv]
[28] J. Serrà et al. Overcoming Catastrophic Forgetting with Hard Attention to the Task. ICML, 2018. [arXiv]
[29] H. Shin et al. Continual learning with deep generative replay. NeurIPS, 2017. [arXiv]
[30] A. R. Triki et al. Encoder Based Lifelong Learning. ICCV, 2017. [arXiv]
[31] G. van de Ven and A. S. Tolias. Three scenarios for continual learning. NeurIPS Continual Learning Workshop, 2018. [arXiv]
[32] J. Xu and Z. Zhu. Reinforced continual learning. NeurIPS, 2018. [arXiv]
[33] J. Zhang et al. Class-incremental learning via deep model consolidation. WACV, 2020. [arXiv]
[34] F. Zenke et al. Continual learning through synaptic intelligence. ICML, 2017. [arXiv]