Comparing continual task learning in minds and machines.
Flesch, Timo; Balaguer, Jan; Dekker, Ronald; Nili, Hamed; Summerfield, Christopher.
Affiliation
  • Flesch T; Department of Experimental Psychology, University of Oxford, OX2 6BW Oxford, United Kingdom; timo.flesch@psy.ox.ac.uk.
  • Balaguer J; Department of Experimental Psychology, University of Oxford, OX2 6BW Oxford, United Kingdom.
  • Dekker R; DeepMind, EC4A 3TW London, United Kingdom.
  • Nili H; Department of Experimental Psychology, University of Oxford, OX2 6BW Oxford, United Kingdom.
  • Summerfield C; Department of Experimental Psychology, University of Oxford, OX2 6BW Oxford, United Kingdom.
Proc Natl Acad Sci U S A; 115(44): E10313-E10322, 2018 Oct 30.
Article in English | MEDLINE | ID: mdl-30322916
Humans can learn to perform multiple tasks in succession over the lifespan ("continual" learning), whereas current machine learning systems fail. Here, we investigated the cognitive mechanisms that permit successful continual learning in humans and harnessed our behavioral findings for neural network design. Humans categorized naturalistic images of trees according to one of two orthogonal task rules that were learned by trial and error. Training regimes that focused on individual rules for prolonged periods (blocked training) improved human performance on a later test involving randomly interleaved rules, compared with control regimes that trained in an interleaved fashion. Analysis of human error patterns suggested that blocked training encouraged humans to form "factorized" representations that optimally segregated the tasks, especially for those individuals with a strong prior bias to represent the stimulus space in a well-structured way. By contrast, standard supervised deep neural networks trained on the same tasks suffered catastrophic forgetting under blocked training, due to representational interference in the deeper layers. However, augmenting deep networks with an unsupervised generative model that allowed them to first learn a good embedding of the stimulus space (similar to that observed in humans) reduced catastrophic forgetting under blocked training. Building artificial agents that first learn a model of the world may be one promising route to solving continual task performance in artificial intelligence research.
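The contrast between blocked and interleaved training schedules described in the abstract can be illustrated with a toy simulation. The sketch below (my own minimal construction, not the authors' actual model or stimuli) trains a small supervised network on two orthogonal classification rules over the same 2-D stimulus space, either one task at a time (blocked) or with trials mixed (interleaved), and then measures retained accuracy on the first task. All function names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trials(n, task):
    # 2-D stimuli; task 0 labels by dimension 0, task 1 by dimension 1
    # (two orthogonal rules over the same stimulus space)
    x = rng.uniform(-1, 1, size=(n, 2))
    y = (x[:, task] > 0).astype(float)
    ctx = np.zeros((n, 2))
    ctx[:, task] = 1.0           # one-hot context cue signaling the task
    return np.hstack([x, ctx]), y

def train(schedule, epochs=200, lr=0.5):
    # Plain supervised one-hidden-layer network trained by gradient descent;
    # `schedule` is a list of (X, y) blocks trained strictly in order.
    w1 = rng.normal(0, 0.1, (4, 16))
    w2 = rng.normal(0, 0.1, 16)
    for X, y in schedule:
        for _ in range(epochs):
            h = np.tanh(X @ w1)
            p = 1.0 / (1.0 + np.exp(-(h @ w2)))      # sigmoid output
            err = p - y                              # cross-entropy gradient
            w2 -= lr * h.T @ err / len(y)
            w1 -= lr * X.T @ (np.outer(err, w2) * (1 - h**2)) / len(y)
    def accuracy(X, y):
        p = 1.0 / (1.0 + np.exp(-(np.tanh(X @ w1) @ w2)))
        return float(((p > 0.5) == y).mean())
    return accuracy

Xa, ya = make_trials(200, 0)
Xb, yb = make_trials(200, 1)

# Blocked: all of task A, then all of task B (task A may be overwritten).
blocked = train([(Xa, ya), (Xb, yb)])
# Interleaved: trials from both tasks mixed into a single block.
interleaved = train([(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))])

print("task-A accuracy, blocked:    ", blocked(Xa, ya))
print("task-A accuracy, interleaved:", interleaved(Xa, ya))
```

In this setup the blocked network's task-A accuracy typically degrades after task-B training, while the interleaved network retains both rules, mirroring the catastrophic-forgetting pattern the paper reports for standard supervised networks. The paper's proposed remedy, pretraining an unsupervised generative model to obtain a good stimulus embedding before supervised learning, is not reproduced here.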
Subject(s)
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Learning / Nerve Net Study type: Prognostic_studies Limits: Adult / Female / Humans / Male / Middle aged Language: En Journal: Proc Natl Acad Sci U S A Year: 2018 Document type: Article Country of publication: United States