Humans Predict Action using Grammar-like Structures.
Wörgötter, F; Ziaeetabar, F; Pfeiffer, S; Kaya, O; Kulvicius, T; Tamosiunaite, M.
Affiliation
  • Wörgötter F; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany. worgott@gwdg.de.
  • Ziaeetabar F; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany.
  • Pfeiffer S; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany.
  • Kaya O; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany.
  • Kulvicius T; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany.
  • Tamosiunaite M; Universität Göttingen, Department for Computational Neuroscience at the Bernstein Center Göttingen, Inst. of Physics 3 and Leibniz Science Campus for Primate Cognition, Göttingen, Germany.
Sci Rep; 10(1): 3999, 2020 Mar 04.
Article in English | MEDLINE | ID: mdl-32132602
Efficient action prediction is of central importance for fluent workflows between humans and, equally, for human-robot interaction. To achieve prediction, actions can be algorithmically encoded as a series of events, where every event corresponds to a change in a (static or dynamic) relation between some of the objects in the scene. These structures are similar to a context-free grammar and, importantly, within this framework the actual objects are irrelevant for prediction; only their relational changes matter. Manipulation actions, among others, can be uniquely encoded this way. Using a virtual reality setup and testing several different manipulation actions, we show here that humans predict actions in an event-based manner, following the sequence of relational changes. Testing this with chained actions, we measure the percentage of predictive temporal gain for humans and compare it to action chains performed by robots, showing that the gain is approximately equal. Event-based and thus object-independent action recognition and prediction may be important for cognitively deducing properties of unknown objects seen in action, helping to address the bootstrapping of object knowledge, especially in infants.
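
The sketch below is a minimal illustration (not the authors' implementation) of the idea described in the abstract: an action is encoded as an object-independent chain of relational changes, and an ongoing action is predicted by prefix-matching the observed events against a library of known event chains. The relation labels ("T"/"N" for touching/not touching), the two example actions, and all names such as LIBRARY and predict are hypothetical assumptions introduced here for illustration only.

# Minimal sketch: event-based, object-independent action encoding and
# prediction by prefix matching. Relation labels and example actions are
# hypothetical, not taken from the paper.

from typing import Dict, List, Tuple

# Each event is a change of one pairwise relation, e.g. hand-object contact
# appearing ("T" = touching) or disappearing ("N" = not touching). Objects
# are referred to only by abstract roles, never by their identity.
Event = Tuple[str, str, str]          # (role_a, role_b, new_relation)
EventChain = List[Event]

# Hypothetical library of known manipulations, each stored as an
# object-independent chain of relational changes.
LIBRARY: Dict[str, EventChain] = {
    "pick_and_place": [
        ("hand", "object", "T"),      # hand touches object
        ("object", "support", "N"),   # object is lifted off its support
        ("object", "support", "T"),   # object is put down on a support
        ("hand", "object", "N"),      # hand releases object
    ],
    "push": [
        ("hand", "object", "T"),      # hand touches object
        ("hand", "object", "N"),      # contact ends without lifting
    ],
}

def predict(observed: EventChain) -> List[str]:
    """Return all actions whose event chain starts with the observed events."""
    return [
        name for name, chain in LIBRARY.items()
        if chain[: len(observed)] == observed
    ]

if __name__ == "__main__":
    # After the first relational change, both actions are still possible;
    # after the second, only one remains, i.e. the action is identified
    # (and its remaining events predicted) before it is complete.
    print(predict([("hand", "object", "T")]))
    print(predict([("hand", "object", "T"), ("object", "support", "N")]))

Under these assumptions, the temporal gain discussed in the abstract corresponds to how early the observed prefix becomes unique within the library, which is independent of which concrete objects fill the abstract roles.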
Full text: 1 Collection: 01-international Database: MEDLINE Main subject: Visual Perception / Recognition, Psychology / Virtual Reality / Linguistics Study type: Prognostic_studies / Risk_factors_studies Limits: Female / Humans / Male Language: En Journal: Sci Rep Year: 2020 Document type: Article Country of affiliation: Germany Country of publication: United Kingdom