Embodied Object Representation Learning and Recognition.
Van de Maele, Toon; Verbelen, Tim; Çatal, Ozan; Dhoedt, Bart.
Affiliation
  • Van de Maele T; IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium.
  • Verbelen T; IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium.
  • Çatal O; IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium.
  • Dhoedt B; IDLab, Department of Information Technology, Ghent University - imec, Ghent, Belgium.
Front Neurorobot ; 16: 840658, 2022.
Article in English | MEDLINE | ID: mdl-35496899
Scene understanding and decomposition are crucial challenges for intelligent systems, whether for object manipulation, navigation, or any other task. Although current machine and deep learning approaches to object detection and classification obtain high accuracy, they typically do not leverage interaction with the world and are limited to the set of objects seen during training. Humans, on the other hand, learn to recognize and classify different objects by actively engaging with them on first encounter. Moreover, recent theories in neuroscience suggest that cortical columns in the neocortex play an important role in this process by building predictive models of objects in their own reference frames. In this article, we present an enactive embodied agent that implements such a generative model for object interaction. For each object category, our system instantiates a deep neural network, called a Cortical Column Network (CCN), that represents the object in its own reference frame by learning a generative model that predicts the expected transform in pixel space, given an action. The model parameters are optimized through the active inference paradigm, i.e., the minimization of variational free energy. When provided with a visual observation, an ensemble of CCNs each vote on their belief of observing that specific object category, yielding a potential object classification. If the likelihood of the selected category is too low, the object is detected as an unknown category, and the agent can instantiate a novel CCN for this category. We validate our system in a simulated environment, where it needs to learn to discern multiple objects from the YCB dataset. We show that classification accuracy improves as the embodied agent gathers more evidence, and that it is able to learn about novel, previously unseen objects. Finally, we show that an agent driven through active inference can choose its actions to reach a preferred observation.
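The ensemble classification rule described in the abstract can be illustrated with a minimal sketch: each per-category model (CCN) scores the current observation, the best-scoring category wins, and if even the best score falls below a novelty threshold, a fresh model is instantiated for the unknown category. All names, the toy scoring function, and the threshold below are illustrative assumptions, not the paper's actual implementation.

```python
NOVELTY_THRESHOLD = -5.0  # assumed log-likelihood cutoff for "unknown object"

class ToyCCN:
    """Stand-in for a Cortical Column Network: scores one object category."""
    def __init__(self, prototype):
        self.prototype = prototype  # e.g., a feature vector for the category

    def log_likelihood(self, observation):
        # Toy Gaussian-style score: higher when the observation is near the prototype.
        dist2 = sum((o - p) ** 2 for o, p in zip(observation, self.prototype))
        return -dist2

def classify(ensemble, observation):
    """Return (category_index, score); grow the ensemble on novelty."""
    scores = [ccn.log_likelihood(observation) for ccn in ensemble]
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < NOVELTY_THRESHOLD:
        # Likelihood too low for every known category: treat as a novel
        # object and instantiate a new CCN for it.
        ensemble.append(ToyCCN(prototype=list(observation)))
        return len(ensemble) - 1, 0.0
    return best, scores[best]

ensemble = [ToyCCN([0.0, 0.0]), ToyCCN([5.0, 5.0])]
cat, _ = classify(ensemble, [0.1, -0.2])       # near category 0 -> classified as 0
new_cat, _ = classify(ensemble, [20.0, 20.0])  # far from both -> new category 2
```

In the paper, evidence is accumulated over multiple interactions rather than from a single observation, so this single-shot vote is only the simplest case of the mechanism.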
Full text: 1 Collection: 01-international Database: MEDLINE Study type: Prognostic_studies Language: En Journal: Front Neurorobot Year: 2022 Document type: Article Country of affiliation: Belgium Country of publication: Switzerland