Results 1 - 4 of 4
1.
Neural Netw ; 72: 13-30, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26559472

ABSTRACT

Humans can point fairly accurately to memorized states with their eyes closed, despite slow or even missing sensory feedback. Arm dynamics also commonly change during development or after injury. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from a few examples. Our results illustrate successful generalization of path integration with a dynamic neural field, by which the robotic arm can move in arbitrary directions and at arbitrary velocities. In addition, by adapting the strength of the motor effect, the observer implicitly learns to compensate for an image acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavioral data.
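The bell-shaped velocity profile mentioned above is the classic behavioral signature of human point-to-point reaches, and it falls out of a minimum-jerk trajectory. The following sketch (an illustration of that benchmark profile, not the paper's neural field model) computes it for a reach of duration T and distance d:

```python
import numpy as np

def min_jerk_velocity(t, T=1.0, d=1.0):
    """Velocity of a minimum-jerk reach of duration T and distance d.

    Position is x(t) = d * (10 s^3 - 15 s^4 + 6 s^5) with s = t/T;
    differentiating gives the bell-shaped velocity profile below.
    """
    s = t / T
    return (d / T) * (30 * s**2 - 60 * s**3 + 30 * s**4)

ts = np.linspace(0, 1, 101)
v = min_jerk_velocity(ts)
# Velocity is zero at both endpoints and peaks at mid-movement,
# the "bell shape" the arm model is reported to reproduce.
```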


Subject(s)
Brain/physiology; Models, Neurological; Movement/physiology; Psychomotor Performance/physiology; Arm; Humans; Learning; Robotics
2.
Neural Netw ; 67: 121-30, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25897512

ABSTRACT

Biological systems are capable of learning that certain stimuli are valuable while ignoring the many that are not, and thus perform feature selection. In machine learning, one effective feature selection approach is the least absolute shrinkage and selection operator (LASSO) form of regularization, which is equivalent to assuming a Laplacian prior distribution on the parameters. We review how such Bayesian priors can be implemented in gradient descent as a form of weight decay, which is a biologically plausible mechanism for Bayesian feature selection. In particular, we describe a new prior that offsets, or "raises", the Laplacian prior distribution. We evaluate this alongside the Gaussian and Cauchy priors in gradient descent using a generic regression task with few relevant and many irrelevant features. We find that raising the Laplacian leads to lower prediction error because it is a better model of the underlying distribution. We also consider two biologically relevant online learning tasks, one synthetic and one modeled after the perceptual expertise task of Krigolson et al. (2009). Here, raising the Laplacian prior avoids the rapid erosion of relevant parameters in the period following training, because it only allows small weights to decay. This better matches the limited loss of association seen between days in the human data of the perceptual expertise task. Raising the Laplacian prior thus yields a biologically plausible form of Bayesian feature selection that is effective in biologically relevant contexts.
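In MAP gradient descent, each prior contributes a weight-decay term equal to the negative gradient of its log density. A minimal sketch of the idea, with illustrative scale parameters that are assumptions rather than the paper's settings (in particular, the "raised" decay below is one plausible form whose pull fades for large weights, so that only small weights keep shrinking):

```python
import numpy as np

# MAP update: w <- w - eta * (dL/dw + decay(w)), where decay(w) is
# the negative gradient of the log prior.

def decay_gaussian(w, lam=0.01):
    # Gaussian prior -> classic L2 weight decay, proportional to w.
    return lam * w

def decay_laplacian(w, lam=0.01):
    # Laplacian prior -> L1/LASSO decay: constant magnitude, sign of w.
    return lam * np.sign(w)

def decay_raised_laplacian(w, lam=0.01, c=1.0):
    # Laplacian density offset ("raised") by a constant c: the decay
    # vanishes as |w| grows, so large (relevant) weights are spared.
    e = np.exp(-np.abs(w))
    return lam * np.sign(w) * e / (e + c)

w = np.array([-2.0, -0.1, 0.1, 2.0])
# The plain Laplacian decays all weights by the same amount; the
# raised variant barely touches the large ones.
```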


Subject(s)
Bayes Theorem; Machine Learning; Algorithms; Computer Simulation; Humans; Models, Neurological; Normal Distribution; Online Systems; Reward
3.
IEEE Trans Neural Netw Learn Syst ; 26(10): 2323-35, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25546864

ABSTRACT

Kohonen's self-organizing map (SOM) maps high-dimensional data into a low-dimensional representation (typically a 2-D or 3-D space) while preserving their topological characteristics. A major reason for its use is the ability to visualize data while preserving, as far as possible, their relations in the high-dimensional input space. Here, we seek to go further by incorporating semantic meaning into the low-dimensional representation. In a conventional SOM, the semantic context of the data, such as class labels, has no influence on the formation of the map. As an abstraction of neural function, the SOM models bottom-up self-organization but not feedback modulation, which is also ubiquitous in the brain. In this paper, we demonstrate a hierarchical neural network that learns a topographical map which also reflects the semantic context of the data. Our method combines unsupervised, bottom-up topographical map formation with top-down supervised learning. We discuss the mathematical properties of the proposed hierarchical neural network and demonstrate its abilities in empirical experiments.
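The bottom-up, unsupervised half of the hierarchy described above is the standard SOM update; a minimal sketch follows (the paper's top-down supervised modulation is omitted, and grid size, learning rate, and annealing schedule are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), epochs=20, lr=0.5, sigma=2.0):
    """Minimal Kohonen SOM: for each input, find the best-matching
    unit (BMU) and pull it and its grid neighbors toward the input."""
    h, gw = grid
    weights = rng.random((h * gw, data.shape[1]))
    # Grid coordinates of each unit, for the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(gw)], float)
    for _ in range(epochs):
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # squared grid distance
            nb = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighborhood
            weights += lr * nb[:, None] * (x - weights)        # pull toward input
        lr *= 0.95                                             # anneal learning rate
        sigma *= 0.95                                          # shrink neighborhood
    return weights

data = rng.random((200, 3))
w = train_som(data)
```

Class labels play no role in this update, which is exactly the limitation the hierarchical model addresses by adding top-down supervision.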


Subject(s)
Learning/physiology; Models, Neurological; Neural Networks, Computer; Neurons/physiology; Algorithms; Cluster Analysis; Humans; Time Factors
4.
Behav Brain Sci ; 36(3): 232-3, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23663363

ABSTRACT

While the target article provides a glowing account of the excitement in the field, we stress that hierarchical predictive learning in the brain requires sparseness of the representation. We also question the relation between Bayesian cognitive processes and hierarchical generative models as discussed in the target article.


Subject(s)
Attention/physiology; Brain/physiology; Cognition/physiology; Cognitive Science/trends; Perception/physiology; Humans