Results 1 - 3 of 3
1.
Neural Netw ; 21(9): 1272-7, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18701255

ABSTRACT

We present and study a probabilistic neural automaton in which the fraction of simultaneously updated neurons is a parameter, ρ ∈ (0, 1). For small ρ, there is relaxation towards one of the attractors and great sensitivity to external stimuli; for ρ ≥ ρ_c, there is itinerancy among attractors. Tuning ρ in this regime, oscillations may abruptly change from regular to chaotic and vice versa, which allows one to control the efficiency of the searching process. We discuss the similarity of the model's behavior to recent observations, and the possible role of chaos in neurobiology.


Subject(s)
Models, Neurological; Models, Statistical; Neural Networks, Computer; Neurons/physiology; Nonlinear Dynamics; Synapses/physiology; Algorithms; Humans
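Not part of the record above, but the dynamics the abstract describes can be illustrated with a minimal, hypothetical sketch: a Hebbian attractor network in which only a fraction ρ of the neurons is updated in parallel at each step. All sizes, seeds, and names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 200, 3                      # neurons, stored patterns (illustrative)
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N    # Hebbian coupling matrix
np.fill_diagonal(W, 0.0)

def step(s, rho):
    """Update a randomly chosen fraction rho of the neurons in parallel."""
    k = max(1, int(rho * N))
    idx = rng.choice(N, size=k, replace=False)
    s = s.copy()
    s[idx] = np.sign(W[idx] @ s)   # subset updated simultaneously
    s[s == 0] = 1
    return s

def overlap(s, mu):
    """Overlap of state s with stored pattern mu (1.0 = perfect recall)."""
    return float(patterns[mu] @ s) / N

# Start near pattern 0 (10% of spins flipped) and relax with small rho.
s = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
s[flip] *= -1
for _ in range(50):
    s = step(s, rho=0.05)
```

With small ρ the updating is nearly sequential and the state relaxes into the nearby attractor; pushing ρ toward 1 recovers fully parallel dynamics, the regime in which the abstract reports itinerancy among attractors.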
2.
Phys Rev E Stat Nonlin Soft Matter Phys ; 76(1 Pt 1): 011102, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17677405

ABSTRACT

We reformulate the cavity approximation (CA), a class of algorithms recently introduced for improving the Bethe approximation estimates of marginals in graphical models. In our formulation, which allows for the treatment of multivalued variables, a further generalization to factor graphs with arbitrary order of interaction factors is explicitly carried out, and a message-passing algorithm that implements the first-order correction to the Bethe approximation is described. Furthermore, we investigate an implementation of the CA for pairwise interactions. In all cases considered we could confirm that CA[k] with increasing k provides a sequence of approximations of markedly increasing precision. Furthermore, in some cases we could also confirm the general expectation that the approximation of order k, whose computational complexity is O(N^(k+1)), has an error that scales as 1/N^(k+1) with the size of the system. We discuss the relation between this approach and some recent developments in the field.
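The baseline the abstract corrects, the Bethe approximation, reduces on a tree to ordinary cavity/belief-propagation recursions that are exact. A minimal sketch for a pairwise Ising chain (all couplings and fields are illustrative assumptions, not taken from the paper) can be checked against brute-force enumeration:

```python
import itertools
import numpy as np

# Illustrative pairwise Ising chain: weight exp(sum J*s_k*s_{k+1} + sum h_k*s_k).
J = [0.5, -0.3, 0.8]          # couplings on edges (0-1), (1-2), (2-3)
h = [0.2, 0.0, -0.1, 0.4]     # local fields
n = 4

def brute_marginal(i):
    """Exact magnetization <s_i> by summing over all 2^n configurations."""
    Z = m = 0.0
    for s in itertools.product([-1, 1], repeat=n):
        E = sum(J[e] * s[e] * s[e + 1] for e in range(n - 1))
        E += sum(h[k] * s[k] for k in range(n))
        w = np.exp(E)
        Z += w
        m += w * s[i]
    return m / Z

def bp_marginal(i):
    """Cavity (Bethe/BP) magnetization: exact on a chain, approximate on loops."""
    u_fwd = [0.0] * n             # cavity field arriving from the left
    for k in range(1, n):
        u_fwd[k] = np.arctanh(np.tanh(J[k - 1]) * np.tanh(h[k - 1] + u_fwd[k - 1]))
    u_bwd = [0.0] * n             # cavity field arriving from the right
    for k in range(n - 2, -1, -1):
        u_bwd[k] = np.arctanh(np.tanh(J[k]) * np.tanh(h[k + 1] + u_bwd[k + 1]))
    return np.tanh(h[i] + u_fwd[i] + u_bwd[i])
```

On loopy graphs these recursions are only approximate, which is where the CA[k] corrections described in the abstract come in.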

3.
Neural Netw ; 17(1): 29-36, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14690704

ABSTRACT

A toy model of a neural network in which both Hebbian learning and reinforcement learning occur is studied. The problem of 'path interference', which causes the neural net to quickly forget previously learned input-output relations, is tackled by adding a Hebbian term (proportional to the learning rate ν) to the reinforcement term (proportional to δ) in the learning rule. It is shown that the number of learning steps is reduced considerably if 1/4

Subject(s)
Feedback, Psychological; Nerve Net/physiology; Neural Networks, Computer; Reinforcement, Psychology; Animals; Computer Simulation; Conditioning, Psychological; Humans; Models, Neurological; Stochastic Processes
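The combined rule the abstract describes — a reinforcement term scaled by δ and a reward signal, plus a Hebbian term scaled by ν — can be sketched as a single weight update. This is a hypothetical illustration of the general idea; the function name, parameter values, and matrix shapes are assumptions, not the paper's actual model.

```python
import numpy as np

def update(W, pre, post, r, delta=0.1, nu=0.02):
    """One learning step on weight matrix W (post x pre).

    delta * r * outer : reinforcement term, gated by the reward signal r
    nu * outer        : Hebbian term, applied on every step regardless of r
    """
    outer = np.outer(post, pre)   # coincidence of pre- and post-activity
    return W + delta * r * outer + nu * outer

W = np.zeros((2, 3))
pre = np.array([1.0, -1.0, 1.0])
post = np.array([1.0, 1.0])
W1 = update(W, pre, post, r=+1.0)   # rewarded step strengthens the pairing
```

The Hebbian term keeps reinforcing recently used input-output associations even when the reward signal is zero, which is how it counteracts the path interference discussed above.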