Results 1 - 9 of 9
1.
Sci Rep ; 5: 18112, 2015 Dec 16.
Article in English | MEDLINE | ID: mdl-26669858

ABSTRACT

Though widely hypothesized, limited evidence exists that human brain functions organize in global gradients of abstraction starting from sensory cortical inputs. Hierarchical representation is accepted in computational networks, and tentatively in visual neuroscience, yet no direct holistic demonstration exists in vivo. We developed network models enriched with tiered directionality by including input locations, a critical feature for localizing representation in networks generally. Grouped primary sensory cortices defined the network inputs, and global connectivity was mapped relative to these fused inputs. The resulting depth-oriented networks guided analyses of fMRI databases (~17,000 experiments; roughly a quarter of the fMRI literature). Formally, we tested whether network depth predicted the localization of abstract versus concrete behaviors over the whole set of studied brain regions. A new cortical graph metric, termed network-depth, ranked all databased cognitive-function activations. We thus objectively sorted a stratified landscape of cognition, starting from grouped sensory inputs in parallel and progressing deeper into cortex. This exposed an escalating amalgamation of function, or abstraction, with increasing network-depth, globally. Data from nearly 500 new participants confirmed these results. In conclusion, data-driven analyses defined a hierarchically ordered connectome and revealed a corresponding continuum of cognitive function. Progressive functional abstraction over network depth may be a fundamental feature of brains and is also observed in artificial networks.
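
The graph measure at the heart of this study lends itself to a compact illustration. Below is a minimal sketch, assuming network-depth is the minimum number of directed connectome steps from any grouped sensory input region; the toy graph and region names are hypothetical, not the paper's data.

```python
from collections import deque

def network_depth(graph, input_regions):
    """Multi-source BFS over a directed connectome graph: the depth of a
    region is the fewest directed steps from any grouped sensory input."""
    depth = {r: 0 for r in input_regions}
    queue = deque(input_regions)
    while queue:
        region = queue.popleft()
        for target in graph.get(region, ()):
            if target not in depth:
                depth[target] = depth[region] + 1
                queue.append(target)
    return depth

# Hypothetical toy connectome: sensory inputs feed association cortex.
toy = {"V1": ["V2"], "A1": ["STG"], "V2": ["PPC"], "STG": ["PPC"],
       "PPC": ["PFC"]}
print(network_depth(toy, ["V1", "A1"]))
# {'V1': 0, 'A1': 0, 'V2': 1, 'STG': 1, 'PPC': 2, 'PFC': 3}
```

Sorting regions by this depth value is what produces the input-to-deep ordering over which functional abstraction can then be ranked.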


Subject(s)
Brain Mapping , Cerebral Cortex/pathology , Cognition/physiology , Algorithms , Brain Mapping/methods , Cluster Analysis , Connectome , Databases, Factual , Humans , Magnetic Resonance Imaging , Models, Biological , Reproducibility of Results
2.
Chaos ; 11(1): 160-169, 2001 Mar.
Article in English | MEDLINE | ID: mdl-12779450

ABSTRACT

We analyze a class of ordinary differential equations representing a simplified model of a genetic network. In this network, the model genes control the production rates of other genes by a logical function. The dynamics in these equations are represented by a directed graph on an n-dimensional hypercube (n-cube) in which each edge carries a unique orientation. The vertices of the n-cube correspond to orthants of state space, and the edges correspond to boundaries between adjacent orthants. The dynamics can be represented symbolically: starting from a point on the boundary between neighboring orthants, the equation is integrated until that boundary is crossed for a second time. Each distinct cycle, i.e., each distinct sequence of orthants traversed between leaving a boundary and first returning to it, generates a different letter of the alphabet. A word is a sequence of letters corresponding to a possible sequence of orthants arising from integration of the equation starting and ending on the same boundary, and the union of the words defines the language. Letters and words correspond to analytically computable Poincaré maps of the equation. This formalism allows us to define bifurcations of chaotic dynamics of the differential equation that correspond to changes in the associated language. Qualitative knowledge about the dynamics found by integrating the equation can be used to help solve the inverse problem of determining the underlying network generating the dynamics. This work places the study of dynamics in genetic networks in a context comprising both nonlinear dynamics and the theory of computation. (c) 2001 American Institute of Physics.
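
The orthant-to-orthant transitions that generate the letters and words can be simulated exactly, because inside each orthant the flow of such a piecewise-linear (Glass-type) network is a pure exponential relaxation toward a focal point. The following is a minimal sketch of that symbolic bookkeeping for dx/dt = -x + f(s(x)); the two-gene focal-point table is hypothetical, chosen only to produce a cycling orthant sequence.

```python
import numpy as np

def orthant_sequence(focal, x0, n_steps):
    """Record the orthants visited by dx/dt = -x + f(s(x)), where
    s(x) in {0,1}^n is the current orthant and f depends only on s(x).
    Inside an orthant, x_i(t) = f_i + (x_i(0) - f_i) * exp(-t), so the
    first boundary-crossing time is available in closed form."""
    x = np.asarray(x0, dtype=float)
    sequence = []
    for _ in range(n_steps):
        s = tuple(int(v > 0) for v in x)
        sequence.append(s)
        f = np.asarray(focal[s], dtype=float)
        # Coordinate i hits 0 when exp(-t) = f_i / (f_i - x_i), which is
        # possible only when f_i and x_i have opposite signs.
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = f / (f - x)
            t = np.where((ratio > 0) & (ratio < 1), -np.log(ratio), np.inf)
        i = int(np.argmin(t))
        if not np.isfinite(t[i]):
            break  # no wall is reached: trajectory settles at the focal point
        x = f + (x - f) * ratio[i]    # exact position at the crossing time
        x[i] = 1e-12 * np.sign(f[i])  # nudge across the shared boundary
    return sequence

# Hypothetical 2-gene network whose focal points drive a cycle of orthants.
focal = {(0, 0): (1, -1), (1, 0): (1, 1), (1, 1): (-1, 1), (0, 1): (-1, -1)}
print(orthant_sequence(focal, [-0.5, -0.5], 8))
```

Grouping recurring sub-sequences of this orthant list into letters, and letters into words, reproduces the language construction described in the abstract.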

3.
Neural Comput ; 12(10): 2331-53, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11032037

ABSTRACT

This article introduces a method for clustering irregularly shaped data arrangements using high-order neurons. Complex analytical shapes are modeled by replacing the classic synaptic weight of the neuron with high-order tensors in homogeneous coordinates. In the first- and second-order cases, this neuron corresponds to a classic neuron and to an ellipsoidal-metric neuron, respectively. We show how high-order shapes can be formulated to follow the maximum-correlation activation principle and to permit simple local Hebbian learning. We also demonstrate decomposition of spatial arrangements of data clusters, including very close and partially overlapping clusters, which are difficult to distinguish using classic neurons. Superior results are obtained on the Iris data.
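
A rough intuition for the second-order case can be had with ellipsoidal cluster units. Below is a minimal sketch of a simplified batch variant in which each unit keeps a mean and covariance and points are assigned by minimum Mahalanobis distance; the paper's homogeneous-coordinate tensor formulation and Hebbian update are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_iris

X = load_iris().data                      # 150 samples, 4 features
k, d = 3, X.shape[1]
rng = np.random.default_rng(0)
mu = X[rng.choice(len(X), size=k, replace=False)]
cov = np.stack([np.eye(d)] * k)

for _ in range(25):
    # Assign each point to the ellipsoidal unit with the smallest
    # Mahalanobis distance (the second-order analogue of a spherical unit).
    inv = np.stack([np.linalg.inv(c) for c in cov])
    diff = X[:, None, :] - mu[None, :, :]             # (n, k, d)
    dist = np.einsum("nkd,kde,nke->nk", diff, inv, diff)
    label = dist.argmin(axis=1)
    # Re-fit each unit's ellipsoid to its assigned points.
    for j in range(k):
        pts = X[label == j]
        if len(pts) > d:
            mu[j] = pts.mean(axis=0)
            cov[j] = np.cov(pts.T) + 1e-3 * np.eye(d)

print(np.bincount(label, minlength=k))    # cluster sizes
```

Because each unit carries its own metric, elongated and partially overlapping clusters can be separated where a classic (spherical) neuron would merge them.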


Subject(s)
Models, Neurological , Neurons/cytology , Neurons/physiology , Algorithms , Artificial Intelligence , Cell Size/physiology , Synapses/physiology
4.
IEEE Trans Biomed Eng ; 47(6): 822-6, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10833858

ABSTRACT

We present a novel approach to the problem of event-related potential (ERP) identification, based on a competitive artificial neural network (ANN) structure. Our method uses ensembled electroencephalogram (EEG) data just as in conventional averaging, but without the need for a priori subgrouping of the data into distinct categories (e.g., stimulus- or event-related), and thus avoids conventional assumptions of response invariability. The competitive ANN, often described as a winner-take-all neural structure, is based on dynamic competition among the net's neurons, where learning takes place only in the winning neuron. Using a simple single-layered structure, the proposed scheme results in convergence of the actual neural weights to the embedded ERP patterns. The method is applied to real event-related potential data recorded during a common oddball-type paradigm. For the first time, within-session variable signal patterns are automatically identified, dismissing the strong and limiting requirement of a priori stimulus-related selective grouping of the recorded data. The results open new possibilities in ERP research.
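
The learning scheme can be sketched compactly. Assuming a single-layer winner-take-all rule in which each trial updates only the closest unit, the toy code below shows how the weight vectors drift toward the distinct response patterns embedded in the single trials; the synthetic epochs and unit count are placeholders, not the paper's recordings.

```python
import numpy as np

def competitive_erp(epochs, n_units=2, eta=0.05, n_passes=30, seed=0):
    """Winner-take-all learning over single-trial epochs
    (shape: n_trials x n_samples).  Only the winning unit learns, so
    each unit's weights converge to one embedded ERP pattern."""
    rng = np.random.default_rng(seed)
    n_trials, n_samples = epochs.shape
    W = rng.normal(0.0, 0.01, size=(n_units, n_samples))
    for _ in range(n_passes):
        for x in epochs[rng.permutation(n_trials)]:
            winner = np.argmin(np.linalg.norm(W - x, axis=1))
            W[winner] += eta * (x - W[winner])   # learn only the winner
    return W

# Toy data: two latent "ERPs" buried in noise across 200 trials.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 128)
erps = np.stack([np.sin(2 * np.pi * 3 * t),
                 np.exp(-((t - 0.3) ** 2) / 0.01)])
trials = erps[rng.integers(0, 2, 200)] + 0.5 * rng.normal(size=(200, 128))
W = competitive_erp(trials)   # rows approximate the two hidden patterns
```

No trial is ever labeled by stimulus category; the competition itself performs the subgrouping that conventional averaging requires up front.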


Subject(s)
Brain/physiology , Evoked Potentials/physiology , Artifacts , Computer Simulation , Electroencephalography , Humans , Learning/physiology , Models, Neurological , Nerve Net/physiology , Neural Networks, Computer
5.
Neural Comput ; 11(3): 715-46, 1999 Apr 01.
Article in English | MEDLINE | ID: mdl-10085427

ABSTRACT

This article studies the computational power of various discontinuous real computational models based on the classical analog recurrent neural network (ARNN). The ARNN consists of a finite number of neurons, each computing a polynomial net function followed by a sigmoid-like continuous activation function. We introduce arithmetic networks as ARNNs augmented with a few simple discontinuous (e.g., threshold or zero-test) neurons. We argue that even with weights restricted to polynomial-time computable reals, arithmetic networks are able to compute arbitrarily complex recursive functions. We identify many types of neural networks that are at least as powerful as arithmetic nets, some of which are not in fact discontinuous but instead boost other arithmetic operations in the net function (e.g., neurons that can use division, or polynomial net functions inside sigmoid-like continuous activation functions). These arithmetic networks are equivalent to the Blum-Shub-Smale model when the latter is restricted to a bounded number of registers. With respect to implementation on digital computers, we show that arithmetic networks with rational weights can be simulated with exponential precision, but even with polynomial-time computable real weights, arithmetic networks are not subject to any fixed precision bounds. This is in contrast with the ARNN, which is known to demand precision linear in the computation time. When nontrivial periodic functions (e.g., fractional part, sine, tangent) are added to arithmetic networks, the resulting networks are computationally equivalent to a massively parallel machine. Thus, these highly discontinuous networks can solve the presumably intractable class of PSPACE-complete problems in polynomial time.
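
One update step of such a network is easy to write down. The sketch below assumes a saturated-linear function as the "sigmoid-like" activation, a degree-2 polynomial net function, and a single zero-test neuron; all sizes and weights are illustrative only, not a construction from the paper.

```python
import numpy as np

def sat(z):
    """Saturated-linear, sigmoid-like continuous activation."""
    return np.clip(z, 0.0, 1.0)

def arithmetic_net_step(x, u, W1, W2, b, test_idx):
    """One step of an ARNN with a degree-2 polynomial net function,
    augmented with one discontinuous zero-test neuron."""
    quad = np.outer(x, x).ravel()        # degree-2 monomials x_i * x_j
    x_next = sat(W1 @ x + W2 @ quad + b + u)
    zero_test = 1.0 if x[test_idx] == 0.0 else 0.0   # the discontinuous unit
    return x_next, zero_test
```

The single zero-test output is exactly the kind of discontinuous side channel that lifts the continuous ARNN into the arithmetic-network class discussed above.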


Subject(s)
Algorithms , Neural Networks, Computer , Computer Simulation , Language , Models, Statistical , Reproducibility of Results
6.
Neural Netw ; 12(4-5): 593-600, 1999 Jun.
Article in English | MEDLINE | ID: mdl-12662670

ABSTRACT

In a previous work, Pollack showed that a particular type of heterogeneous processor network is Turing universal. Siegelmann and Sontag (1991) showed the universality of homogeneous networks of first-order neurons with piecewise-linear activation functions. Their result was generalized by Kilian and Siegelmann (1996) to include various sigmoidal activation functions. Here we focus on a type of high-order neuron called the switch-affine neuron, with piecewise-linear activation functions, and prove that nine such neurons suffice to simulate a universal Turing machine.
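
The abstract does not spell the neuron out, so the sketch below is one plausible reading made for illustration: a high-order unit whose net function is a family of affine maps gated by binary-valued "switch" inputs, passed through a piecewise-linear saturation. Treat this interpretation as an assumption, not the paper's definition.

```python
import numpy as np

def piecewise_linear(z):
    """Saturated-linear activation: 0 below 0, identity on [0, 1], 1 above."""
    return np.clip(z, 0.0, 1.0)

def switch_affine_neuron(x, switches, A, b):
    """Assumed form of a switch-affine unit: binary switch values s_j
    gate affine maps, giving sigma(sum_j s_j * (A_j @ x + b_j))."""
    z = sum(s * (A_j @ x + b_j) for s, A_j, b_j in zip(switches, A, b))
    return piecewise_linear(z)

x = np.array([0.3, 0.7])
A = [np.array([1.0, -1.0]), np.array([0.5, 0.5])]
b = [0.2, -0.1]
print(switch_affine_neuron(x, switches=[1, 0], A=A, b=b))  # sat(-0.2) -> 0.0
```

The product of a binary switch with an affine term is what makes the unit "high-order": it multiplies neuron values rather than merely summing them.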

7.
Article in English | MEDLINE | ID: mdl-18255858

ABSTRACT

Recently, fully connected recurrent neural networks have been proven to be computationally rich: at least as powerful as Turing machines. This work focuses on another class of network that is popular in control applications and has been found to be very effective at learning a variety of problems. These networks are based upon Nonlinear AutoRegressive models with eXogenous Inputs (NARX models) and are therefore called NARX networks. As opposed to other recurrent networks, NARX networks have limited feedback, which comes only from the output neuron rather than from hidden states. They are formalized by

y(t) = Psi(u(t - n_u), ..., u(t - 1), u(t), y(t - n_y), ..., y(t - 1)),

where u(t) and y(t) are the input and output of the network at time t, n_u and n_y are the input and output orders, and the function Psi is the mapping performed by a Multilayer Perceptron. We constructively prove that NARX networks with a finite number of parameters are computationally as strong as fully connected recurrent networks, and thus as Turing machines. We conclude that, in theory, one can use NARX models rather than conventional recurrent networks without any computational loss, even though their feedback is limited. Furthermore, these results raise the issue of how much feedback or recurrence is necessary for any network to be Turing equivalent, and of which restrictions on feedback limit computational power.
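
The recursion is straightforward to implement with tapped delay lines. Below is a minimal sketch, assuming Psi is a one-hidden-layer perceptron with tanh units; the orders, layer sizes, and random weights are placeholders.

```python
import numpy as np
from collections import deque

class NARX:
    """y(t) = Psi(u(t-nu), ..., u(t), y(t-ny), ..., y(t-1)), with Psi a
    one-hidden-layer MLP.  Feedback comes only from the output taps,
    never from hidden states."""
    def __init__(self, nu=2, ny=2, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        n_in = (nu + 1) + ny                     # u-taps plus y-taps
        self.W1 = rng.normal(0, 0.5, (hidden, n_in))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.5, hidden)
        self.b2 = 0.0
        self.u_taps = deque([0.0] * (nu + 1), maxlen=nu + 1)
        self.y_taps = deque([0.0] * ny, maxlen=ny)

    def step(self, u_t):
        self.u_taps.append(u_t)                  # now holds u(t-nu)..u(t)
        z = np.array(list(self.u_taps) + list(self.y_taps))
        y_t = self.W2 @ np.tanh(self.W1 @ z + self.b1) + self.b2
        self.y_taps.append(y_t)                  # feeds back at the next step
        return y_t

net = NARX()
ys = [net.step(u) for u in np.sin(np.linspace(0, 6, 50))]
```

All recurrence flows through the visible output buffer `y_taps`, which is precisely the limited-feedback structure whose Turing equivalence the paper establishes.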

8.
Science ; 268(5210): 545-8, 1995 Apr 28.
Article in English | MEDLINE | ID: mdl-17756722

ABSTRACT

Extensive efforts have been made to prove the Church-Turing thesis, which suggests that all realizable dynamical and physical systems cannot be more powerful than classical models of computation. Presented here is a simply described but highly chaotic dynamical system called the analog shift map, which has computational power beyond the Turing limit (super-Turing); it computes exactly like neural networks and analog machines. This dynamical system is conjectured to describe natural physical phenomena.
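
A finite-precision flavor of the system can be conveyed with a generalized shift acting on a dotted binary sequence; in the analog shift the substituted string may be infinite, which a simulation can only truncate. The rule table below is hypothetical and purely illustrative.

```python
def shift_step(left, right, rules, k=2):
    """One step of a generalized shift on the dotted sequence
    ...left[1] left[0] . right[0] right[1]...  The k symbols to the
    right of the dot select a substitution and a dot move; in the
    *analog* shift the substituted string may be infinite (here it is
    necessarily a finite truncation)."""
    window = right[:k]
    move, subst = rules.get(window, (0, window))
    right = subst + right[k:]                  # substitute over the window
    if move == 1:                              # dot moves one place right
        left = right[0] + left
        right = right[1:]
    elif move == -1 and left:                  # dot moves one place left
        right = left[0] + right                # (a true sequence never
        left = left[1:]                        #  runs out of symbols)
    return left, right

# Hypothetical rule table: window -> (dot move, replacement string).
rules = {"00": (1, "01"), "01": (-1, "10"), "10": (1, "11"), "11": (0, "00")}
left, right = "0000", "010011010"
for _ in range(5):
    left, right = shift_step(left, right, rules)
```

Allowing the replacement string to be an infinite (e.g., irrational-number) expansion is what pushes the map from Turing-equivalent generalized shifts into the super-Turing regime.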

9.
IEEE Trans Neural Netw ; 6(6): 1490-504, 1995.
Article in English | MEDLINE | ID: mdl-18263442

ABSTRACT

This work deals with computational issues of loading a fixed-architecture neural network with a set of positive and negative examples. It is the first result on the hardness of loading a simple three-node architecture that does not consist of binary-threshold neurons but rather uses a particular continuous activation function commonly found in the neural-network literature. The authors observe that the loading problem is solvable in polynomial time if the input dimension is constant. Otherwise, however, any possible learning algorithm based on particular fixed architectures faces severe computational barriers. Similar theorems had already been proved by Megiddo and by Blum and Rivest, but for the case of binary-threshold networks only. The authors' theoretical results lend further support to the use of incremental (architecture-changing) techniques for training networks, rather than fixed architectures. Furthermore, they imply hardness of learnability in the probably-approximately-correct sense as well.
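
For concreteness, the object of the hardness result can be written out. Below is a minimal sketch of a three-node architecture, two hidden units feeding one output, using the logistic function as an example of a continuous activation; the paper's specific activation and construction are not reproduced.

```python
import numpy as np

def sigma(z):
    """A continuous activation; the logistic function as an example."""
    return 1.0 / (1.0 + np.exp(-z))

def three_node(x, w1, t1, w2, t2, a, b, c):
    """Two hidden nodes feeding a single output node."""
    h1 = sigma(w1 @ x - t1)
    h2 = sigma(w2 @ x - t2)
    return sigma(a * h1 + b * h2 - c)

# "Loading": given labeled examples (x, y), find weights so the output
# is > 1/2 on positives and < 1/2 on negatives.  The result above says
# this is computationally hard as the input dimension grows, yet
# polynomial-time when the dimension is held constant.
```

Even with all weights continuous, deciding whether any setting of these few parameters fits a sample is where the computational barrier arises.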
