Results 1 - 2 of 2
1.
Biomed Phys Eng Express. 2021 Oct 5;7(6).
Article in English | MEDLINE | ID: mdl-34551397

ABSTRACT

Objective. Extraction of temporal features of neuronal activity from electrophysiological data can be used for accurate classification of neural networks in healthy and pathologically perturbed conditions. In this study, we provide an extensive approach for the classification of human in vitro neural networks with and without an underlying pathology, from electrophysiological recordings obtained using a microelectrode array (MEA) platform. Approach. We developed a Dirichlet mixture (DM) point process statistical model able to extract temporal features related to neurons. We then applied a machine learning algorithm to discriminate between healthy control and pathologically perturbed in vitro neural networks. Main Results. We found a high degree of separability between the classes using DM point process features (p-value < 0.001 for all features, paired t-test), reaching 93.10% accuracy (92.37% ROC AUC) with a Random Forest classifier. In particular, the results show a higher latency in firing for pathologically perturbed neurons (43 ± 16 ms versus 67 ± 31 ms, µIG feature distribution). Significance. Our approach has been successful in extracting temporal features related to the neurons' behaviour and in distinguishing healthy from pathologically perturbed networks, including classification of responses to a transient induced perturbation.
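
The sketch below is a minimal illustration, not the authors' code, of the kind of pipeline the abstract describes: per-neuron temporal features are extracted from spike times (here a simple inverse-Gaussian fit to inter-spike intervals stands in for the DM point-process features, giving a µIG-like mean parameter) and fed to a Random Forest classifier scored with ROC AUC. All data, function names, and parameter values are illustrative assumptions.

```python
# Minimal sketch of the described pipeline, assuming spike-time data per network.
# The paper's Dirichlet-mixture point-process features are replaced by a plain
# inverse-Gaussian (IG) fit to inter-spike intervals; everything here is synthetic.
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ig_features(spike_times_s):
    """Fit an inverse-Gaussian to inter-spike intervals; return (mean ISI, scale)."""
    isi = np.diff(np.sort(spike_times_s))
    isi = isi[isi > 0]
    shape, loc, scale = stats.invgauss.fit(isi, floc=0)
    return np.array([shape * scale, scale])  # shape*scale ~ mean ISI (latency proxy)

rng = np.random.default_rng(0)
def fake_network(mean_isi_s):
    # Exponential ISIs stand in for a recorded MEA spike train.
    return ig_features(np.cumsum(rng.exponential(mean_isi_s, size=500)))

# X: one feature vector per network; y: 0 = healthy control, 1 = perturbed.
X = np.array([fake_network(0.043) for _ in range(30)] +   # ~43 ms mean ISI
             [fake_network(0.067) for _ in range(30)])    # ~67 ms mean ISI
y = np.array([0] * 30 + [1] * 30)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV ROC AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```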


Subject(s)
Neural Networks, Computer; Supervised Machine Learning; Algorithms; Bayes Theorem; Machine Learning
2.
Front Comput Neurosci. 2021;15:611183.
Article in English | MEDLINE | ID: mdl-33643017

ABSTRACT

It has been hypothesized that the brain optimizes its capacity for computation by self-organizing to a critical point. The dynamical state of criticality is achieved by striking a balance such that activity can effectively spread through the network without overwhelming it, and it is commonly identified in neuronal networks by observing the behavior of cascades of network activity termed "neuronal avalanches." The dynamic activity that occurs in neuronal networks is closely intertwined with how the elements of the network are connected and how they influence each other's functional activity. In this review, we highlight how studying criticality with a broad perspective that integrates concepts from physics, experimental and theoretical neuroscience, and computer science can provide a greater understanding of the mechanisms that drive networks to criticality and how their disruption may manifest in different disorders. First, integrating graph theory into experimental studies on criticality, as is becoming more common in theoretical and modeling studies, would provide insight into the kinds of network structures that support criticality in networks of biological neurons. Furthermore, plasticity mechanisms play a crucial role in shaping these neural structures, both in terms of homeostatic maintenance and learning. Both network structures and plasticity have been studied fairly extensively in theoretical models, but much work remains to bridge the gap between theoretical and experimental findings. Finally, information-theoretic approaches can provide more concrete evidence of a network's computational capabilities. Approaching neural dynamics with all these facets in mind has the potential to provide a greater understanding of what goes wrong in neural disorders. Criticality analysis therefore holds potential to identify disruptions to healthy dynamics, provided that robust methods and approaches are used.
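
As a concrete illustration of the avalanche analysis this review refers to (not code from the review itself), the sketch below bins network spike counts, segments avalanches as runs of consecutive non-empty bins bounded by empty bins, and takes a rough log-log slope of the size distribution; the synthetic data, bin width, and slope estimate are assumptions for demonstration only.

```python
# Illustrative avalanche-size analysis on synthetic data, assuming a binned
# population spike-count signal from the whole network.
import numpy as np

def avalanche_sizes(spike_counts):
    """spike_counts: 1-D array of spikes per time bin summed over the network."""
    sizes, current = [], 0
    for c in spike_counts:
        if c > 0:
            current += c
        elif current > 0:        # an empty bin closes the current avalanche
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.asarray(sizes)

# Poisson activity stands in for a recorded MEA raster binned at some fixed width.
rng = np.random.default_rng(1)
counts = rng.poisson(0.9, size=100_000)
sizes = avalanche_sizes(counts)

# Crude log-log slope of the size histogram as a first-pass exponent estimate;
# near-critical networks are typically reported with exponents close to -1.5.
# Dedicated maximum-likelihood fits are preferred in practice.
hist, edges = np.histogram(sizes, bins=np.arange(1, sizes.max() + 2))
nz = hist > 0
slope, _ = np.polyfit(np.log(edges[:-1][nz]), np.log(hist[nz]), 1)
print(f"{len(sizes)} avalanches, log-log slope ≈ {slope:.2f}")
```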
