Results 1 - 6 of 6
1.
Biophys Rev; 15(4): 767-785, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37681105

ABSTRACT

Explaining the foundations of cognitive abilities in terms of information processing by neural systems has been part of biophysics since the pioneering work of McCulloch and Pitts within the Chicago school of biophysics in the 1940s and the interdisciplinary cybernetics meetings of the 1950s, inseparable from the birth of computing and artificial intelligence. Since then, neural network models have traveled a long path in both the biophysical and the computational disciplines. The biological, neurocomputational aspect reached its representational maturity with the Distributed Associative Memory models developed in the early 1970s. In this framework, the inclusion of signal-signal multiplication within neural network models was presented as a necessity to provide matrix associative memories with adaptive, context-sensitive associations, while greatly enhancing their computational capabilities. In this review, we show that several of the most successful neural network models use some form of multiplication of signals. We present several classical models that include this kind of multiplication, together with the computational reasons for its inclusion. We then turn to the different proposals about the possible biophysical implementations that underlie these computational capacities. We pinpoint the important ideas put forth by different theoretical models using a tensor product representation and show that these models endow memories with the context-dependent adaptive capabilities necessary for evolutionary adaptation to changing and unpredictable environments. Finally, we show how the powerful abilities of contemporary deep-learning models, inspired by neural networks, also depend on multiplications, and we discuss some perspectives in view of the wide panorama that has unfolded. The computational relevance of multiplication calls for new avenues of research to uncover the mechanisms our nervous system uses to achieve it.
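
To make the multiplicative mechanism concrete, here is a minimal numpy sketch of a context-dependent matrix memory in which key and context are combined through a Kronecker (tensor) product, so the same key retrieves different outputs under different contexts. The dimensions, the orthonormal contexts, and the toy patterns are illustrative assumptions, not any specific model from the review.

    import numpy as np

    rng = np.random.default_rng(0)
    key = rng.normal(size=8)
    key /= np.linalg.norm(key)              # cue pattern, unit norm

    ctx_a, ctx_b = np.eye(2)                # two orthonormal context vectors
    out_a = rng.normal(size=8); out_a /= np.linalg.norm(out_a)
    out_b = rng.normal(size=8); out_b /= np.linalg.norm(out_b)

    # Hebbian storage of output x (key kron context): the multiplicative
    # combination of key and context is what makes the association
    # context-sensitive.
    M = (np.outer(out_a, np.kron(key, ctx_a)) +
         np.outer(out_b, np.kron(key, ctx_b)))

    recall_a = M @ np.kron(key, ctx_a)      # equals out_a (contexts orthonormal)
    recall_b = M @ np.kron(key, ctx_b)      # equals out_b
    print(np.allclose(recall_a, out_a), np.allclose(recall_b, out_b))  # True True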

2.
Healthcare (Basel); 11(11), 2023 May 30.
Article in English | MEDLINE | ID: mdl-37297740

ABSTRACT

Parkinson's disease (PD) is a chronic, progressive neurological condition that is challenging to diagnose. An accurate diagnosis is required to distinguish PD patients from healthy individuals, and diagnosing PD at an early stage can reduce the severity of the disorder and improve the patient's living conditions. Algorithms based on associative memory (AM) have been applied to PD diagnosis using voice samples of patients with this condition. Even though AM models have achieved competitive results in PD classification, they lack an embedded component that can identify and remove irrelevant features, which would consequently improve classification performance. In this paper, we present an improvement to the smallest normalized difference associative memory (SNDAM) algorithm, called ISNDAM, which adds a learning reinforcement phase that improves the classification performance of SNDAM when applied to PD diagnosis. For the experimental phase, two datasets that have been widely used for PD diagnosis were employed. Both consist of voice samples from healthy people and from patients at an early stage of PD, and both are publicly accessible in the UCI Machine Learning Repository. The efficiency of ISNDAM was contrasted with that of seventy other models implemented in the WEKA workbench and compared with the performance reported in previous studies, and a statistical significance analysis was performed to verify that the performance differences between the compared models were statistically significant. The experimental findings allow us to affirm that the proposed improvement effectively increases classification performance compared with well-known algorithms. On Dataset 1, ISNDAM achieves a classification accuracy of 99.48%, followed by ANN Levenberg-Marquardt with 95.89% and SVM RBF kernel with 88.21%. On Dataset 2, ISNDAM achieves a classification accuracy of 99.66%, followed by SVM IMF1 with 96.54% and RF IMF1 with 94.89%. The findings show that ISNDAM achieves competitive performance on both datasets, and the statistical significance tests confirm that its classification performance is equivalent to that of models published in previous studies.
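
To give a feel for difference-based associative classification, here is a hedged numpy sketch: a test vector is assigned to the class whose stored pattern yields the smallest normalized difference. The normalized-difference formula, the per-class mean patterns, and the toy data are illustrative reconstructions, not the authors' SNDAM or ISNDAM, and the learning reinforcement phase of ISNDAM is omitted.

    import numpy as np

    def fit(X, y):
        """Store one representative (mean) pattern per class."""
        classes = np.unique(y)
        patterns = np.array([X[y == c].mean(axis=0) for c in classes])
        return classes, patterns

    def normalized_diff(x, p, eps=1e-12):
        """Per-feature absolute difference, scaled to be dimensionless."""
        return np.sum(np.abs(x - p) / (np.abs(x) + np.abs(p) + eps))

    def predict(x, classes, patterns):
        """Recall: pick the class whose pattern gives the smallest difference."""
        d = [normalized_diff(x, p) for p in patterns]
        return classes[int(np.argmin(d))]

    # Toy usage with made-up two-feature 'voice' measurements.
    X = np.array([[1.0, 5.0], [1.2, 4.8], [3.0, 1.0], [2.8, 1.2]])
    y = np.array([0, 0, 1, 1])          # 0 = healthy, 1 = PD (illustrative)
    classes, patterns = fit(X, y)
    print(predict(np.array([1.1, 5.1]), classes, patterns))   # -> 0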

3.
Sensors (Basel); 18(8), 2018 Aug 16.
Article in English | MEDLINE | ID: mdl-30115832

ABSTRACT

The rapid proliferation of connectivity, the availability of ubiquitous computing, and the miniaturization of sensors and communication technology have changed every area of healthcare, creating the well-known e-Health paradigm. In this paper, an embedded system capable of monitoring, learning from, and classifying biometric signals is presented. The machine learning model is based on associative memories and predicts the presence or absence of coronary artery disease in patients. Classification accuracy, sensitivity, and specificity results show that the performance of our proposal exceeds that of each of the fifty widely known algorithms against which it was compared.
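
As a quick reference for the three reported metrics, this minimal sketch computes them from binary predictions (disease present = positive); the label arrays are made up for illustration, not the paper's data.

    import numpy as np

    y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # illustrative ground truth
    y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # illustrative predictions

    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))

    accuracy    = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased correctly flagged
    specificity = tn / (tn + fp)   # true-negative rate: healthy correctly cleared
    print(accuracy, sensitivity, specificity)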


Subjects
Algorithms; Biometry/methods; Clinical Decision-Making; Coronary Artery Disease/diagnosis; Machine Learning; Datasets as Topic; Female; Humans; Male; Sensitivity and Specificity
4.
Diagnosis (Berl); 4(4): 251-259, 2017 Nov 27.
Article in English | MEDLINE | ID: mdl-29536941

ABSTRACT

BACKGROUND: One of the central challenges of third-millennium medicine is the abatement of medical errors. Among the most frequent and most persistent causes of misdiagnosis are cognitive errors produced by faulty medical reasoning, which have been analyzed from the perspectives of cognitive psychology and empirical medical studies. We introduce a neurocognitive model of medical diagnosis to address this issue.

METHODS: We construct a connectionist model, based on the associative nature of human memory, to explore the non-analytical, pattern-recognition mode of diagnosis. A context-dependent matrix memory associates signs and symptoms with their corresponding diseases. The weights of these associations depend on the frequency of occurrence of each disease and on the different combinations of signs and symptoms in each presentation of that disease. The system receives signs and symptoms and, through a second input, the degree of diagnostic uncertainty; its output is a probabilistic map on the set of possible diseases.

RESULTS: The model reproduces several well-known kinds of cognitive error in diagnosis. Errors in the model come from two sources. One, dependent on the knowledge stored in memory, varies with the accumulated experience of the physician and explains age-dependent errors and effects such as epidemiological masking. The other is independent of experience and explains contextual effects such as anchoring.

CONCLUSIONS: Our results strongly suggest that cognitive biases are inevitable consequences of associative storage and recall. This model provides valuable insight into the mechanisms of cognitive error, and we hope it will prove useful in medical education.
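
A hedged numpy sketch of the kind of memory described: a frequency-weighted matrix maps a sign/symptom vector to a probabilistic map over diseases. The symptom and disease sets, the presentation patterns, and the priors are invented for illustration, and the second input coding diagnostic uncertainty is omitted.

    import numpy as np

    symptoms = ["fever", "cough", "rash", "headache"]
    diseases = ["flu", "measles", "migraine"]

    # Presentation patterns: rows = diseases, columns = symptoms (0/1).
    P = np.array([
        [1, 1, 0, 1],   # flu
        [1, 0, 1, 0],   # measles
        [0, 0, 0, 1],   # migraine
    ], dtype=float)

    prior = np.array([0.7, 0.05, 0.25])     # illustrative disease frequencies
    M = prior[:, None] * P                  # frequency-weighted memory matrix

    x = np.array([1, 0, 0, 1], dtype=float) # observed: fever + headache
    activation = M @ x
    prob = activation / activation.sum()    # probabilistic map on diseases
    for d, p in zip(diseases, prob):
        print(f"{d}: {p:.2f}")              # flu dominates: frequent + good match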


Subjects
Cognition; Diagnostic Errors/psychology; Medical Errors/prevention & control; Memory/physiology; Models, Theoretical; Humans; Neural Networks, Computer; Physicians/psychology; Problem Solving
5.
Bull Math Biol; 78(9): 1847-1865, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27651155

ABSTRACT

Every cognitive activity has a neural representation in the brain. When humans deal with abstract mathematical structures, for instance finite groups, certain patterns of activity occur in the brain that constitute their neural representation. A formal neurocognitive theory must account for all the activities developed by our brain and provide a possible neural representation for them. Associative memories are neural network models that have a good chance of achieving a universal representation of cognitive phenomena. In this work, we present a possible neural representation of mathematical group structures based on associative memory models that store finite groups through their Cayley graphs. A context-dependent associative memory stores the transitions between elements of the group when multiplied by each generator of a given presentation of the group. Under a convenient choice of the vector basis mapping the elements of the group onto the neural activity, the input of a vector corresponding to a generator collapses the context-dependent rectangular matrix into a virtual square permutation matrix that is the matrix representation of that generator. This neural representation corresponds to the regular representation of the group, in which each element is assigned a permutation matrix. The action of a generator on the memory matrix can also be seen as the dissection of the corresponding monochromatic subgraph of the Cayley graph of the group, whose adjacency matrix is the permutation matrix corresponding to the generator.
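
A minimal numpy sketch of the collapse described above, using the cyclic group Z4 with two generators g: a -> a+1 and h = g^-1: a -> a-1 (the choice of group, the one-hot basis, and the orthonormal context vectors are illustrative assumptions): feeding a generator's context collapses the rectangular memory into that generator's permutation matrix in the regular representation.

    import numpy as np

    n = 4
    I = np.eye(n)                       # one-hot basis for the elements of Z4
    cg, ch = np.eye(2)                  # orthonormal contexts for generators g, h

    # Context-dependent memory: for each element a, store the Cayley-graph
    # transition (image of a) x (a kron generator-context).
    M = sum(np.outer(I[:, (a + 1) % n], np.kron(I[:, a], cg)) +
            np.outer(I[:, (a - 1) % n], np.kron(I[:, a], ch))
            for a in range(n))          # rectangular 4 x 8 memory

    def collapse(M, ctx):
        """Feed one generator's context: recover its n x n permutation matrix."""
        return np.column_stack([M @ np.kron(I[:, a], ctx) for a in range(n)])

    print(collapse(M, cg))              # cyclic +1 shift: regular rep of g
    print(collapse(M, ch))              # cyclic -1 shift: regular rep of h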


Subjects
Models, Neurological; Neural Networks, Computer; Brain/physiology; Cognition; Computer Simulation; Humans; Mathematical Concepts; Memory; Models, Psychological
6.
Cogn Neurodyn; 9(5): 523-534, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26379802

ABSTRACT

We organize our behavior and store structured information through many procedures that require the coding of spatial and temporal order in specific neural modules. In the simplest cases, spatial and temporal relations are condensed into prepositions such as "below" and "above", "behind" and "in front of", or "before" and "after". Neural operators lie beneath these words, sharing some similarities with logical gates, and compute asymmetric spatial and temporal relations. We show how these operators can be modeled by means of neural matrix memories acting on Kronecker tensor products of vectors. The power of these memories is further enhanced by their ability to store episodes unfolding in space and time. How does the brain scale up from the raw plasticity of contingent episodic memories to the apparently stable connectivity of large neural networks? We clarify this transition by analyzing a model that flexibly codes episodic spatial and temporal structures into contextual markers capable of linking different memory modules.
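
As a toy instance of such an asymmetric order operator, this numpy sketch stores "A before B" in a matrix memory acting on Kronecker products, so that probing with the two orderings of the pair gives opposite answers. The vectors and the yes/no readout are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    def unit(n):
        v = rng.normal(size=n)
        return v / np.linalg.norm(v)

    d = 16
    A, B = unit(d), unit(d)                       # two event representations
    yes, no = np.array([1.0, 0.0]), np.array([0.0, 1.0])

    # The Kronecker product kron(x, y) is order-sensitive, so the memory can
    # bind "A before B" to 'yes' and "B before A" to 'no'.
    M = np.outer(yes, np.kron(A, B)) + np.outer(no, np.kron(B, A))

    def before(x, y):
        """Read out the (approximate) truth value of 'x before y'."""
        r = M @ np.kron(x, y)
        return "yes" if r[0] > r[1] else "no"

    print(before(A, B))   # yes
    print(before(B, A))   # no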
