Results 1 - 2 of 2
1.
Front Hum Neurosci ; 18: 1319574, 2024.
Article in English | MEDLINE | ID: mdl-38545515

ABSTRACT

Within the humanities, there is a recognized need for educational innovation, as no reported tools currently enable individuals to interact with their environment to create an enhanced learning experience (e.g., immersive spaces). This project proposes a solution to address this gap by integrating technology and promoting the development of teaching methodologies in the humanities, specifically by incorporating emotional monitoring during learning in a humanistic context inside an immersive space. To achieve this goal, a real-time EEG-based emotion recognition system was developed to interpret and classify specific emotions. These emotions align with Descartes' early proposal of the passions: admiration, love, hate, desire, joy, and sadness. The system is intended to feed emotional data into the Neurohumanities Lab interactive platform, creating a comprehensive and immersive learning environment. This work developed a machine-learning, real-time emotion recognition model that provides Valence, Arousal, and Dominance (VAD) estimates every 5 seconds. Using principal component analysis (PCA), power spectral density (PSD), Random Forest (RF), and Extra-Trees, the best 8 channels and their respective best band powers were selected; multiple models were then evaluated using shift-based data division and cross-validation. After assessing their performance, Extra-Trees achieved a general accuracy of 94%, higher than that reported in the literature (88%). The proposed model provides real-time predictions of the VAD variables and was adapted to classify Descartes' six main passions. Moreover, with the VAD values obtained, more than 15 emotions can be classified (as reported in VAD emotion mappings), extending the range of this application.
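The pipeline described in the abstract (PSD band-power features from 8 selected channels, classified on 5-second windows with Extra-Trees) can be illustrated with a minimal sketch. The sampling rate, band definitions, and synthetic data below are assumptions for illustration, not the paper's actual configuration or dataset.

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import ExtraTreesClassifier

FS = 128           # sampling rate (Hz); assumed, not from the paper
WINDOW_S = 5       # the abstract's 5-second prediction window
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg_window, fs=FS):
    """Mean PSD per canonical band for each channel (channels x samples)."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=fs * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)       # one feature per channel per band

# Synthetic stand-in for 5-second EEG windows from the 8 selected channels
rng = np.random.default_rng(0)
n_channels, n_windows = 8, 120
X = np.stack([band_powers(rng.standard_normal((n_channels, FS * WINDOW_S)))
              for _ in range(n_windows)])
y = rng.integers(0, 2, n_windows)      # placeholder high/low valence labels

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print(X.shape)                         # (n_windows, n_channels * n_bands)
```

In a real deployment, each incoming 5-second window would be passed through `band_powers` and `clf.predict` to produce one VAD estimate per window; separate classifiers (or a multi-output regressor) would be needed for valence, arousal, and dominance.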

2.
Comput Intell Neurosci ; 2019: 9374802, 2019.
Article in English | MEDLINE | ID: mdl-31885534

ABSTRACT

In this paper, we evaluate a semiautonomous brain-computer interface (BCI) for manipulation tasks. In such a system, the user controls a robotic arm through motor imagery commands. In traditional process-control BCI systems, the user has to provide those commands continuously in order to move the robot's effector step by step, which makes even simple tasks, such as picking an item up from a surface and placing it elsewhere, tiresome. Here, we take a semiautonomous approach based on a conformal geometric algebra model that solves the inverse kinematics of the robot on the fly, so the user only has to decide on the start of the movement and the final position of the effector (a goal-selection approach). Under these conditions, we implemented pick-and-place tasks with a disk as the item and two target areas placed on the table at arbitrary positions. An artificial vision (AV) algorithm obtained the positions of the items, expressed in the robot frame, from images captured with a webcam; the AV output was then fed into the inverse kinematics model to perform the manipulation tasks. As a proof of concept, different users were trained to control the pick-and-place tasks through both the process-control and the semiautonomous goal-selection approaches so that the performance of the two schemes could be compared. Our results show that the semiautonomous approach performs better and produces evidence of less mental fatigue.
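The paper solves the robot's inverse kinematics with a conformal geometric algebra model; as a much-simplified illustration of the goal-selection idea (the user supplies only a target position and the system computes the joint solution), here is a minimal analytic inverse-kinematics sketch for a planar 2-link arm. The link lengths and arm geometry are assumptions for illustration, not the paper's robot.

```python
import numpy as np

L1, L2 = 0.3, 0.25   # link lengths (m); hypothetical values

def ik_2link(x, y):
    """Analytic inverse kinematics for a planar 2-link arm (one elbow branch)."""
    c2 = (x * x + y * y - L1**2 - L2**2) / (2 * L1 * L2)
    c2 = np.clip(c2, -1.0, 1.0)          # guard against round-off at the workspace edge
    q2 = np.arccos(c2)
    q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
    return q1, q2

def fk_2link(q1, q2):
    """Forward kinematics: effector position from joint angles."""
    return (L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
            L1 * np.sin(q1) + L2 * np.sin(q1 + q2))

goal = (0.35, 0.20)                      # final position selected by the BCI user
q1, q2 = ik_2link(*goal)
print(np.allclose(fk_2link(q1, q2), goal, atol=1e-6))   # prints True
```

In the goal-selection scheme, this computation replaces the continuous stream of motor-imagery commands: once the BCI decodes the chosen target, the joint trajectory is planned and executed autonomously.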


Subjects
Artificial Intelligence , Brain-Computer Interfaces , Robotics/methods , Biomechanical Phenomena , Brain/physiology , Electroencephalography/methods , Event-Related Potentials, P300 , Female , Goals , Humans , Imagination/physiology , Male , Mental Fatigue/etiology , Models, Theoretical , Motor Activity/physiology , Proof of Concept Study , Signal Processing, Computer-Assisted , Young Adult