Results 1 - 20 of 3,753
1.
Cogn Sci ; 48(9): e13484, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228272

ABSTRACT

When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory, hypothesis-generating study of descriptions produced by an ethnolinguistic community little known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline are placed on their right side, when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonal directions of the sagittal axis. We show that these diagonals are also found in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication and thereby provide further insights into the diversity of social structures.


Subject(s)
Cognition, Communication, Family, Gestures, Humans, Male, Female, Family/psychology, Adult, Speech, Middle Aged
2.
J Acoust Soc Am ; 156(3): 1720-1733, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39283150

ABSTRACT

Previous research has shown that prosodic structure can regulate the relationship between co-speech gestures and speech itself. Most co-speech studies have focused on manual gestures, but head movements have also been observed to accompany speech events by Munhall, Jones, Callan, Kuratate, and Vatikiotis-Bateson [(2004). Psychol. Sci. 15(2), 133-137], and these co-verbal gestures may be linked to prosodic prominence, as shown by Esteve-Gibert, Borrás-Comes, Asor, Swerts, and Prieto [(2017). J. Acoust. Soc. Am. 141(6), 4727-4739], Hadar, Steiner, Grant, and Rose [(1984). Hum. Mov. Sci. 3, 237-245], and House, Beskow, and Granström [(2001). Lang. Speech 26(2), 117-129]. This study examines how the timing and magnitude of head nods may be related to degrees of prosodic prominence connected to different focus conditions. Using electromagnetic articulometry, a time-varying signal of vertical head movement for 12 native French speakers was generated to examine the relationship between head nod gestures and F0 peaks. The results suggest that speakers use two different alignment strategies, which integrate both temporal and magnitudinal aspects of the gesture. Some evidence of inter-speaker preferences in the use of the two strategies was observed, although the inter-speaker variability is not categorical. Importantly, prosodic prominence itself is not the cause of the difference between the two strategies, but instead magnifies their inherent differences. In this way, the use of co-speech head nod gestures under French focus conditions can be considered as a method of prosodic enhancement.
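
As a concrete illustration of the alignment analysis described above, the following is a minimal sketch of measuring the lag between head-nod troughs and F0 peaks on synthetic signals; the sampling rate, toy contours, and prominence thresholds are assumptions for illustration, not the study's articulometry pipeline.

```python
# Sketch: aligning head-nod troughs with F0 peaks (illustrative, synthetic data).
# Assumes two co-registered 1-D signals at the same rate; real EMA and pitch
# tracking would supply these arrays.
import numpy as np
from scipy.signal import find_peaks

fs = 200.0                                            # sampling rate (Hz), assumed
t = np.arange(0, 5, 1 / fs)
f0 = 120 + 30 * np.exp(-((t - 2.0) ** 2) / 0.05)      # toy F0 contour, one peak
head_z = -np.exp(-((t - 2.04) ** 2) / 0.04)           # toy vertical head position (nod = dip)

f0_peaks, _ = find_peaks(f0, prominence=10)
nod_troughs, _ = find_peaks(-head_z, prominence=0.5)  # nods are local minima

# Temporal lag of each nod relative to the nearest F0 peak (negative = nod leads)
for trough in nod_troughs:
    nearest = f0_peaks[np.argmin(np.abs(f0_peaks - trough))]
    print(f"nod-F0 lag: {(trough - nearest) / fs * 1000:+.1f} ms")
```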


Subject(s)
Head Movements, Speech Acoustics, Humans, Male, Female, Young Adult, Adult, Speech Production Measurement/methods, Time Factors, Gestures, Voice Quality, France, Language
3.
Sci Rep ; 14(1): 20247, 2024 08 30.
Article in English | MEDLINE | ID: mdl-39215011

ABSTRACT

Long-term electroencephalography (EEG) recordings have primarily been used to study resting-state fluctuations. These recordings provide valuable insights into various phenomena such as sleep stages, cognitive processes, and neurological disorders. However, this study explores a new angle, focusing for the first time on the evolving nature of EEG dynamics over time within the context of movement. Twenty-two healthy individuals were measured six times, from 2 p.m. to 12 a.m. at 2-h intervals, while performing four right-hand gestures. Analysis of movement-related cortical potentials (MRCPs) revealed a reduction in amplitude for the motor and post-motor potential during later hours of the day. Evaluation in source space revealed an increase in the activity of contralateral M1 and of the SMA in both hemispheres until 8 p.m., followed by a decline until midnight. Furthermore, we investigated how changes in MRCP dynamics over time affect the ability to decode motor information. This was achieved by developing classification schemes to assess performance across different scenarios. The observed variations in classification accuracies over time strongly indicate the need for adaptive decoders. Such adaptive decoders would be instrumental in delivering robust results, which is essential for the practical application of BCIs in both daytime and nighttime use.
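
For readers unfamiliar with MRCP analysis, here is a minimal sketch of the kind of epoch averaging and amplitude measurement involved; the sampling rate, epoch layout, and baseline window are assumptions, and the data are random placeholders rather than the study's recordings.

```python
# Sketch: grand-average movement-related cortical potential (MRCP) amplitude
# from epoched single-channel EEG. `epochs` is assumed to be (n_trials,
# n_samples) with movement onset at t = 0.
import numpy as np

fs = 256                                  # sampling rate (Hz), assumed
epochs = np.random.randn(60, 3 * fs)      # placeholder: 60 trials, -1 s to +2 s
t = np.linspace(-1.0, 2.0, epochs.shape[1])

# Baseline-correct each trial using the pre-movement interval
baseline = epochs[:, t < -0.5].mean(axis=1, keepdims=True)
mrcp = (epochs - baseline).mean(axis=0)   # grand-average waveform

# Motor potential amplitude: minimum in a window around movement onset
window = (t > -0.2) & (t < 0.3)
print("motor potential amplitude:", mrcp[window].min())
```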


Subject(s)
Electroencephalography, Gestures, Hand, Humans, Electroencephalography/methods, Male, Female, Hand/physiology, Adult, Young Adult, Movement/physiology, Motor Cortex/physiology, Brain-Computer Interfaces
4.
Article in English | MEDLINE | ID: mdl-39186426

ABSTRACT

Hand motor impairment seriously affects the daily lives of the elderly. We developed an electromyography (EMG) exosuit system with bidirectional hand support for bilateral coordination assistance, based on a dynamic gesture recognition model that uses a graph convolutional network (GCN) and a long short-term memory (LSTM) network. The system comprises a hardware subsystem and a software subsystem. The hardware subsystem includes an exosuit jacket, a backpack module, an EMG recognition module, and a bidirectional support glove. The software subsystem, built on the dynamic gesture recognition model, identifies dynamic and static gestures by extracting the spatio-temporal features of the patient's EMG signals and controls glove movement. An offline training experiment built the gesture recognition models for each subject and evaluated the feasibility of the recognition model; online control experiments verified the effectiveness of the exosuit system. The experimental results showed that the proposed model achieved a gesture recognition rate of 96.42% ± 3.26%, higher than that of three traditional recognition models. All subjects successfully completed two daily tasks within a short time, and the success rates of bilateral coordination assistance were 88.75% and 86.88%. The exosuit system can effectively assist patients with bilateral coordination in daily tasks through its bidirectional hand support strategy, and the proposed method can be applied to various limb assistance scenarios.
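
To make the GCN + LSTM idea concrete, here is a heavily simplified PyTorch sketch of spatial graph convolution over EMG electrodes followed by temporal modeling; the ring-shaped electrode adjacency, layer sizes, and class count are illustrative assumptions, not the authors' implementation.

```python
# Sketch: GCN over EMG electrodes per time step, then an LSTM over time.
import torch
import torch.nn as nn

class GCNLSTM(nn.Module):
    def __init__(self, n_channels=8, gcn_dim=16, lstm_dim=32, n_classes=6):
        super().__init__()
        # Fixed, symmetrically normalized adjacency (here: a ring of electrodes)
        A = (torch.eye(n_channels)
             + torch.roll(torch.eye(n_channels), 1, 0)
             + torch.roll(torch.eye(n_channels), -1, 0))
        d = A.sum(1)
        self.register_buffer("A_hat", A / torch.sqrt(d[:, None] * d[None, :]))
        self.gcn_w = nn.Linear(1, gcn_dim)            # per-node feature transform
        self.lstm = nn.LSTM(n_channels * gcn_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        b, t, c = x.shape
        h = self.gcn_w(x.reshape(b, t, c, 1))         # node features
        h = torch.relu(torch.einsum("ij,btjf->btif", self.A_hat, h))
        out, _ = self.lstm(h.reshape(b, t, -1))       # spatial -> temporal
        return self.head(out[:, -1])                  # classify last time step

model = GCNLSTM()
logits = model(torch.randn(4, 200, 8))    # 4 windows, 200 samples, 8 channels
print(logits.shape)                       # -> torch.Size([4, 6])
```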


Subject(s)
Electromyography, Gestures, Hand, Humans, Hand/physiology, Male, Female, Exoskeleton Device, Adult, Algorithms, Neural Networks, Computer, Pattern Recognition, Automated/methods, Software, Activities of Daily Living, Young Adult, Feasibility Studies
5.
Sensors (Basel) ; 24(16)2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39204920

ABSTRACT

Medication adherence is an essential aspect of healthcare for patients and is important for achieving medical objectives. However, the lack of standard techniques for measuring adherence is a global concern, making it challenging to accurately monitor and measure patient medication regimens. The use of sensor technology for medication adherence monitoring has received much attention lately, since it makes it possible to continuously observe patients' medication adherence behavior. Sensor devices and smart wearables utilize state-of-the-art machine learning (ML) methods to analyze intricate data patterns and provide accurate predictions. The key aim of this work is to develop a smart sensor device-based hand gesture recognition model that recognizes medication intake activities. The device includes a tri-axial gyroscope, geometric, and accelerometer sensors to sense and gather data from hand gestures. A smartphone application gathers hand gesture data from the sensor device, which is then stored in the cloud database in .csv format. These data are collected, processed, and classified to recognize the medication intake activity using the proposed novel neural network model, the Sea Horse Optimization-Deep Neural Network (SHO-DNN). The SHO technique updates the biases, weights, and number of hidden layers in the DNN model; by updating these parameters, the DNN model is better able to classify hand gesture samples and identify medication activities. The model demonstrates impressive performance, with an accuracy of 98.59%, sensitivity of 97.82%, precision of 98.69%, and an F1 score of 98.48%, outperforming the other available models in all these aspects. The results indicate that this model is a promising approach for medication adherence monitoring in healthcare applications.
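
Sea Horse Optimization itself is not reproduced here, but the following sketch illustrates the underlying idea of searching over DNN structure against validation accuracy, using a generic random population in place of SHO; the feature data and search ranges are placeholders.

```python
# Sketch: population-based search over hidden-layer layouts, standing in for
# the SHO step described above (SHO itself is NOT implemented here; this only
# illustrates tuning network structure against cross-validated accuracy).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=9, n_informative=6,
                           n_classes=3, random_state=0)  # placeholder IMU features

rng = np.random.default_rng(0)
population = [tuple(rng.integers(8, 65, size=rng.integers(1, 4)))
              for _ in range(6)]          # candidate hidden-layer layouts

scored = [(cross_val_score(MLPClassifier(hidden_layer_sizes=h, max_iter=500,
                                         random_state=0), X, y, cv=3).mean(), h)
          for h in population]
best_score, best_layout = max(scored)
print(f"best layout {best_layout}: CV accuracy {best_score:.3f}")
```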


Subject(s)
Gestures, Hand, Medication Adherence, Neural Networks, Computer, Humans, Hand/physiology, Smartphone, Wearable Electronic Devices, Algorithms, Mobile Applications, Machine Learning
6.
Sensors (Basel) ; 24(16)2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39204927

ABSTRACT

This study delves into decoding hand gestures using surface electromyography (EMG) signals collected via a precision Myo-armband sensor, leveraging machine learning algorithms. The research entails rigorous data preprocessing to extract features and labels from raw EMG data. Following partitioning into training and testing sets, four traditional machine learning models are scrutinized for their efficacy in classifying finger movements across seven distinct gestures. The analysis includes meticulous parameter optimization and five-fold cross-validation to evaluate model performance. Among the models assessed, the Random Forest emerges as the top performer, consistently delivering superior precision, recall, and F1-score values across gesture classes, with ROC-AUC scores surpassing 99%. These findings underscore the Random Forest model as the optimal classifier for our EMG dataset, promising significant advancements in healthcare rehabilitation engineering and enhancing human-computer interaction technologies.
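
A minimal sketch of the evaluation pipeline described above (cross-validated Random Forest with one-vs-rest ROC-AUC), with a synthetic stand-in for the EMG feature matrix:

```python
# Sketch: windowed-EMG feature matrix -> five-fold CV -> Random Forest ->
# one-vs-rest ROC-AUC. Data are placeholders, not the Myo-armband recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=700, n_features=32, n_informative=12,
                           n_classes=7, random_state=0)  # stand-in EMG features

clf = RandomForestClassifier(n_estimators=300, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc_ovr")
print(f"ROC-AUC (one-vs-rest): {auc.mean():.3f} +/- {auc.std():.3f}")
```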


Asunto(s)
Algoritmos , Electromiografía , Gestos , Mano , Aprendizaje Automático , Humanos , Electromiografía/métodos , Mano/fisiología , Masculino , Femenino , Adulto , Procesamiento de Señales Asistido por Computador , Adulto Joven , Reconocimiento de Normas Patrones Automatizadas/métodos , Movimiento/fisiología
7.
Article in English | MEDLINE | ID: mdl-39172614

ABSTRACT

Surface electromyography (sEMG), a human-machine interface for gesture recognition, has shown promising potential for decoding motor intentions, but a variety of nonideal factors restrict its practical application in assistive robots. In this paper, we summarized the current mainstream gesture recognition strategies and proposed a gesture recognition method based on multimodal canonical correlation analysis feature fusion classification (MCAFC) for a nonideal condition that occurs in daily life, i.e., posture variations. The deep features of the sEMG and acceleration signals were first extracted via convolutional neural networks. A canonical correlation analysis was subsequently performed to associate the deep features of the two modalities. The transformed features were utilized as inputs to a linear discriminant analysis classifier to recognize the corresponding gestures. Both offline and real-time experiments were conducted on eight non-disabled subjects. The experimental results indicated that MCAFC achieved an average classification accuracy, average motion completion rate, and average motion completion time of 93.44%, 94.05%, and 1.38 s, respectively, with multiple dynamic postures, indicating significantly better performance than that of comparable methods. The results demonstrate the feasibility and superiority of the proposed multimodal signal feature fusion method for gesture recognition with posture variations, providing a new scheme for myoelectric control.
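
The core of the fusion step can be sketched in a few lines: canonical correlation analysis associates the two modalities' deep features, and the transformed features feed a linear discriminant analysis classifier. The feature matrices below are random placeholders standing in for CNN outputs.

```python
# Sketch: CCA-based fusion of two modalities' features, then LDA classification,
# following the MCAFC recipe at a high level (feature extractors are mocked).
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
emg_feats = rng.normal(size=(300, 64))    # placeholder CNN features, sEMG
acc_feats = rng.normal(size=(300, 32))    # placeholder CNN features, accelerometer
y = rng.integers(0, 5, size=300)          # gesture labels

cca = CCA(n_components=16)
emg_c, acc_c = cca.fit_transform(emg_feats, acc_feats)
fused = np.hstack([emg_c, acc_c])         # correlated subspaces, concatenated

lda = LinearDiscriminantAnalysis().fit(fused, y)
print("training accuracy:", lda.score(fused, y))
```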


Asunto(s)
Algoritmos , Electromiografía , Gestos , Mano , Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas , Postura , Humanos , Postura/fisiología , Mano/fisiología , Masculino , Reconocimiento de Normas Patrones Automatizadas/métodos , Adulto , Femenino , Adulto Joven , Análisis Discriminante , Aprendizaje Profundo , Voluntarios Sanos
8.
Sci Rep ; 14(1): 18564, 2024 08 09.
Article in English | MEDLINE | ID: mdl-39122791

ABSTRACT

High-density electromyography (HD-EMG) can provide a natural interface to enhance human-computer interaction (HCI). This study aims to demonstrate the capability of a novel HD-EMG forearm sleeve equipped with up to 150 electrodes to capture high-resolution muscle activity, decode complex hand gestures, and estimate continuous hand position via joint angle predictions. Ten able-bodied participants performed 37 hand movements and grasps while EMG was recorded using the HD-EMG sleeve. Simultaneously, an 18-sensor motion capture glove calculated 23 joint angles from the hand and fingers across all movements for training regression models. For classifying across the 37 gestures, our decoding algorithm was able to differentiate between sequential movements with 97.3 ± 0.3% accuracy calculated on a 100 ms bin-by-bin basis. In a separate mixed dataset consisting of 19 movements randomly interspersed, decoding performance achieved an average bin-wise accuracy of 92.8 ± 0.8%. When evaluating decoders for use in real-time scenarios, we found that decoders can reliably decode both movements and movement transitions, achieving an average accuracy of 93.3 ± 0.9% on the sequential set and 88.5 ± 0.9% on the mixed set. Furthermore, we estimated continuous joint angles from the EMG sleeve data, achieving an R² of 0.884 ± 0.003 in the sequential set and 0.750 ± 0.008 in the mixed set. Median absolute error (MAE) was kept below 10° across all joints, with a grand average MAE of 1.8 ± 0.04° and 3.4 ± 0.07° for the sequential and mixed datasets, respectively. We also assessed two algorithm modifications to address specific challenges for EMG-driven HCI applications. To minimize decoder latency, we used a method that accounts for reaction time by dynamically shifting cue labels in time. To reduce training requirements, we show that pretraining models with historical data provided an increase in decoding performance compared with models that were not pretrained when reducing the in-session training data to only one attempt of each movement. The HD-EMG sleeve, combined with sophisticated machine learning algorithms, can be a powerful tool for hand gesture recognition and joint angle estimation. This technology holds significant promise for applications in HCI, such as prosthetics, assistive technology, rehabilitation, and human-robot collaboration.
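
A minimal sketch of the reported evaluation quantities (bin-wise accuracy, R², and median absolute error), computed on placeholder predictions rather than the actual decoder outputs:

```python
# Sketch: bin-by-bin classification accuracy, joint-angle R², and median
# absolute error, as reported above. Predictions here are simulated.
import numpy as np
from sklearn.metrics import accuracy_score, r2_score, median_absolute_error

rng = np.random.default_rng(0)
true_bins = rng.integers(0, 37, size=1000)          # gesture label per 100 ms bin
pred_bins = np.where(rng.random(1000) < 0.95, true_bins,
                     rng.integers(0, 37, size=1000))
print("bin-wise accuracy:", accuracy_score(true_bins, pred_bins))

true_angle = rng.uniform(0, 90, size=1000)          # one joint angle (degrees)
pred_angle = true_angle + rng.normal(0, 2.0, size=1000)
print("R2:", r2_score(true_angle, pred_angle))
print("MAE (deg):", median_absolute_error(true_angle, pred_angle))
```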


Subject(s)
Electromyography, Gestures, Hand, Wearable Electronic Devices, Humans, Electromyography/methods, Male, Female, Adult, Hand/physiology, Algorithms, Movement/physiology, Young Adult
9.
Philos Trans R Soc Lond B Biol Sci ; 379(1911): 20230156, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39155717

ABSTRACT

The gestures we produce serve a variety of functions-they affect our communication, guide our attention and help us think and change the way we think. Gestures can consequently also help us learn, generalize what we learn and retain that knowledge over time. The effects of gesture-based instruction in mathematics have been well studied. However, few of these studies are directly applicable to classroom environments. Here, we review literature that highlights the benefits of producing and observing gestures when teaching and learning mathematics, and we provide suggestions for designing research studies with an eye towards how gestures can feasibly be applied to classroom learning. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.


Subject(s)
Gestures, Learning, Mathematics, Humans, Child, Mathematics/education, Teaching, School Teachers/psychology, Cognition, Schools
10.
Cogn Sci ; 48(8): e13486, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39155515

ABSTRACT

Research shows that high- and low-pitch sounds can be associated with various meanings. For example, high-pitch sounds are associated with small concepts, whereas low-pitch sounds are associated with large concepts. This study presents three experiments revealing that high-pitch sounds are also associated with open concepts and opening hand actions, while low-pitch sounds are associated with closed concepts and closing hand actions. In Experiment 1, this sound-meaning correspondence effect was shown using the two-alternative forced-choice task, while Experiments 2 and 3 used reaction time tasks to show this interaction. In Experiment 2, high-pitch vocalizations were found to facilitate opening hand gestures, and low-pitch vocalizations were found to facilitate closing hand gestures, when performed simultaneously. In Experiment 3, high-pitched vocalizations were produced particularly rapidly when the visual target stimulus presented an open object, and low-pitched vocalizations were produced particularly rapidly when the target presented a closed object. These findings are discussed concerning the meaning of intonational cues. They are suggested to be based on cross-modally representing conceptual spatial knowledge in sensory, motor, and affective systems. Additionally, this pitch-opening effect might share cognitive processes with other pitch-meaning effects.


Subject(s)
Reaction Time, Humans, Male, Female, Young Adult, Adult, Pitch Perception/physiology, Space Perception/physiology, Gestures, Sound, Acoustic Stimulation, Cues
11.
F1000Res ; 13: 798, 2024.
Article in English | MEDLINE | ID: mdl-39139467

ABSTRACT

Background: The consensus in the scientific literature is that each child follows a unique path of linguistic development, albeit with shared developmental stages; some children excel or lag behind their peers in language skills. Consequently, a key challenge in language acquisition research is pinpointing the factors that drive individual differences in language development. Methods: We observed children longitudinally from 3 to 24 months of life to explore early predictors of vocabulary size. Based on productive vocabulary size at 24 months, 30 children met our sample selection criteria: 10 late talkers and 10 early talkers, whom we compared with 10 typical talkers. We evaluated interactive behaviors at 3, 6, 9, and 12 months, considering vocal production, gaze at the mother's face, and gestural production during mother-child interactions, and we considered mothers' reports of children's actions and gestures and receptive-vocabulary size at 15 and 18 months. Results: The results indicated early precursors of language outcome at 24 months, identifiable as early as 3 months in vocal production, 6 months in gaze at the mother's face, and 12 months in gestural production. Conclusions: Our research highlights both theoretical and practical implications. Theoretically, identifying early indicators of membership in the late- or early-talker groups underscores the significance of this developmental period for future studies. Practically, our findings emphasize the need for early investigations that identify predictors of vocabulary development before the typical age at which lexical delay is identified.


Subject(s)
Language Development, Humans, Infant, Female, Male, Child, Preschool, Vocabulary, Mother-Child Relations, Speech/physiology, Longitudinal Studies, Gestures
12.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.


Subject(s)
Gestures, Robotics, Robotics/methods, Humans, Pattern Recognition, Automated/methods, Algorithms, Artificial Intelligence
13.
Sensors (Basel) ; 24(15)2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity as they allow for effortless and natural interaction between the user and the machine by processing information gathered from single or multiple sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration using newly acquired data, because they must adapt to dynamic environments in which test-time data continuously change in unforeseen ways; this need contributes significantly to their abandonment and remains unexplored by the ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable to the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a domain-adversarial training algorithm), with proper initialization, offers an average 24.99% improvement in classification accuracy compared with the no-re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, the observed enhancements would be rather small or even unnoticeable.
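
The key mechanism of DANN is a gradient reversal layer between the feature extractor and a domain classifier. The sketch below shows that mechanism in PyTorch; the layer sizes, class counts, and input features are illustrative assumptions, not the paper's architecture.

```python
# Sketch: the core of DANN — a gradient reversal layer lets a domain classifier
# push the feature extractor toward domain-invariant features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None   # reverse gradients on the way back

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
label_head = nn.Linear(64, 10)                 # gesture classes
domain_head = nn.Linear(64, 2)                 # calibration vs. later session

x = torch.randn(32, 128)                       # placeholder ultrasound features
f = features(x)
label_logits = label_head(f)                   # trained on labeled source data
domain_logits = domain_head(GradReverse.apply(f, 1.0))  # adversarial branch
print(label_logits.shape, domain_logits.shape)
```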


Subject(s)
Algorithms, Ultrasonography, Humans, Ultrasonography/methods, User-Computer Interface, Wrist/physiology, Wrist/diagnostic imaging, Neural Networks, Computer, Fingers/physiology, Man-Machine Systems, Gestures
14.
Comput Biol Med ; 179: 108817, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39004049

ABSTRACT

Force myography (FMG) is increasingly gaining importance in gesture recognition because of its ability to achieve high classification accuracy without requiring direct contact with the skin. In this study, we investigate the performance of a bracelet with only six commercial force-sensitive resistor (FSR) sensors for classifying hand gestures representing all letters and the numbers from 0 to 10 in American Sign Language. For this, we introduce an optimized feature selection in combination with the Extreme Learning Machine (ELM) as a classifier, investigating three swarm intelligence algorithms: the binary grey wolf optimizer (BGWO), the binary grasshopper optimizer (BGOA), and the binary hybrid grey wolf particle swarm optimizer (BGWOPSO), which is used as an optimization method for ELM for the first time in this study. The findings reveal that BGWOPSO, in which PSO supports the GWO optimizer by controlling its exploration and exploitation via an inertia constant to speed convergence toward the global optimum, outperformed the other investigated algorithms. In addition, the results show that optimizing ELM with BGWOPSO for feature selection improves the classification accuracy from 32% to 69.84% for classifying 37 gestures collected from multiple volunteers, using only a band with six FSR sensors.
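
An Extreme Learning Machine is simple enough to sketch in full: a random, untrained hidden layer plus output weights solved in closed form with the pseudoinverse. The sketch below omits the metaheuristic feature-selection step and uses placeholder data.

```python
# Sketch: a basic Extreme Learning Machine (ELM) — random hidden layer,
# closed-form output weights — on placeholder six-channel FSR data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                 # six FSR channels
y = rng.integers(0, 37, size=500)             # 37 gesture classes
Y = np.eye(37)[y]                             # one-hot targets

n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))   # random, untrained input weights
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                        # hidden-layer activations
beta = np.linalg.pinv(H) @ Y                  # output weights in closed form

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```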


Subject(s)
Algorithms, Gestures, Humans, Machine Learning, Myography/methods, Male, Female
15.
Infancy ; 29(5): 693-712, 2024.
Article in English | MEDLINE | ID: mdl-39030871

ABSTRACT

Infants' use of pointing gestures to direct and share attention develops during the first 2 years of life. Shyness, defined as an approach-avoidance motivational conflict during social interactions, may influence infants' use of pointing. Recent research distinguished between positive (gaze and/or head aversions while smiling) and non-positive (gaze and/or head aversions without smiling) shyness, which are related to different social and cognitive skills. We investigated whether positive and non-positive shyness in 12-month-old (n = 38; 15 girls) and 15-month-old (n = 45; 15 girls) infants were associated with their production of pointing gestures. Infants' expressions of shyness were observed during a social-exposure task in which the infant entered the laboratory room in their parent's arms and was welcomed by an unfamiliar person who provided attention and compliments. Infants' pointing was measured with a pointing task involving three stimuli: pleasant, unpleasant, and neutral. Positive shyness was positively associated with overall pointing at 15 months, especially in combination with high levels of non-positive shyness. In addition, infants who displayed more non-positive shyness pointed more frequently to direct the attention of the social partner to an unpleasant (vs. neutral) stimulus at both ages. Results indicate that shyness influences the early use of pointing to emotionally charged stimuli.


Asunto(s)
Gestos , Timidez , Humanos , Femenino , Masculino , Lactante , Conducta del Lactante , Desarrollo Infantil , Interacción Social , Atención
16.
Article in English | MEDLINE | ID: mdl-39028609

ABSTRACT

Motor imagery (MI) based brain-computer interfaces (BCIs) have been extensively studied as a means of improving motor recovery in stroke patients by inducing neuroplasticity. However, due to the low spatial resolution and signal-to-noise ratio (SNR) of electroencephalography (EEG), MI-based BCI systems that decode hand movements within the same limb still suffer from low classification accuracy and poor practicality. To overcome these limitations, an adaptive hybrid BCI system combining MI and steady-state visually evoked potentials (SSVEP) was developed to improve decoding accuracy while enhancing neural engagement. On the one hand, the SSVEP evoked by visual stimuli based on an action-state flickering coding approach significantly improves recognition accuracy compared with a pure MI-based BCI. On the other hand, to reduce the impact of SSVEP on MI due to the dual-task interference effect, event-related desynchronization (ERD) based neural engagement is monitored and fed back in real time to ensure effective execution of the MI tasks. Eight healthy subjects and six post-stroke patients were recruited to verify the effectiveness of the system. The results showed that four-class gesture recognition accuracies of 94.37 ± 4.77% and 79.38 ± 6.26% could be achieved for healthy individuals and patients, respectively. Moreover, the designed hybrid BCI maintained the same degree of neural engagement as observed when subjects performed MI tasks alone. These findings demonstrate the interactivity and clinical utility of the developed system for the rehabilitation of hand function in stroke patients.
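
For context, ERD-based engagement is typically quantified as a percentage band-power decrease relative to a reference interval. The following is a minimal sketch of that computation; the mu-band limits, timing windows, and data are assumptions, not the developed system's parameters.

```python
# Sketch: event-related desynchronization (ERD) as a percentage band-power
# change relative to a rest interval, on a placeholder single-channel trial.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250                                       # sampling rate (Hz), assumed
eeg = np.random.randn(5 * fs)                  # placeholder 5-s trial
b, a = butter(4, [8, 13], btype="band", fs=fs) # mu band
power = filtfilt(b, a, eeg) ** 2

ref = power[:2 * fs].mean()                    # rest interval before the cue
task = power[3 * fs:].mean()                   # motor imagery interval
erd = (task - ref) / ref * 100                 # negative = desynchronization
print(f"ERD: {erd:.1f} %")
```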


Asunto(s)
Interfaces Cerebro-Computador , Electroencefalografía , Potenciales Evocados Visuales , Mano , Rehabilitación de Accidente Cerebrovascular , Humanos , Rehabilitación de Accidente Cerebrovascular/métodos , Masculino , Electroencefalografía/métodos , Femenino , Potenciales Evocados Visuales/fisiología , Persona de Mediana Edad , Adulto , Algoritmos , Imaginación/fisiología , Accidente Cerebrovascular/fisiopatología , Gestos , Anciano , Voluntarios Sanos , Adulto Joven , Estimulación Luminosa , Relación Señal-Ruido , Reproducibilidad de los Resultados
17.
Appl Ergon ; 121: 104359, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39067282

ABSTRACT

The integration of 3D gestural embodied human-computer interaction (eHCI) has provided an avenue for contactless interaction with systems. The design of gestural systems employs two approaches: a technology-based approach and a human-based approach. This study reviews the existing literature on development approaches for 3D gestural eHCI to understand the current state of research using these approaches and to identify potential areas for future exploration. Articles were gathered from three databases: PsycInfo, Science Direct, and IEEE Xplore. A total of 35 articles were identified, of which 18 used human-based methods and 17 used technology-based methods. The findings shed light on inconsistencies between developers and users in preferred hand gesture poses and identify factors influencing users' gesture choices. Implementing the consolidated findings has the potential to improve human readiness for 3D gestural eHCI technologies.


Subject(s)
Gestures, User-Computer Interface, Humans, Ergonomics/methods, Hand/physiology
18.
Cogn Sci ; 48(7): e13479, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38980965

ABSTRACT

Gestures, hand movements that accompany speech and express ideas, can help children learn how to solve problems, flexibly generalize learning to novel problem-solving contexts, and retain what they have learned. But does it matter who is doing the gesturing? We know that producing gesture leads to better comprehension of a message than watching someone else produce gesture. But we do not know how producing versus observing gesture impacts deeper learning outcomes such as generalization and retention across time. Moreover, not all children benefit equally from gesture instruction, suggesting that there are individual differences that may play a role in who learns from gesture. Here, we consider two factors that might impact whether gesture leads to learning, generalization, and retention after mathematical instruction: (1) whether children see gesture or do gesture and (2) whether a child spontaneously gestures before instruction when explaining their problem-solving reasoning. For children who spontaneously gestured before instruction, both doing and seeing gesture led to better generalization and retention of the knowledge gained than a comparison manipulative action. For children who did not spontaneously gesture before instruction, doing gesture was less effective than the comparison action for learning, generalization, and retention. Importantly, this learning deficit was specific to gesture, as these children did benefit from doing the comparison manipulative action. Our findings are the first evidence that a child's use of a particular representational format for communication (gesture) directly predicts that child's propensity to learn from using the same representational format.


Subject(s)
Gestures, Learning, Problem Solving, Humans, Female, Male, Mathematics, Child, Child, Preschool, Generalization, Psychological/physiology
19.
J Robot Surg ; 18(1): 297, 2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39068261

ABSTRACT

The objective of this study is to compare automated performance metrics (APM) and surgical gestures for technical skills assessment during simulated robot-assisted radical prostatectomy (RARP). Ten novice and six experienced RARP surgeons performed simulated RARPs on the RobotiX Mentor (Surgical Science, Sweden). Simulator APM were automatically recorded, and surgical videos were manually annotated with five types of surgical gestures. The consequences of the pass/fail levels, which were based on the contrasting-groups method, were compared for APM and surgical gestures. Intra-class correlation coefficient (ICC) analysis and a Bland-Altman plot were used to explore the correlation between APM and surgical gestures. Pass/fail levels for both APM and surgical gestures could fully distinguish between the skill levels of the surgeons, with a specificity and sensitivity of 100%. The overall ICC (one-way, random) was 0.70 (95% CI: 0.34-0.88), showing moderate agreement between the methods. The Bland-Altman plot showed high agreement between the two methods when assessing experienced surgeons but disagreement on the novice surgeons' skill level. APM and surgical gestures could both fully distinguish between novices and experienced surgeons in a simulated setting. Both methods of analyzing technical skills have their advantages and disadvantages, and both are, as of now, available only to a limited extent in the clinical setting. Developing assessment methods in a simulated setting enables testing before implementation in a clinical setting.
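
For readers who want to reproduce the agreement analysis in principle, here is a minimal sketch of a one-way random ICC and Bland-Altman limits of agreement; the scores are simulated placeholders, not the study's data.

```python
# Sketch: ICC(1,1) from one-way ANOVA mean squares, plus Bland-Altman bias and
# 95% limits of agreement, for two scoring methods (simulated values).
import numpy as np

rng = np.random.default_rng(0)
apm = rng.normal(70, 10, size=16)              # method 1 score per surgeon
ges = apm + rng.normal(0, 5, size=16)          # method 2 score per surgeon

# ICC(1,1): one-way random, k = 2 methods
scores = np.stack([apm, ges], axis=1)
n, k = scores.shape
grand = scores.mean()
ms_between = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((scores - scores.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")

# Bland-Altman: bias and 95% limits of agreement
diff = apm - ges
print(f"bias {diff.mean():.2f}, LoA +/-{1.96 * diff.std(ddof=1):.2f}")
```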


Subject(s)
Clinical Competence, Gestures, Prostatectomy, Robotic Surgical Procedures, Robotic Surgical Procedures/education, Robotic Surgical Procedures/methods, Robotic Surgical Procedures/standards, Humans, Prostatectomy/methods, Prostatectomy/education, Male, Surgeons/education, Task Performance and Analysis
20.
Hum Brain Mapp ; 45(11): e26762, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39037079

ABSTRACT

Hierarchical models have been proposed to explain how the brain encodes actions, whereby different areas represent different features, such as gesture kinematics, target object, action goal, and meaning. The visual processing of action-related information is distributed over a well-known network of brain regions spanning separate anatomical areas, attuned to specific stimulus properties, and referred to as the action observation network (AON). To determine the brain organization of these features, we measured representational geometries during the observation of a large set of transitive and intransitive gestures in two independent functional magnetic resonance imaging experiments. We provide evidence for a partial dissociation between kinematics, object characteristics, and action meaning in the occipito-parietal, ventro-temporal, and lateral occipito-temporal cortex, respectively. Importantly, most of the AON showed low specificity to all the explored features, and representational spaces sharing similar information content were spread across the cortex without being anatomically adjacent. Overall, our results support the notion that the AON relies on overlapping and distributed coding and may act as a unique representational space rather than mapping features in a modular and segregated manner.
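
The term "representational geometry" refers to the pattern of pairwise dissimilarities between conditions. A minimal sketch of computing and comparing representational dissimilarity matrices (RDMs), with random placeholders for the feature and activation matrices:

```python
# Sketch: representational similarity analysis — build RDMs from two feature
# spaces and correlate them. Matrices below are placeholders, not study data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_gestures = 20
kinematics = rng.normal(size=(n_gestures, 12))   # e.g., gesture kinematic features
voxels = rng.normal(size=(n_gestures, 500))      # e.g., ROI activation patterns

rdm_model = pdist(kinematics, metric="correlation")  # condition-pair dissimilarities
rdm_brain = pdist(voxels, metric="correlation")

rho, p = spearmanr(rdm_model, rdm_brain)
print(f"model-brain RDM correlation: rho = {rho:.2f} (p = {p:.3f})")
```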


Subject(s)
Brain Mapping, Gestures, Magnetic Resonance Imaging, Humans, Male, Female, Biomechanical Phenomena/physiology, Adult, Young Adult, Brain/physiology, Brain/diagnostic imaging, Photic Stimulation/methods, Sensitivity and Specificity