Results 1 - 20 of 1,225
1.
JMIR Res Protoc; 13: e55738, 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39269750

ABSTRACT

BACKGROUND: The practice of dental surgery requires several distinct skills, including mental rotation of objects, precision of movement with good hand-eye coordination, and speed of technical movement. Learning these skills begins during the preclinical phase of dental student training. Playing a musical instrument or video games appears to promote the early development of these skills; however, studies specifically addressing this question in the field of dental education are lacking. OBJECTIVE: The main aim of this study is to evaluate whether the ability to mentally represent a volume in 3D, the precision of gestures with the right and left hands, and the speed of gesture execution are better at baseline, or progress faster, for players (of video games, music, or both). METHODS: A prospective, monocentric, controlled, longitudinal study will be conducted from September 2023 until April 2025 in the Faculty of Dental Surgery of Nantes. Participants are students who have not yet begun their preclinical training. Tests include Vandenberg and Kuse's mental rotation test, the modified Precision Manual Dexterity (PMD) test, and performing a pulpotomy on a permanent tooth. This protocol was approved by the Ethics, Deontology, and Scientific Integrity Committee of Nantes University (institutional review board approval number IORG0011023). RESULTS: A total of 86 second-year dental surgery students were enrolled in the study in September 2023. They will take part in 4 iterations of the study, the last of which will take place in April 2025. CONCLUSIONS: Playing video games, a musical instrument, or both could be a potential tool for initiating or facilitating the learning of certain technical skills in dental surgery. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): DERR1-10.2196/55738.


Subject(s)
Music; Students, Dental; Video Games; Humans; Students, Dental/psychology; Prospective Studies; Music/psychology; Longitudinal Studies; Education, Dental/methods; Clinical Competence; Female; Male
2.
Sensors (Basel); 24(17), 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39275430

ABSTRACT

Human-computer interaction (HCI) with screens through gestures is a pivotal method amidst the digitalization trend. In this work, a gesture recognition method is proposed that combines multi-band spectral features with spatial characteristics of screen-reflected light. Based on the method, a red-green-blue (RGB) three-channel spectral gesture recognition system has been developed, composed of a display screen integrated with narrowband spectral receivers as the hardware setup. During system operation, emitted light from the screen is reflected by gestures and received by the narrowband spectral receivers. These receivers at various locations are tasked with capturing multiple narrowband spectra and converting them into light-intensity series. The availability of multi-narrowband spectral data integrates multidimensional features from frequency and spatial domains, enhancing classification capabilities. Based on the RGB three-channel spectral features, this work formulates an RGB multi-channel convolutional neural network long short-term memory (CNN-LSTM) gesture recognition model. It achieves accuracies of 99.93% in darkness and 99.89% in illuminated conditions. This indicates the system's capability for stable operation across different lighting conditions and accurate interaction. The intelligent gesture recognition method can be widely applied for interactive purposes on various screens such as computers and mobile phones, facilitating more convenient and precise HCI.

3.
Sensors (Basel); 24(17), 2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39275694

ABSTRACT

Over the last few decades, a growing number of studies have used wearable technologies, such as inertial and pressure sensors, to investigate various domains of music experience, from performance to education. In this paper, we systematically review this body of literature using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) method. The initial search yielded a total of 359 records. After removing duplicates and screening for content, 23 records were deemed fully eligible for further analysis. Studies were grouped into four categories based on their main objective, namely performance-oriented systems, measuring physiological parameters, gesture recognition, and sensory mapping. The reviewed literature demonstrated the various ways in which wearable systems impact musical contexts, from the design of multi-sensory instruments to systems monitoring key learning parameters. Limitations also emerged, mostly related to the technology's comfort and usability, and directions for future research in wearables and music are outlined.


Subject(s)
Music; Wearable Electronic Devices; Humans
4.
Article in English | MEDLINE | ID: mdl-39287713

ABSTRACT

PURPOSE: To produce a surgical gesture recognition system that supports a wide variety of procedures, either a very large annotated dataset must be acquired, or fitted models must generalize to new labels (so-called zero-shot capability). In this paper we investigate the feasibility of the latter option. METHODS: Leveraging the Bridge-Prompt framework, we prompt-tune a pre-trained vision-text model (CLIP) for gesture recognition in surgical videos. This approach can leverage extensive pretraining on outside video and text data, as well as label meta-data and weakly supervised contrastive losses. RESULTS: Our experiments show that the prompt-based video encoder outperforms standard encoders in surgical gesture recognition tasks. Notably, it displays strong performance in zero-shot scenarios, where gestures/tasks that were not provided during the encoder training phase are included at prediction time. Additionally, we measure the benefit of including text descriptions in the feature extractor training schema. CONCLUSION: Bridge-Prompt and similar pre-trained, prompt-tuned video encoder models provide strong visual representations for surgical robotics, especially in gesture recognition tasks. Given the diverse range of surgical tasks (gestures), the ability of these models to transfer zero-shot, without any task-specific (gesture-specific) retraining, makes them invaluable.
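At inference time, the zero-shot step described above reduces to comparing a clip embedding against text-prompt embeddings. A minimal numpy sketch of that mechanism, with random synthetic vectors standing in for CLIP's video and text encoders (the gesture labels and embeddings here are hypothetical illustrations, not from the paper):

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between one vector and each row of a matrix.
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

rng = np.random.default_rng(0)

# Hypothetical text embeddings for gesture prompts, one row per label.
# In a real system these would come from CLIP's text encoder applied to
# prompts like "a video of the surgeon tying a knot".
labels = ["needle passing", "knot tying", "suture pulling"]
text_emb = rng.normal(size=(3, 512))

# A clip embedding close to the "knot tying" prompt: a synthetic stand-in
# for the prompt-tuned video encoder's output on such a clip.
video_emb = text_emb[1] + 0.1 * rng.normal(size=512)

# Zero-shot prediction: nearest text prompt by cosine similarity.
# No gesture-specific classifier head was trained for these labels.
pred = labels[int(np.argmax(cosine_sim(video_emb, text_emb)))]
print(pred)  # prints: knot tying
```

New gestures can be added at prediction time simply by appending a prompt embedding to `text_emb`, which is what makes the zero-shot setting attractive when annotated surgical data is scarce.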

5.
Learn Behav; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39261414

ABSTRACT

Researchers have recently described the wing-fluttering signal of Japanese tits and eyeblink signal of concave-eared torrent frogs as bodily communication that elicits specific responses. I assess the evidence that these may be intentional, goal-directed signals using established criteria for gestural communication.

6.
Cogn Sci; 48(9): e13484, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39228272

ABSTRACT

When people talk about kinship systems, they often use co-speech gestures and other representations to elaborate. This paper investigates such polysemiotic (spoken, gestured, and drawn) descriptions of kinship relations, to see if they display recurring patterns of conventionalization that capture specific social structures. We present an exploratory, hypothesis-generating study of descriptions produced by an ethnolinguistic community lesser known to the cognitive sciences: the Paamese people of Vanuatu. Forty Paamese speakers were asked to talk about their family in semi-guided kinship interviews. Analyses of the speech, gesture, and drawings produced during these interviews revealed that lineality (i.e., mother's side vs. father's side) is lateralized in the speaker's gesture space. In other words, kinship members of the speaker's matriline are placed on the left side of the speaker's body and those of the patriline are placed on their right side when they are mentioned in speech. Moreover, we find that the gestures produced by Paamese participants during verbal descriptions of marital relations are performed significantly more often along two diagonal directions of the sagittal axis. We show that these diagonals are also found in the few diagrams that participants drew on the ground to augment their verbo-gestural descriptions of marriage practices with drawing. We interpret this behavior as evidence of a spatial template, which Paamese speakers activate to think and communicate about family relations. We therefore argue that extending investigations of kinship structures beyond kinship terminologies alone can unveil additional key factors that shape kinship cognition and communication, and thereby provide further insights into the diversity of social structures.


Subject(s)
Cognition; Communication; Family; Gestures; Humans; Male; Female; Family/psychology; Adult; Speech; Middle Aged
7.
Top Cogn Sci; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39190828

ABSTRACT

Languages are neither designed in classrooms nor drawn from dictionaries-they are products of human minds and human interactions. However, it is challenging to understand how structure grows in these circumstances because generations of use and transmission shape and reshape the structure of the languages themselves. Laboratory studies on language emergence investigate the origins of language structure by requiring participants, prevented from using their own natural language(s), to create a novel communication system and then transmit it to others. Because the participants in these lab studies are already speakers of a language, it is easy to question the relevance of lab-based findings to the creation of natural language systems. Here, we take the findings from a lab-based language emergence paradigm and test whether the same pattern is also found in a new natural language: Nicaraguan Sign Language. We find evidence that signers of Nicaraguan Sign Language may show the same biases seen in lab-based language emergence studies: (1) they appear to condition word order based on the semantic dimension of intensionality and extensionality, and (2) they adjust this conditioning to satisfy language-internal order constraints. Our study adds to the small, but growing literature testing the relevance of lab-based studies to natural language birth, and provides convincing evidence that the biases seen in the lab play a role in shaping a brand new language.

8.
ACS Sens; 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39193764

ABSTRACT

Conductive hydrogels are considered among the most promising sensing materials for wearable strain sensors. However, both the hydrophilicity of the polymer chains and the high water content severely limit the potential applications of hydrogel-based sensors in extreme conditions. In this study, a multicross-linked hydrogel that can withstand ultralow temperatures below -80 °C was prepared by simultaneously introducing a double-network matrix, multiple conductive fillers, and free-moving ions. A superhydrophobic Ecoflex layer with a water contact angle of 159.1° was coated on the hydrogel using simple spraying and laser engraving methods. Additionally, a smart glove integrating five hydrogel strain sensors with a microprocessor was developed to recognize 12 types of diving gestures and synchronously transmit recognition results to smartphones. The superhydrophobic and antifreezing hydrogel strain sensor proposed in this study shows promising potential in wearable electronics, human-machine interfaces, and underwater applications.

9.
ACS Appl Mater Interfaces; 16(32): 42242-42253, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39102499

ABSTRACT

A multiple self-powered sensor-integrated mobile manipulator (MSIMM) system was proposed to address challenges in existing exploration devices, such as the need for a constant energy supply, limited variety of sensed information, and difficult human-computer interfaces. The MSIMM system integrates triboelectric nanogenerator (TENG)-based self-powered sensors, a bionic manipulator, and wireless gesture control, enhancing sensor data usability through machine learning. Specifically, the system includes a tracked vehicle platform carrying the manipulator and electronics, including a storage battery and a microcontroller unit (MCU). An integrated sensor glove and terminal application (APP) enable intuitive manipulator control, improving human-computer interaction. The system responds to and analyzes various environmental stimuli, including the droplet and fall height, temperature, pressure, material type, angles, angular velocity direction, and acceleration amplitude and direction. The manipulator, fabricated using 3D printing technology, integrates multiple sensors that generate electrical signals through the triboelectric effect of mechanical motion. These signals are classified using convolutional neural networks for accurate environmental monitoring. Our database shows signal recognition and classification accuracy exceeding 94%, with specific accuracies of 100% for pressure sensors, 99.55% for angle sensors, and 98.66, 95.91, 96.27, and 94.13% for material, droplet, temperature, and acceleration sensors, respectively.

10.
Sensors (Basel); 24(15), 2024 Aug 04.
Article in English | MEDLINE | ID: mdl-39124090

ABSTRACT

Human-Machine Interfaces (HMIs) have gained popularity because they allow effortless and natural interaction between the user and the machine, processing information gathered from one or more sensing modalities and transcribing user intentions into the desired actions. Their operability depends on frequent periodic re-calibration with newly acquired data, since they must adapt to dynamic environments where test-time data continuously change in unforeseen ways; this need significantly contributes to their abandonment and remains unexplored by the Ultrasound-based (US-based) HMI community. In this work, we conduct a thorough investigation of Unsupervised Domain Adaptation (UDA) algorithms, which use unlabeled data, for the re-calibration of US-based HMIs during within-day sessions. Our experimentation led us to propose a CNN-based architecture for simultaneous wrist rotation angle and finger gesture prediction that achieves performance comparable with the state of the art while featuring 87.92% fewer trainable parameters. According to our findings, DANN (a Domain-Adversarial training algorithm), with proper initialization, offers an average 24.99% classification accuracy enhancement compared to the no re-calibration setting. However, our results suggest that in cases where the experimental setup and the UDA configuration differ, observed enhancements would be rather small or even unnoticeable.
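DANN itself trains a domain classifier adversarially, which is hard to show compactly. As a simpler stand-in for the same unsupervised re-calibration idea, the sketch below uses CORAL-style correlation alignment (a different, non-adversarial UDA method, not the paper's algorithm) on synthetic features: the labeled calibration session's second-order statistics are matched to an unlabeled later session, so a classifier can be retrained without new labels.

```python
import numpy as np

def msqrt(c):
    # Matrix square root of a symmetric positive-definite matrix
    # via eigendecomposition (keeps everything in numpy).
    w, v = np.linalg.eigh(c)
    return v @ np.diag(np.sqrt(w)) @ v.T

def coral_align(xs, xt, eps=1e-5):
    """Whiten source features, then re-color them with the target covariance."""
    cs = np.cov(xs, rowvar=False) + eps * np.eye(xs.shape[1])
    ct = np.cov(xt, rowvar=False) + eps * np.eye(xt.shape[1])
    a = np.linalg.inv(msqrt(cs)) @ msqrt(ct)
    return (xs - xs.mean(0)) @ a + xt.mean(0)

rng = np.random.default_rng(1)
Xs = rng.normal(size=(500, 4))               # labeled calibration session
Xt = 2.0 * rng.normal(size=(500, 4)) + 3.0   # later unlabeled session (scaled, shifted)
Xs_aligned = coral_align(Xs, Xt)

# After alignment, the source statistics match the later session's, so a
# model trained on Xs_aligned sees target-like inputs without new labels.
print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=0.1))  # prints: True
```

The closed-form alignment makes the core UDA trade-off visible: when the test-time distribution shift is captured by the adapted statistics, re-calibration helps; when the setup differs from what the adaptation models, the gain can vanish, which mirrors the caveat in the abstract.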


Subject(s)
Algorithms; Ultrasonography; Humans; Ultrasonography/methods; User-Computer Interface; Wrist/physiology; Wrist/diagnostic imaging; Neural Networks, Computer; Fingers/physiology; Man-Machine Systems; Gestures
11.
Sensors (Basel); 24(15), 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123896

ABSTRACT

For successful human-robot collaboration, it is crucial to establish and sustain quality interaction between humans and robots, making it essential to facilitate human-robot interaction (HRI) effectively. The evolution of robot intelligence now enables robots to take a proactive role in initiating and sustaining HRI, thereby allowing humans to concentrate more on their primary tasks. In this paper, we introduce a system known as the Robot-Facilitated Interaction System (RFIS), where mobile robots are employed to perform identification, tracking, re-identification, and gesture recognition in an integrated framework to ensure anytime readiness for HRI. We implemented the RFIS on an autonomous mobile robot used for transporting a patient, to demonstrate proactive, real-time, and user-friendly interaction with a caretaker involved in monitoring and nursing the patient. In the implementation, we focused on the efficient and robust integration of various interaction facilitation modules within a real-time HRI system that operates in an edge computing environment. Experimental results show that the RFIS, as a comprehensive system integrating caretaker recognition, tracking, re-identification, and gesture recognition, can provide an overall high quality of interaction in HRI facilitation with average accuracies exceeding 90% during real-time operations at 5 FPS.


Subject(s)
Gestures; Robotics; Robotics/methods; Humans; Pattern Recognition, Automated/methods; Algorithms; Artificial Intelligence
12.
Front Bioeng Biotechnol; 12: 1401803, 2024.
Article in English | MEDLINE | ID: mdl-39144478

ABSTRACT

Introduction: Hand gestures are an effective communication tool that can convey a wealth of information in a variety of sectors, including medicine and education. E-learning has grown significantly in recent years and is now an essential resource for many organizations, yet little research has been conducted on the use of hand gestures in e-learning. Similarly, gestures are frequently used by medical professionals to help with diagnosis and treatment. Method: We aim to improve the way instructors, students, and medical professionals receive information by introducing a dynamic method for hand gesture monitoring and recognition. Our approach comprises six modules: video-to-frame conversion, preprocessing for quality enhancement, hand skeleton mapping with single shot multibox detector (SSMD) tracking, hand detection using background modeling and a convolutional neural network (CNN) bounding box technique, feature extraction using point-based and full-hand coverage techniques, and optimization using a population-based incremental learning algorithm. A 1D CNN classifier is then used to identify hand gestures. Results: After extensive experimentation, we obtained hand tracking accuracies of 83.71% and 85.71% on the Indian Sign Language and WLASL datasets, respectively. Our findings show how well our method recognizes hand gestures. Discussion: Teachers, students, and medical professionals can all efficiently transmit and comprehend information using our proposed system. The obtained accuracy rates highlight how our method can improve communication and make information exchange easier in various domains.

13.
Sci Rep; 14(1): 20247, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39215011

ABSTRACT

Long-term electroencephalography (EEG) recordings have primarily been used to study resting-state fluctuations. These recordings provide valuable insights into various phenomena such as sleep stages, cognitive processes, and neurological disorders. However, this study explores a new angle, focusing for the first time on the evolving nature of EEG dynamics over time within the context of movement. Twenty-two healthy individuals were measured six times from 2 p.m. to 12 a.m. with intervals of 2 h while performing four right-hand gestures. Analysis of movement-related cortical potentials (MRCPs) revealed a reduction in amplitude for the motor and post-motor potential during later hours of the day. Evaluation in source space displayed an increase in the activity of M1 of the contralateral hemisphere and the SMA of both hemispheres until 8 p.m. followed by a decline until midnight. Furthermore, we investigated how changes over time in MRCP dynamics affect the ability to decode motor information. This was achieved by developing classification schemes to assess performance across different scenarios. The observed variations in classification accuracies over time strongly indicate the need for adaptive decoders. Such adaptive decoders would be instrumental in delivering robust results, essential for the practical application of BCIs during day and nighttime usage.


Subject(s)
Electroencephalography; Gestures; Hand; Humans; Electroencephalography/methods; Male; Female; Hand/physiology; Adult; Young Adult; Movement/physiology; Motor Cortex/physiology; Brain-Computer Interfaces
14.
Philos Trans R Soc Lond B Biol Sci; 379(1911): 20230156, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39155717

ABSTRACT

The gestures we produce serve a variety of functions-they affect our communication, guide our attention and help us think and change the way we think. Gestures can consequently also help us learn, generalize what we learn and retain that knowledge over time. The effects of gesture-based instruction in mathematics have been well studied. However, few of these studies are directly applicable to classroom environments. Here, we review literature that highlights the benefits of producing and observing gestures when teaching and learning mathematics, and we provide suggestions for designing research studies with an eye towards how gestures can feasibly be applied to classroom learning. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.


Subject(s)
Gestures; Learning; Mathematics; Humans; Child; Mathematics/education; Teaching; School Teachers/psychology; Cognition; Schools
15.
Bioengineering (Basel); 11(8), 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39199769

ABSTRACT

Surface electromyography (sEMG) is commonly used as an interface in human-machine interaction systems due to its high signal-to-noise ratio and easy acquisition. It can intuitively reflect the motion intentions of users and is thus widely applied in gesture recognition systems. However, wearable sEMG-based gesture recognition systems are susceptible to changes in environmental noise, electrode placement, and physiological characteristics. This can result in significant performance degradation of the model in inter-session scenarios, giving users a poor experience. For noise from environmental changes and electrode shift from variations in wearing position, numerous studies have proposed data-augmentation methods and highly generalized networks to improve inter-session gesture recognition accuracy. However, few studies have considered the impact of individual physiological states. In this study, we hypothesized that user exercise could change muscle conditions, leading to variations in sEMG features and subsequently affecting the recognition accuracy of the model. To verify this hypothesis, we collected sEMG data from 12 participants performing the same gesture tasks before and after exercise, and then used Linear Discriminant Analysis (LDA) for gesture classification. For the non-exercise group, inter-session accuracy declined by only 2.86%, whereas that of the exercise group decreased by 13.53%. This finding shows that exercise is indeed a critical factor contributing to the decline in inter-session model performance.
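The experimental logic here, train LDA on one session and test it on a drifted one, can be sketched with synthetic features. Below is a minimal two-class Fisher LDA in numpy, with an artificial post-exercise feature shift standing in for the physiological drift; all numbers are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_session(shift=0.0, n=300):
    """Synthetic 2-gesture sEMG feature sets; `shift` mimics post-exercise drift."""
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 3))  # gesture A features
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 3))  # gesture B features
    return np.vstack([x0, x1]), np.r_[np.zeros(n), np.ones(n)]

def fit_lda(x, y):
    """Two-class Fisher LDA: projection direction w and decision threshold."""
    mu0, mu1 = x[y == 0].mean(0), x[y == 1].mean(0)
    sw = np.cov(x[y == 0], rowvar=False) + np.cov(x[y == 1], rowvar=False)
    w = np.linalg.solve(sw, mu1 - mu0)   # within-class scatter^-1 @ mean difference
    thr = w @ (mu0 + mu1) / 2            # midpoint between projected class means
    return w, thr

def accuracy(w, thr, x, y):
    return np.mean((x @ w > thr) == (y == 1))

Xtr, ytr = make_session()                # session 1 (before exercise)
w, thr = fit_lda(Xtr, ytr)
Xsame, ysame = make_session()            # new session, same muscle state
Xpost, ypost = make_session(shift=1.0)   # post-exercise session: features drift

acc_same = accuracy(w, thr, Xsame, ysame)
acc_post = accuracy(w, thr, Xpost, ypost)
print(acc_same > acc_post)  # prints: True (drift degrades inter-session accuracy)
```

The shared shift moves both class clusters relative to the fixed decision threshold, which is exactly why a model calibrated before exercise misclassifies more gestures afterwards, and why the non-exercise group shows only a small decline.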

16.
ACS Sens; 9(8): 4216-4226, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39068608

ABSTRACT

Thermoelectric (TE) hydrogels, which mimic human skin and possess temperature- and strain-sensing capabilities, are well suited for human-machine interaction interfaces and wearable devices. In this study, a TE hydrogel with high toughness and temperature responsiveness was created using the Hofmeister effect and the TE current effect, achieved through the cross-linking of PVA/PAA/carboxymethyl cellulose triple networks. The Hofmeister effect, facilitated by coordination of Na+ and SO42- ions, notably increased the hydrogel's tensile strength (800 kPa). Introduction of Fe2+/Fe3+ redox pairs conferred a high Seebeck coefficient (2.3 mV K-1), thereby enhancing temperature responsiveness. Using this dual-responsive sensor, a feedback mechanism combining deep learning with a robotic hand was successfully demonstrated (with a recognition accuracy of 95.30%), alongside temperature warnings at various levels. In high-temperature, high-risk scenarios, control of the manipulator is expected to replace manual work and improve safety, underscoring the vast potential of TE hydrogel sensors in motion monitoring and human-machine interaction applications.


Subject(s)
Deep Learning; Hydrogels; Temperature; Wearable Electronic Devices; Humans; Hydrogels/chemistry; Acrylic Resins/chemistry; Carboxymethylcellulose Sodium/chemistry; Polyvinyl Alcohol/chemistry; Tensile Strength; Robotics
17.
Int J Biol Macromol; 276(Pt 1): 133802, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38992552

ABSTRACT

Pursuing high-performance conductive hydrogels is still a hot topic in the development of advanced flexible wearable devices. Herein, a tough, self-healing, adhesive double-network (DN) conductive hydrogel (named OSA-(Gelatin/PAM)-Ca, O-(G/P)-Ca) was prepared by bridging gelatin and polyacrylamide networks with a functionalized polysaccharide (oxidized sodium alginate, OSA) through a Schiff base reaction. Thanks to the presence of multiple interactions (Schiff base bonds, hydrogen bonds, and metal coordination) within the network, the prepared hydrogel showed outstanding mechanical properties (tensile strain of 2800% and stress of 630 kPa), high conductivity (0.72 S/m), repeatable adhesion performance, and excellent self-healing ability (83.6%/79.0% of the original tensile strain/stress after self-healing). Moreover, the hydrogel-based sensor exhibited high strain sensitivity (GF = 3.66) and fast response time (<0.5 s), and can be used to monitor a wide range of human physiological signals. Based on its excellent compression sensitivity (GF = 0.41 kPa-1 in the range of 90-120 kPa), a three-dimensional (3D) flexible sensor array was designed to monitor pressure intensity and spatial force distribution. In addition, a gel-based wearable sensor accurately classified and recognized ten types of gestures, achieving an accuracy rate of >96.33% both before and after self-healing under three machine learning models (decision tree, SVM, and KNN). This paper provides a simple method to prepare tough, self-healing conductive hydrogels as flexible multifunctional sensor devices for versatile applications in fields such as healthcare monitoring, human-computer interaction, and artificial intelligence.
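The strain sensitivity quoted above (GF = 3.66) is a gauge factor: the relative resistance change divided by the applied strain. A quick illustration, with made-up resistance values chosen only to reproduce that figure:

```python
# Gauge factor of a resistive strain sensor: GF = (delta_R / R0) / strain.
# The resistance values below are illustrative, not measurements from the paper.
def gauge_factor(r0, r, strain):
    return ((r - r0) / r0) / strain

# A hydrogel whose resistance rises from 1.000 kOhm to 1.366 kOhm at 10%
# strain has GF = 0.366 / 0.10 = 3.66, matching the sensitivity quoted above.
gf = gauge_factor(1.000, 1.366, 0.10)
print(round(gf, 2))  # prints: 3.66
```

The same ratio is computed per unit pressure for the compression sensitivity, which is why that figure carries units (kPa-1) while the strain gauge factor is dimensionless.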


Subject(s)
Acrylic Resins; Alginates; Electric Conductivity; Gelatin; Hydrogels; Wearable Electronic Devices; Alginates/chemistry; Acrylic Resins/chemistry; Hydrogels/chemistry; Gelatin/chemistry; Humans; Oxidation-Reduction; Adhesives/chemistry; Tensile Strength; Biosensing Techniques/methods
18.
Front Neurosci; 18: 1306047, 2024.
Article in English | MEDLINE | ID: mdl-39050666

ABSTRACT

Surface electromyographic (sEMG) signals reflect human motor intention and can be utilized for human-machine interfaces (HMI). Compared to sparse multi-channel (SMC) electrodes, high-density (HD) electrodes are more numerous and compactly spaced, which yields more sEMG information and has the potential to achieve higher performance in myocontrol. However, when the HD electrode grid shifts or is damaged, gesture recognition is affected and recognition accuracy drops. To minimize the impact of electrode shift and damage, we propose an attention deep fast convolutional neural network (attention-DFCNN) model that utilizes the temporal and spatial characteristics of high-density surface electromyography (HD-sEMG) signals. Contrary to previous methods, which are mostly based on sEMG temporal features, the attention-DFCNN model improves robustness and stability by combining spatial and temporal features. The performance of the proposed model was compared with other classical and deep learning methods. We used the dataset provided by the University Medical Center Göttingen. Seven able-bodied subjects and one amputee took part in this work. Each subject executed nine gestures under electrode shift (10 mm) and damage (6 channels). For electrode shifts of 10 mm in four directions (inwards, onwards, upwards, downwards) on seven able-bodied subjects, without any pre-training, the average accuracy of attention-DFCNN (0.942 ± 0.04) is significantly higher than LSDA (0.910 ± 0.04, p < 0.01), CNN (0.920 ± 0.05, p < 0.01), TCN (0.840 ± 0.07, p < 0.01), LSTM (0.864 ± 0.08, p < 0.01), attention-BiLSTM (0.852 ± 0.07, p < 0.01), Transformer (0.903 ± 0.07, p < 0.01), and Swin-Transformer (0.908 ± 0.09, p < 0.01). The proposed attention-DFCNN algorithm, and its way of combining the spatial and temporal features of sEMG signals, can significantly improve the recognition rate when the HD electrode grid shifts or is damaged during wear.

19.
Front Psychol; 15: 1386187, 2024.
Article in English | MEDLINE | ID: mdl-39027047

ABSTRACT

Introduction: Hand gestures and actions-with-objects (hereafter 'actions') are both forms of movement that can promote learning. However, the two have unique affordances, which means that they have the potential to promote learning in different ways. Here we compare how children learn, and importantly retain, information after performing gestures, actions, or a combination of the two during instruction about mathematical equivalence. We also ask whether individual differences in children's understanding of mathematical equivalence (as assessed by spontaneous gesture before instruction) impact the effects of gesture- and action-based instruction. Method: Across two studies, racially and ethnically diverse third- and fourth-grade students (N=142) were given instruction about how to solve mathematical equivalence problems (e.g., 2+9+4=__+4) as part of a pretest-training-posttest design. In Study 1, instruction involved teaching students to produce either actions or gestures. In Study 2, instruction involved teaching students to produce either actions followed by gestures or gestures followed by actions. Across both studies, speech and gesture produced during pretest explanations were coded and analyzed to measure individual differences in pretest understanding. Children completed written posttests immediately after instruction, the following day, and four weeks later, to assess learning, generalization, and retention. Results: In Study 1 we find that, regardless of individual differences in pretest understanding of mathematical equivalence, children learn from both action and gesture, but gesture-based instruction promotes retention better than action-based instruction. In Study 2 we find an influence of individual differences: children who produced relatively few types of problem-solving strategies (as assessed by their pretest gestures and speech) perform better when they receive action training before gesture training than when they receive gesture training first. In contrast, children who expressed many types of strategies, and thus had a more complex understanding of mathematical equivalence prior to instruction, performed equally well with both orders. Discussion: These results demonstrate that action training, followed by gesture, can be a useful stepping-stone in the initial stages of learning mathematical equivalence, and that gesture training can help learners retain what they learn.

20.
Percept Mot Skills; 315125241266645, 2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39033337

ABSTRACT

Coaches often use pointing gestures alongside their speech to reinforce their message and emphasize important concepts during instructional communications, but the impact of simultaneous pointing gestures and speech on learners' recall remains unclear. We used eye-tracking and recalled performance to investigate the impact of a coach's variously timed pointing gestures and speech on two groups of learners' (novices and experts) visual attention and recall of tactical instructions. Participants were 96 basketball players (48 novice and 48 expert) who attempted to recall instructions about the evolution of a basketball game system under two teaching conditions: speech accompanied by gestures and speech followed by gestures. Overall, the results showed that novice players benefited more from instructional speech accompanied by gestures than from speech followed by gestures alone. This was evidenced by their greater visual attention to the diagrams, demonstrated through a higher fixation count and decreased saccadic shifts between the coach and the diagrams. Additionally, they exhibited improved recall and experienced reduced mental effort, despite having the same fixation time on the diagrams and equivalent recall time. Conversely, experts benefited more from instructional speech followed by gestures, indicating an expertise reversal effect. These results suggest that coaches and educators may improve their tactical instructions by timing the pairing of their hand gestures and speech in relation to the learner's level of expertise.
