Results 1 - 19 of 19
1.
J Neural Eng ; 21(3)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38754410

ABSTRACT

Objective. Upper limb loss can profoundly impact an individual's quality of life, posing challenges to both physical capabilities and emotional well-being. To restore limb function by decoding electromyography (EMG) signals, in this paper we present a novel deep prototype learning method for accurate and generalizable EMG-based gesture classification. Existing methods suffer from limited generalization across subjects due to the diverse nature of individual muscle responses, impeding seamless applicability in broader populations. Approach. By leveraging deep prototype learning, we introduce a method that goes beyond direct output prediction. Instead, it matches new EMG inputs to a set of learned prototypes and predicts the corresponding labels. Main results. This methodology significantly enhances the model's classification performance and generalizability by discriminating subtle differences between gestures, making it more reliable and precise in real-world applications. Our experiments on four Ninapro datasets suggest that our deep prototype learning classifier outperforms state-of-the-art methods in both intra-subject and inter-subject gesture classification accuracy. Significance. The results validate the effectiveness of the proposed method and pave the way for future advances in EMG gesture classification for upper limb prosthetics.
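The matching step described in the abstract can be illustrated with a minimal nearest-prototype sketch. This is purely illustrative: the prototypes here are simple class means over synthetic feature vectors, not the learned deep prototypes of the paper, and all data and names are hypothetical.

```python
import numpy as np

# Toy nearest-prototype classifier: each class is represented by one
# prototype vector; a new sample gets the label of the closest prototype.
def fit_prototypes(X, y):
    # One prototype per class: here simply the class mean of the features.
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(prototypes, x):
    # Match the input against every prototype and return the nearest label.
    return min(prototypes, key=lambda lbl: np.linalg.norm(x - prototypes[lbl]))

# Synthetic "EMG feature" data for two gestures.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 4)), rng.normal(1.0, 0.1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)

protos = fit_prototypes(X, y)
print(predict(protos, np.full(4, 0.05)))  # sample close to the class-0 prototype
```

In the deep variant, the distance would be computed in a learned embedding space rather than on raw features, but the prediction rule is the same.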


Subject(s)
Electromyography, Gestures, Semantics, Humans, Electromyography/methods, Male, Female, Adult, Deep Learning, Young Adult
2.
Article in English | MEDLINE | ID: mdl-38317414

ABSTRACT

Electromyography (EMG) signals are primarily used to control prosthetic hands. Classifying hand gestures efficiently from EMG signals presents numerous challenges. Beyond overcoming these challenges, a successful combination of feature extraction and classification approaches improves classification accuracy. In the current work, convolutional neural network (CNN) features are used to reduce the redundancy associated with time- and frequency-domain features and improve classification accuracy. Features extracted from the EMG signal by a CNN are fed to k-nearest neighbor (KNN) classifiers with different numbers of neighbors (1NN, 3NN, 5NN, and 7NN). This yields an ensemble of classifiers that are combined using hard voting. On the benchmark Ninapro DB4 and CapgMyo databases, the proposed framework obtained 91.3% classification accuracy on CapgMyo and 89.5% on Ninapro DB4.
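The ensemble described above can be sketched with scikit-learn's hard-voting classifier. The features below are random stand-ins for the CNN features; shapes and dataset sizes are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for CNN-extracted feature vectors of two gesture classes.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (30, 8)), rng.normal(3, 1, (30, 8))])
y = np.array([0] * 30 + [1] * 30)

# Hard-voting ensemble over KNN classifiers with k = 1, 3, 5, 7,
# mirroring the combination described in the abstract.
ensemble = VotingClassifier(
    estimators=[(f"{k}nn", KNeighborsClassifier(n_neighbors=k)) for k in (1, 3, 5, 7)],
    voting="hard",  # majority vote over the four KNN predictions
)
ensemble.fit(X, y)
print(ensemble.score(X, y))
```

Hard voting simply takes the majority label across the four KNN predictions, so no probability calibration of the individual classifiers is needed.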

3.
Front Neurorobot ; 17: 1264802, 2023.
Article in English | MEDLINE | ID: mdl-38023447

ABSTRACT

Introduction: Muscular activation sequences have been shown to be suitable time-domain features for the classification of motion gestures. However, their clinical application in myoelectric prosthesis control has never been investigated. The aim of this paper is to evaluate the robustness of these features, extracted from the forearm EMG signal in the transient state, for classifying common hand tasks. Methods: The signals associated with four hand gestures and the rest condition were acquired from ten healthy people and two persons with trans-radial amputation. A feature extraction algorithm encoded the EMG signals into muscular activation sequences, which were used to train four commonly used classifiers, namely Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Non-linear Logistic Regression (NLR), and Artificial Neural Network (ANN). Offline performance was assessed with the entire sample of recruited people; online performance was assessed with the amputee subjects. Moreover, the proposed method was compared with approaches based on the signal envelope in the transient state and in the steady state. Results: The highest performance was obtained with the NLR classifier. Using the sequences, offline classification accuracy was higher than 93% for healthy and amputee subjects and always higher than the approach based on the signal envelope in the transient state. Compared with the steady state, the performance obtained with the proposed method is slightly lower (<4%), but classification occurred at least 200 ms earlier. In the online application, the motion completion rate reached up to 85% of the total classification attempts, with a motion selection time that never exceeded 218 ms.
Discussion: Muscular activation sequences are a suitable alternative to the time-domain features commonly used in classification problems restricted to the EMG transient state, and could potentially be exploited in control strategies for myoelectric prosthetic hands.
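As an assumed, much-simplified illustration of the encoding idea (the paper's actual algorithm is not reproduced here), one can threshold each channel's rectified EMG envelope and order channels by their activation onset, which yields a sequence-style feature:

```python
import numpy as np

# Hypothetical encoding: threshold each channel's smoothed, rectified EMG
# envelope and record the order in which channels switch on.
def activation_sequence(envelopes, threshold=0.5):
    # envelopes: (n_samples, n_channels) array of envelope values.
    onsets = []
    for ch in range(envelopes.shape[1]):
        above = np.flatnonzero(envelopes[:, ch] > threshold)
        onsets.append(above[0] if above.size else np.inf)  # inf = never active
    return np.argsort(onsets)  # channels ordered by activation onset

t = np.arange(100)
env = np.zeros((100, 3))
env[20:, 0] = 1.0  # channel 0 activates second
env[50:, 1] = 1.0  # channel 1 activates last
env[10:, 2] = 1.0  # channel 2 activates first
print(activation_sequence(env))  # channels sorted by onset time
```

Such a sequence is available as soon as the transient has been observed, which is consistent with the abstract's point that classification can occur earlier than with steady-state features.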

4.
Phys Eng Sci Med ; 46(4): 1427-1445, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37814077

ABSTRACT

The increasing prevalence of behavioral disorders in children is of growing concern within the medical community. Recognising the significance of early identification and intervention for atypical behaviors, there is a consensus on their pivotal role in improving outcomes. Due to inadequate facilities and a shortage of medical professionals with specialized expertise, traditional diagnostic methods have been unable to effectively address the rising incidence of behavioral disorders. Hence, there is a need to develop automated approaches for the diagnosis of behavioral disorders in children that overcome the challenges of traditional methods. The purpose of this study is to develop an automated model capable of analyzing videos to differentiate between typical and atypical repetitive head movements in children. To address problems resulting from the limited availability of child datasets, various learning methods are employed. In this work, we present a fusion of transformer networks and Non-deterministic Finite Automata (NFA) techniques, which classifies a child's repetitive head movements as typical or atypical based on an analysis of gender, age, and type of repetitive head movement, along with the count, duration, and frequency of each movement. Experiments were carried out with different transfer learning methods to enhance the performance of the model. The experimental results on five datasets (the NIR face dataset, Bosphorus 3D face dataset, ASD dataset, SSBD dataset, and Head Movements in the Wild dataset) indicate that our proposed model outperforms many state-of-the-art frameworks in distinguishing typical and atypical repetitive head movements in children.


Subject(s)
Head Movements, Mental Disorders, Child, Humans, Stereotyped Behavior, Risk Assessment, Endoscopy
5.
Polymers (Basel) ; 15(18)2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37765548

ABSTRACT

In wearable bioelectronics, various studies have focused on enhancing prosthetic control accuracy by improving the quality of physiological signals. Fabricating conductive composites by adding metal fillers is one way to achieve stretchability, conductivity, and biocompatibility. However, it is difficult to measure stable biological signals with these soft electronics during physical activity because the devices slip, resulting in inaccurate placement at the target part of the body. To address these limitations, it is necessary to reduce the stiffness of the conductive materials and enhance the adhesion between the device and the skin. In this study, we measured electromyography (EMG) signals by applying a three-layered hydrogel structure composed of chitosan-alginate-chitosan (CAC) to a stretchable electrode fabricated from a composite of styrene-ethylene-butylene-styrene and eutectic gallium-indium. We observed stable adhesion of the CAC hydrogel to the skin, which kept the electrode attached during subject movement. Finally, we fabricated a multichannel array of CAC-coated composite electrodes (CACCE) to demonstrate accurate classification of EMG signals based on hand movements and channel placement, followed by movement of the robot arm.

6.
Bioengineering (Basel) ; 10(7)2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37508798

ABSTRACT

Stroke is a leading cause of disability and death, with a prevalence of 200 million cases worldwide. Motor disability is present in 80% of patients. In this context, physical rehabilitation plays a fundamental role in the gradual recovery of mobility. In this work, we designed a robotic hand exoskeleton to support the rehabilitation of patients after a stroke episode. The system acquires electromyographic (EMG) signals from the forearm and automatically estimates the movement intention for five gestures. Subsequently, we developed predictive adaptive control of the exoskeleton to compensate for three different levels of muscle fatigue during rehabilitation therapy exercises. The proposed system could assist patients' rehabilitation therapy by providing repetitive, intense, and adaptive assistance.

7.
ACS Appl Mater Interfaces ; 15(15): 19374-19383, 2023 Apr 19.
Article in English | MEDLINE | ID: mdl-37036803

ABSTRACT

The muscles of the human forearm are among the most densely packed and irregularly distributed in the human body, and a number of specific forearm muscles control hand motions. Acquiring high-fidelity sEMG signals from forearm muscles is vital for human-machine interface (HMI) applications based on gesture recognition. Currently, the most commonly used commercial electrodes for detecting sEMG and other electrophysiological signals are rigid, lack stretchability, and cannot maintain conformal contact with the skin during deformation; the adhesive hydrogel they use to reduce skin-electrode impedance may shrink and cause skin inflammation after long-term use. Therefore, developing stretchable, biocompatible elastic electrodes for sEMG recording is essential for HMI development. Here, we fabricated a nanocomposite hybrid on-skin electrode by infiltrating silver nanowires (AgNWs), a conductive one-dimensional (1D) metallic nanomaterial, into polydimethylsiloxane (PDMS), a silicone elastomer with a Young's modulus similar to that of human skin. The AgNW on-skin electrode has a thickness of 300 µm and a low sheet resistance of 0.481 ± 0.014 Ω/sq; it can withstand mechanical strain of up to 54% and maintains a sheet resistance below 1 Ω/sq after 1000 dynamic strain cycles. The electrode records sEMG signals from forearm muscles with a high signal-to-noise ratio (SNR) and reflects various muscle force levels. In addition, four typical hand gestures were recognized with the multichannel AgNW on-skin electrodes at an accuracy of 92.3% using a machine learning method. The AgNW on-skin electrode proposed in this study holds great promise for HMI applications that employ sEMG signals as control signals.


Asunto(s)
Gestos , Nanocables , Humanos , Electromiografía , Plata , Músculo Esquelético/fisiología , Electrodos , Aprendizaje Automático
8.
J Neural Eng ; 20(2)2023 04 03.
Article in English | MEDLINE | ID: mdl-36917858

ABSTRACT

Objective. Prosthetic systems are used to improve the quality of life of post-amputation patients, and research on surface electromyography (sEMG)-based gesture classification has yielded rich results. Nonetheless, current gesture classification algorithms focus on a single subject, and cross-individual classification studies that overcome physiological factors are relatively scarce, contributing to a high abandonment rate for clinical prosthetic systems. The purpose of this research is to propose an algorithm that significantly improves the accuracy of gesture classification across individuals. Approach. Eight healthy adults were recruited, and sEMG data for seven daily gestures were recorded. A modified fuzzy granularized logistic regression (FG_LogR) algorithm is proposed for cross-individual gesture classification. Main results. The average classification accuracies of the four features based on the FG_LogR algorithm are 79.7%, 83.6%, 79.0%, and 86.1%, while those based on plain logistic regression are 76.2%, 79.5%, 71.1%, and 81.3%, an overall improvement ranging from 3.5% to 7.9%. FG_LogR also outperforms five other classic algorithms, with average prediction accuracy increased by more than 5%. Conclusion. The proposed FG_LogR algorithm improves cross-individual gesture recognition accuracy by fuzzifying and granulating the features, and has potential for clinical application. Significance. The proposed algorithm is expected to be combined with other feature optimization methods to achieve more precise and intelligent prosthetic control and to address poor gesture recognition and the high abandonment rate of prosthetic systems.


Subject(s)
Gestures, Quality of Life, Adult, Humans, Electromyography/methods, Logistic Models, Algorithms, Hand
9.
Int J Mach Learn Cybern ; 14(4): 1119-1131, 2023.
Article in English | MEDLINE | ID: mdl-36339898

ABSTRACT

Bio-signal-based hand motion recognition plays a critical role in human-machine interaction tasks such as the natural control of multifunctional prostheses. Although many classification technologies have been applied to improve motion recognition accuracy, achieving acceptable performance for multi-modality input remains a challenge. This study proposes a multi-modality deep forest (MMDF) framework to identify hand motions, in which surface electromyographic (sEMG) and acceleration (ACC) signals are fused at the input level. The proposed MMDF framework consists of three main stages: sEMG and ACC feature extraction, feature dimension reduction, and a cascade-structure deep forest for classification. The public database Ninapro DB7 is used to evaluate the framework, and the experimental results show that it achieves significantly higher accuracy than its competitors. Our results also show that MMDF outperforms other traditional classifiers given single-modality sEMG input alone. In sum, this study verifies that ACC signals are an excellent supplement to sEMG, and that MMDF is a plausible solution for fusing multi-modality bio-signals for human motion recognition.
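Input-level (early) fusion amounts to concatenating the per-window sEMG and ACC feature vectors before classification. A hedged sketch follows, using a plain random forest in place of the paper's cascaded deep forest; all data, dimensions, and class counts are synthetic assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 40
# Synthetic per-window features: 10 sEMG features and 6 ACC features per sample,
# for two motion classes.
emg = np.vstack([rng.normal(0, 1, (n, 10)), rng.normal(1.5, 1, (n, 10))])
acc = np.vstack([rng.normal(0, 1, (n, 6)), rng.normal(1.5, 1, (n, 6))])
y = np.array([0] * n + [1] * n)

# Input-level fusion: concatenate the two modalities into one feature vector.
X_fused = np.hstack([emg, acc])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_fused, y)
print(X_fused.shape, clf.score(X_fused, y))
```

The cascade deep forest in the paper stacks several forest layers, each consuming the previous layer's class-probability vectors alongside the fused input; the fusion step itself is the same concatenation shown here.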

10.
Neural Comput Appl ; 35(11): 8143-8156, 2023.
Article in English | MEDLINE | ID: mdl-36532882

ABSTRACT

There is an urgent need, accelerated by the COVID-19 pandemic, for methods that allow clinicians and neuroscientists to remotely evaluate hand movements. This would help detect and monitor degenerative brain disorders that are particularly prevalent in older adults. With the wide accessibility of computer cameras, a vision-based real-time hand gesture detection method would facilitate online assessments in home and clinical settings. However, motion blur is one of the most challenging problems in collecting data from fast-moving hands. The objective of this study was to develop a computer-vision-based method that accurately detects older adults' hand gestures using video data collected in real-life settings. We invited adults over 50 years old to complete validated hand movement tests (fast finger tapping and hand opening-closing) at home or in clinic. Data were collected without researcher supervision via a website programme using standard laptop and desktop cameras. We processed and labelled images, split the data into training, validation and testing sets, and then analysed how well different network structures detected hand gestures. We recruited 1,900 adults (age range 50-90 years) as part of the TAS Test project and developed UTAS7k, a new dataset of 7071 hand gesture images, split 4:1 into clear and motion-blurred images. Our new network, RGRNet, achieved 0.782 mean average precision (mAP) on clear images, outperforming the state-of-the-art network structure (YOLOV5-P6, mAP 0.776), and mAP 0.771 on blurred images. A new robust real-time automated network that detects static gestures from a single camera, RGRNet, and a new database comprising the largest range of individual hands, UTAS7k, both show strong potential for medical and research applications. Supplementary Information: The online version contains supplementary material available at 10.1007/s00521-022-08090-8.

11.
Front Neurosci ; 16: 849991, 2022.
Article in English | MEDLINE | ID: mdl-35720725

ABSTRACT

Electromyography (EMG) data have been extensively adopted as an intuitive interface for instructing human-robot collaboration. A major challenge for real-time detection of human grasp intent is the identification of dynamic EMG from hand movements. Previous studies predominantly implemented steady-state EMG classification with a small number of grasp patterns in dynamic situations, which is insufficient to generate differentiated control with respect to the variation of muscular activity in practice. To better detect dynamic movements, more EMG variability could be integrated into the model. However, only limited research has been conducted on the detection of dynamic grasp motions, and most existing assessments of non-static EMG classification either require supervised ground-truth timestamps of the movement status or contain only limited kinematic variation. In this study, we propose a framework for classifying dynamic EMG signals into gestures and examine the impact of different movement phases, using an unsupervised method to segment and label the action transitions. We collected and utilized data from large gesture vocabularies with multiple dynamic actions to encode the transitions from one grasp intent to another based on natural sequences of human grasp movements. A classifier for identifying the gesture label was then constructed from the dynamic EMG signal, with no supervised annotation of kinematic movements required. Finally, we evaluated the performance of several training strategies using EMG data from different movement phases and explored the information revealed by each phase. All experiments were evaluated in a real-time style, with performance transitions presented over time.

12.
J Imaging ; 8(6)2022 May 26.
Article in English | MEDLINE | ID: mdl-35735952

ABSTRACT

Researchers have recently focused their attention on vision-based hand gesture recognition. However, due to several constraints, achieving an effective vision-driven hand gesture recognition system in real time has remained a challenge. This paper aims to uncover the limitations faced in the image acquisition, image segmentation and tracking, feature extraction, and gesture classification stages of vision-driven hand gesture recognition under various camera orientations. It reviews research on vision-based hand gesture recognition systems from 2012 to 2022, with the goal of identifying areas that are improving and those that need further work. Using specific keywords, we found 108 articles in well-known online databases. We assemble the most notable research works related to gesture recognition, propose categories and subcategories for gesture-recognition research to create a valuable resource in this domain, and summarize and analyze the methodologies in tabular form. After comparing similar methodologies in the gesture recognition field, we draw conclusions based on our findings. We also examined how well vision-based systems recognized hand gestures in terms of recognition accuracy: reported accuracy varies widely, from 68% to 97%, with an average of 86.6%. The limitations considered comprise multiple interpretations of gestures and complex, non-rigid hand characteristics. In comparison to current research, this paper is unique in that it discusses all types of gesture recognition techniques.

13.
Front Robot AI ; 9: 840335, 2022.
Article in English | MEDLINE | ID: mdl-35516789

ABSTRACT

Social touch is essential to everyday interactions, but current socially assistive robots have limited touch-perception capabilities. Rather than build entirely new robotic systems, we propose to augment existing rigid-bodied robots with an external touch-perception system. This practical approach can enable researchers and caregivers to continue to use robotic technology they have already purchased and learned about, but with a myriad of new social-touch interactions possible. This paper presents a low-cost, easy-to-build, soft tactile-perception system that we created for the NAO robot, as well as participants' feedback on touching this system. We installed four of our fabric-and-foam-based resistive sensors on the curved surfaces of a NAO's left arm, including its hand, lower arm, upper arm, and shoulder. Fifteen adults then performed five types of affective touch-communication gestures (hitting, poking, squeezing, stroking, and tickling) at two force intensities (gentle and energetic) on the four sensor locations; we share this dataset of four time-varying resistances, our sensor patterns, and a characterization of the sensors' physical performance. After training, a gesture-classification algorithm based on a random forest identified the correct combined touch gesture and force intensity on windows of held-out test data with an average accuracy of 74.1%, which is more than eight times better than chance. Participants rated the sensor-equipped arm as pleasant to touch and liked the robot's presence significantly more after touch interactions. Our promising results show that this type of tactile-perception system can detect necessary social-touch communication cues from users, can be tailored to a variety of robot body parts, and can provide HRI researchers with the tools needed to implement social touch in their own systems.

14.
Comput Methods Programs Biomed ; 219: 106753, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35338885

ABSTRACT

BACKGROUND: Growing interest in health and lifestyle has led to wider adoption of wearable activity-tracking devices among the general population. Wearable devices such as smart wristbands integrate inertial units, including accelerometers and gyroscopes, which can be utilised to perform automatic classification of hand gestures. This technology could also find an important application in automatic medication adherence monitoring. Accordingly, this study compares the performance of several Machine-Learning (ML) and Deep-Learning (DL) approaches for the automatic identification of hand gestures, with a specific focus on the drinking gesture, commonly associated with the oral intake of a pill-packed medication. METHODS: A method to automatically recognize hand gestures in daily living is proposed. The method relies on a commercially available wristband sensor (MetaMotionR, MbientLab Inc.) integrating a tri-axial accelerometer and gyroscope. Both ML and DL algorithms were evaluated for multi-gesture (drinking, eating, pouring water, opening a bottle, typing, answering a phone, combing hair, and cutting) and binary (drinking versus other gestures) classification from wristband sensor signals. Twenty-two participants were involved in the experimental analysis, each performing a 10 min acquisition in a laboratory setting. Leave-one-subject-out cross-validation was performed for robust performance assessment. RESULTS: The highest performance was achieved using a convolutional neural network with long short-term memory (CNN-LSTM), with median f1-scores of 90.5 [first quartile: 84.5; third quartile: 92.5]% and 92.5 [81.5; 98.0]% for multi-gesture and binary classification, respectively.
CONCLUSIONS: Experimental results showed that hand gesture classification with ML/DL from wrist accelerometer and gyroscope signals can be performed with reasonable accuracy in laboratory settings, paving the way for a new generation of medical devices for monitoring medication adherence.
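Leave-one-subject-out cross-validation keeps every window from one participant together in the test fold, so the score reflects generalization to unseen users. A sketch with scikit-learn's LeaveOneGroupOut follows; the data are synthetic, and a logistic-regression stand-in replaces the CNN-LSTM of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(3)
n_subjects, per_subj = 5, 30
X, y, groups = [], [], []
for s in range(n_subjects):
    # Two gestures per subject, with a small subject-specific feature offset.
    offset = rng.normal(0, 0.3, 4)
    X.append(rng.normal(0, 0.5, (per_subj, 4)) + offset)  # gesture 0 windows
    X.append(rng.normal(2, 0.5, (per_subj, 4)) + offset)  # gesture 1 windows
    y += [0] * per_subj + [1] * per_subj
    groups += [s] * (2 * per_subj)
X = np.vstack(X)

# One fold per subject: train on 4 subjects, test on the held-out one.
scores = cross_val_score(LogisticRegression(), X, np.array(y),
                         groups=groups, cv=LeaveOneGroupOut())
print(len(scores), scores.mean())
```

Because each fold excludes all of one subject's data, subject-specific offsets cannot leak from training into testing, unlike a random shuffle split.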


Subject(s)
Gestures, Wearable Electronic Devices, Algorithms, Hand, Humans, Machine Learning, Neural Networks, Computer
15.
Front Neurosci ; 14: 637, 2020.
Article in English | MEDLINE | ID: mdl-32903824

ABSTRACT

Hand gestures are a form of non-verbal communication used in conjunction with speech. With the increasing use of technology, hand-gesture recognition is considered an important aspect of Human-Machine Interaction (HMI), allowing the machine to capture and interpret the user's intent and respond accordingly. The ability to discriminate between human gestures can help in several applications, such as assisted living, healthcare, neuro-rehabilitation, and sports. Recently, multi-sensor data fusion mechanisms have been investigated to improve discrimination accuracy. In this paper, we present a sensor fusion framework that integrates complementary systems: the electromyography (EMG) signal from muscles and visual information. This multi-sensor approach, while improving accuracy and robustness, introduces the disadvantage of high computational cost, which grows exponentially with the number of sensors and measurements. Furthermore, this large amount of data to process can affect classification latency, which can be crucial in real-world scenarios such as prosthetic control. Neuromorphic technologies can be deployed to overcome these limitations, since they allow real-time parallel processing at low power consumption. Here we present a fully neuromorphic sensor fusion approach for hand-gesture recognition comprising an event-based vision sensor and three different neuromorphic processors. In particular, we used the event-based camera, called DVS, and two neuromorphic platforms, Loihi and ODIN + MorphIC. The EMG signals were recorded using traditional electrodes and then converted into spikes to be fed into the chips. We collected a dataset of five sign-language gestures with synchronized visual and electromyography signals, and compared the fully neuromorphic approach to a baseline implemented using traditional machine learning approaches on a portable GPU system.
According to each chip's constraints, we designed specific spiking neural networks (SNNs) for sensor fusion that achieved classification accuracy comparable to the software baseline. These neuromorphic alternatives increase inference time by 20-40% with respect to the GPU system but have a significantly smaller energy-delay product (EDP), making them between 30× and 600× more efficient. The proposed work represents a new benchmark that moves neuromorphic computing toward a real-world scenario.
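The efficiency claim combines energy and latency into a single figure, the energy-delay product EDP = energy × inference time. With hypothetical numbers (not taken from the paper), the trade-off of a slower but far less power-hungry chip reads:

```python
# Hypothetical figures: the SNN is ~30% slower but draws far less energy.
gpu_time, gpu_energy = 10e-3, 500e-3   # 10 ms, 500 mJ per inference (assumed)
snn_time = gpu_time * 1.3              # 30% longer inference time
snn_energy = gpu_energy / 400          # orders-of-magnitude lower energy (assumed)

edp_gpu = gpu_energy * gpu_time        # energy-delay product, J*s
edp_snn = snn_energy * snn_time
print(round(edp_gpu / edp_snn))        # EDP advantage despite slower inference
```

Because EDP multiplies energy by delay, a modest latency penalty is easily outweighed by a large energy saving, which is how a 20-40% slower system can still be hundreds of times more efficient.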

16.
Sensors (Basel) ; 19(8)2019 Apr 25.
Article in English | MEDLINE | ID: mdl-31027292

ABSTRACT

Conventional pattern-recognition algorithms for surface electromyography (sEMG)-based hand-gesture classification have difficulty capturing the complexity and variability of sEMG. The deep structures of deep learning enable the method to learn high-level features of the data, improving both the accuracy and robustness of classification. However, the features learned through deep learning are incomprehensible, and this issue has precluded the use of deep learning in clinical applications where model comprehension is required. In this paper, a generative flow model (GFM), a recently flourishing branch of deep learning, is used with a SoftMax classifier for hand-gesture classification. The proposed approach achieves 63.86 ± 5.12% accuracy in classifying 53 different hand gestures from the NinaPro database 5. The distribution of all 53 hand gestures is modelled by the GFM, and each dimension of the feature learned by the GFM is comprehensible using the reverse flow of the GFM. Moreover, the feature appears to be related to muscle synergy to some extent.

17.
Article in English | MEDLINE | ID: mdl-31921794

ABSTRACT

Background: Various human-machine interfaces (HMIs) are used to control prostheses such as robotic hands. One promising HMI is Force Myography (FMG). Previous research has shown the potential of high-density FMG (HD-FMG), which can lead to higher accuracy of prosthesis control. Motivation: The more sensors used in an FMG-controlled system, the more complicated and costlier the system becomes. This study proposes a design method that can produce powered prostheses with performance comparable to that of HD-FMG-controlled systems using fewer sensors. An HD-FMG apparatus would be used to collect information from the user only in the design phase. Channel selection would then be applied to the collected data to determine the number and location of sensors that are vital to the performance of the device. This study assessed the use of multiple channel selection (CS) methods for this purpose. Methods: In this case study, three datasets were used, collected from force-sensitive resistors embedded in the inner socket of a subject with transradial amputation. Sensor data were collected as the subject carried out five repetitions of six gestures. The collected data were then used to assess five CS methods: sequential forward selection (SFS) with two different stopping criteria, minimum redundancy-maximum relevance (mRMR), a genetic algorithm (GA), and Boruta. Results: Three of the five methods (mRMR, GA, and Boruta) were able to decrease channel numbers significantly while maintaining classification accuracy in all datasets. None of them outperformed the other two across all datasets; however, GA produced the smallest channel subset in all three. The three selected methods were also compared in terms of stability, i.e., the consistency of the channel subset chosen by the method as new training data were introduced or some training data were removed (Chandrashekar and Sahin, 2014).
Boruta and mRMR were more stable than GA when applied to the datasets of this study. Conclusion: This study shows the feasibility of the proposed design method, which can produce prosthetic systems that are simpler than HD-FMG systems yet perform comparably.
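Sequential forward selection, one of the CS methods compared above, greedily adds one channel at a time, keeping the channel that most improves cross-validated accuracy. A sketch with scikit-learn's SequentialFeatureSelector on synthetic data follows; the channel count, classifier, and the rule generating the labels are all assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 120
# 16 synthetic "channels", of which only channels 0 and 1 carry class information.
X = rng.normal(0, 1, (n, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Greedy forward selection: add the channel that most improves CV accuracy,
# stopping once two channels have been selected.
sfs = SequentialFeatureSelector(LogisticRegression(), n_features_to_select=2,
                                direction="forward", cv=3)
sfs.fit(X, y)
print(sorted(np.flatnonzero(sfs.get_support())))
```

In the study's setting, the selected indices would correspond to the few socket locations worth instrumenting with physical sensors in the final low-density design.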

18.
Proc Inst Mech Eng H ; 232(6): 588-596, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29683373

ABSTRACT

The characterization and analysis of hand gestures are challenging tasks with many applications in human-computer interaction, machine vision and control, and medical gesture recognition. In particular, several researchers have tried to develop objective evaluation methods for surgical skill in medical training, so the adequate selection and extraction of similarities and differences between experts and novices has become an important challenge in this area. Some of this work has shown that human movements performed during surgery can be described as a sequence of constant affine-speed trajectories. In this article, we show that affine speed can be used to segment medical hand movements, and we analyze the mechanical energy computed within each segment to compare surgical skills. The position and orientation of the instrument end effectors are determined by six video photographic cameras, and two laparoscopic instruments simultaneously measure the forces and torques applied to the tool. Finally, we report the results of these experiments and present a correlation between the mechanical energy dissipated during a procedure and surgical skill.
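For a planar trajectory (x(t), y(t)), the affine speed is v = (x'y'' - y'x'')^(1/3); segments obeying the two-thirds power law have constant affine speed, which is what makes it usable as a segmentation cue. A quick numerical check on an ellipse, whose affine speed is constant at (ab)^(1/3), using finite differences (sampling density is an assumption):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 2000)
x, y = 2 * np.cos(t), np.sin(t)  # ellipse with a = 2, b = 1

# Finite-difference derivatives and the affine speed (x'y'' - y'x'')**(1/3).
dx, dy = np.gradient(x, t), np.gradient(y, t)
ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
affine_speed = np.cbrt(dx * ddy - dy * ddx)

# Interior samples should stay close to the analytic value (2*1)**(1/3).
print(np.allclose(affine_speed[5:-5], 2 ** (1 / 3), atol=1e-3))
```

On recorded hand trajectories, one would instead look for the points where the numerically estimated affine speed changes level, and cut the movement into segments there.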


Subject(s)
Gestures, Pattern Recognition, Automated, Humans, Laparoscopy, Mechanical Phenomena, Models, Theoretical, Software, Torque
19.
Med Image Anal ; 17(7): 732-45, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23706754

ABSTRACT

Much of the existing work on automatic classification of gestures and skill in robotic surgery is based on dynamic cues (e.g., time to completion, speed, forces, torque) or kinematic data (e.g., robot trajectories and velocities). While videos could be equally or more discriminative (e.g., videos contain semantic information not present in kinematic data), they are typically not used because of the difficulties associated with automatic video interpretation. In this paper, we propose several methods for automatic surgical gesture classification from video data. We assume that the video of a surgical task (e.g., suturing) has been segmented into video clips corresponding to a single gesture (e.g., grabbing the needle, passing the needle) and propose three methods to classify the gesture of each video clip. In the first one, we model each video clip as the output of a linear dynamical system (LDS) and use metrics in the space of LDSs to classify new video clips. In the second one, we use spatio-temporal features extracted from each video clip to learn a dictionary of spatio-temporal words, and use a bag-of-features (BoF) approach to classify new video clips. In the third one, we use multiple kernel learning (MKL) to combine the LDS and BoF approaches. Since the LDS approach is also applicable to kinematic data, we also use MKL to combine both types of data in order to exploit their complementarity. Our experiments on a typical surgical training setup show that methods based on video data perform equally well, if not better, than state-of-the-art approaches based on kinematic data. In turn, the combination of both kinematic and video data outperforms any other algorithm based on one type of data alone.


Subject(s)
Gestures, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Photography/methods, Robotics/methods, Surgery, Computer-Assisted/methods, Video Recording/methods, Algorithms, Image Enhancement/methods, Imaging, Three-Dimensional/methods, Motion, Reproducibility of Results, Sensitivity and Specificity, Suture Techniques