Predicting Continuous Locomotion Modes via Multidimensional Feature Learning From sEMG.
IEEE J Biomed Health Inform; 28(11): 6629-6640, 2024 Nov.
Article
| MEDLINE
| ID: mdl-39133593
ABSTRACT
Walking-assistive devices require adaptive control methods to ensure smooth transitions between various modes of locomotion. For this purpose, detecting human locomotion modes (e.g., level walking or stair ascent) in advance is crucial for improving the intelligence and transparency of such robotic systems. This study proposes Deep-STF, a unified end-to-end deep learning model designed for integrated feature extraction in the spatial, temporal, and frequency dimensions from surface electromyography (sEMG) signals. The model enables accurate and robust continuous prediction of nine locomotion modes and 15 transitions at prediction time intervals ranging from 100 to 500 ms. Experimental results demonstrated Deep-STF's state-of-the-art prediction performance across diverse locomotion modes and transitions, relying solely on sEMG data. When forecasting 100 ms ahead, Deep-STF achieved an average prediction accuracy of 96.60%, outperforming seven benchmark models. Even with an extended 500 ms prediction horizon, accuracy decreased only marginally, to 93.22%. The average stable prediction times for detecting upcoming transitions spanned 31.47 to 371.58 ms across the 100-500 ms prediction horizons. Although the prediction accuracy of the trained Deep-STF initially dropped to 71.12% when tested on four new terrains, it recovered to a satisfactory 92.51% after fine-tuning with just five trials and improved further to 96.27% with 15 calibration trials. These results demonstrate the strong prediction ability and adaptability of Deep-STF, showing great potential for integration with walking-assistive devices and enabling smoother, more intuitive user interactions.
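The abstract does not detail Deep-STF's internal architecture. As a purely illustrative aid, the sketch below shows one way spatial, temporal, and frequency features might be jointly extracted from windowed sEMG in PyTorch. Every name, layer choice, and dimension here (the STFSketch class, a 24-class head covering 9 modes + 15 transitions, an assumed 8-channel setup sampled at 2 kHz with 100 ms windows) is an assumption for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class STFSketch(nn.Module):
    """Hypothetical sketch of joint spatial/temporal/frequency feature
    extraction from windowed sEMG; NOT the authors' Deep-STF."""
    def __init__(self, n_channels=8, win_len=200, n_classes=24):
        super().__init__()
        # Spatial branch: 1x1 convolution mixes information across electrodes.
        self.spatial = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=1), nn.ReLU())
        # Temporal branch: convolve along the time axis, then pool the window.
        self.temporal = nn.Sequential(
            nn.Conv1d(32, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        # Frequency branch: embed the magnitude spectrum of each channel.
        n_freq = win_len // 2 + 1                 # rfft output length
        self.freq = nn.Sequential(
            nn.Flatten(), nn.Linear(n_channels * n_freq, 32), nn.ReLU())
        # Classifier over concatenated temporal + frequency features.
        self.head = nn.Linear(32 + 32, n_classes)

    def forward(self, x):                         # x: (batch, channels, time)
        spec = torch.fft.rfft(x, dim=-1).abs()    # (batch, ch, time//2 + 1)
        t_feat = self.temporal(self.spatial(x)).squeeze(-1)  # (batch, 32)
        f_feat = self.freq(spec)                              # (batch, 32)
        return self.head(torch.cat([t_feat, f_feat], dim=-1))

# Example: 100 ms of 8-channel sEMG at an assumed 2 kHz -> 200 samples.
model = STFSketch()
logits = model(torch.randn(4, 8, 200))            # (4, 24) class scores
```

In the same spirit, the terrain-adaptation result in the abstract (92.51% after 5 calibration trials) would correspond to briefly continuing training of such a model on a handful of labeled trials from the new terrain, i.e., standard fine-tuning rather than training from scratch.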
Full text: 1
Collection: 01-international
Database: MEDLINE
Main subject: Signal Processing, Computer-Assisted / Electromyography / Deep Learning
Limits: Adult / Female / Humans / Male
Language: En
Journal: IEEE J Biomed Health Inform
Year: 2024
Document type: Article
Country of publication: United States