Robust Hand Gesture Recognition Using a Deformable Dual-Stream Fusion Network Based on CNN-TCN for FMCW Radar.
Zhu, Meiyi; Zhang, Chaoyi; Wang, Jianquan; Sun, Lei; Fu, Meixia.
Affiliation
  • Zhu M; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China.
  • Zhang C; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China.
  • Wang J; Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, University of Science and Technology Beijing, Beijing 100083, China.
  • Sun L; School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China.
  • Fu M; Key Laboratory of Knowledge Automation for Industrial Processes of Ministry of Education, University of Science and Technology Beijing, Beijing 100083, China.
Sensors (Basel); 23(20), 2023 Oct 19.
Article in En | MEDLINE | ID: mdl-37896663
Hand Gesture Recognition (HGR) using Frequency Modulated Continuous Wave (FMCW) radar is difficult because of the inherent variability and ambiguity caused by individual habits and environmental differences. This paper proposes a deformable dual-stream fusion network based on CNN-TCN (DDF-CT) to solve this problem. First, we extract range, Doppler, and angle information from the radar signals with the Fast Fourier Transform to produce range-time (RT) maps and range-angle maps (RAMs). Next, the resulting feature maps are denoised. Subsequently, a RAM sequence (RAMS) is generated by temporally organizing the RAMs, which captures a target's range and velocity characteristics at each time point while preserving the temporal feature information. To improve the accuracy and consistency of gesture recognition, DDF-CT incorporates deformable convolution and inter-frame attention mechanisms, which enhance the extraction of spatial features and the learning of temporal relationships. The experimental results show that our method achieves an accuracy of 98.61%, and even when tested in a novel environment it still achieves 97.22%. This robust performance makes our method significantly superior to existing HGR approaches.
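The preprocessing step described in the abstract, a fast-time FFT over FMCW chirps stacked along slow time to form a range-time (RT) map, can be illustrated with a short sketch. The following is a minimal NumPy illustration, not the authors' code; the data-cube shape, the window choice, and the function name range_time_map are assumptions (an RA map would analogously apply an FFT across the receive-antenna axis).

# Minimal sketch (not the authors' code): building a range-time (RT) map
# from raw FMCW chirp data with a fast-time FFT, as described in the abstract.
import numpy as np

def range_time_map(iq_cube: np.ndarray) -> np.ndarray:
    """iq_cube: complex I/Q samples, shape (n_frames, n_chirps, n_samples).
    Returns an RT map of shape (n_range_bins, n_frames) in dB."""
    # Window each chirp to suppress FFT sidelobes.
    win = np.hanning(iq_cube.shape[-1])
    # Range FFT along fast time (the per-chirp sample axis).
    range_profiles = np.fft.fft(iq_cube * win, axis=-1)
    # Non-coherently integrate the chirps within each frame, then stack
    # frames along slow time to obtain range vs. time.
    rt = np.abs(range_profiles).mean(axis=1).T  # (n_range_bins, n_frames)
    return 20.0 * np.log10(rt + 1e-12)

# Synthetic example: 32 frames, 64 chirps/frame, 256 samples/chirp.
rng = np.random.default_rng(0)
cube = rng.standard_normal((32, 64, 256)) + 1j * rng.standard_normal((32, 64, 256))
print(range_time_map(cube).shape)  # (256, 32)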
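Likewise, the two DDF-CT building blocks named in the abstract, deformable convolution for spatial features and a TCN for temporal relationships, can be sketched with standard PyTorch components. This is a hedged approximation of one stream, not the published model: the layer sizes, the number of gesture classes, and the pooling-based fusion are assumptions, torchvision's DeformConv2d stands in for the paper's deformable layer, and the inter-frame attention mechanism is omitted.

# Minimal sketch (an assumption, not the published DDF-CT model): a deformable
# convolution extracts spatial features from each map in the sequence, and a
# small stack of dilated temporal convolutions (TCN-style) models the frames.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformSpatialBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Offsets are predicted from the input: 2 values (x, y) per 3x3 kernel tap.
        self.offset = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.deform(x, self.offset(x)))

class TCNHead(nn.Module):
    def __init__(self, ch: int, n_classes: int):
        super().__init__()
        # Two dilated 1-D conv layers over the frame axis (length-preserving).
        self.net = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=4, dilation=4), nn.ReLU(),
        )
        self.fc = nn.Linear(ch, n_classes)

    def forward(self, x):               # x: (batch, ch, n_frames)
        h = self.net(x)
        return self.fc(h.mean(dim=-1))  # pool over time, then classify

# One stream: per-frame deformable CNN -> spatial pooling -> TCN over frames.
spatial, temporal = DeformSpatialBlock(1, 32), TCNHead(32, 7)  # 7 classes assumed
seq = torch.randn(4, 12, 1, 64, 64)   # (batch, frames, C, H, W) map sequence
feats = torch.stack([spatial(f).mean(dim=(-2, -1)) for f in seq.unbind(1)], dim=-1)
print(temporal(feats).shape)          # (4, 7)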
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: En Journal: Sensors (Basel) Year: 2023 Document type: Article Country of affiliation: China Country of publication: Switzerland
