Enhancing SNN-based spatio-temporal learning: A benchmark dataset and Cross-Modality Attention model.
Zhou, Shibo; Yang, Bo; Yuan, Mengwen; Jiang, Runhao; Yan, Rui; Pan, Gang; Tang, Huajin.
Affiliation
  • Zhou S; Research Center for Data Hub and Security, Zhejiang Lab, Hangzhou, China. Electronic address: shibo.zhou@zhejianglab.com.
  • Yang B; College of Computer Science and Technology, Zhejiang University, Hangzhou, China. Electronic address: yangboak@icloud.com.
  • Yuan M; Research Center for High Efficiency Computing System, Zhejiang Lab, Hangzhou, China. Electronic address: yuanmw@zhejianglab.com.
  • Jiang R; College of Computer Science and Technology, Zhejiang University, Hangzhou, China. Electronic address: rhjiang@zju.edu.cn.
  • Yan R; College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China. Electronic address: ryan@zjut.edu.cn.
  • Pan G; College of Computer Science and Technology, Zhejiang University, Hangzhou, China; The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China. Electronic address: gpan@zju.edu.cn.
  • Tang H; College of Computer Science and Technology, Zhejiang University, Hangzhou, China; The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China. Electronic address: htang@zju.edu.cn.
Neural Netw; 180: 106677, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39260008
ABSTRACT
Spiking Neural Networks (SNNs), renowned for their low power consumption, brain-inspired architecture, and spatio-temporal representation capabilities, have garnered considerable attention in recent years. As with Artificial Neural Networks (ANNs), high-quality benchmark datasets are of great importance to the advancement of SNNs. However, our analysis indicates that many prevalent neuromorphic datasets lack strong temporal correlation, preventing SNNs from fully exploiting their spatio-temporal representation capabilities. Meanwhile, the integration of event and frame modalities offers more comprehensive visual spatio-temporal information, yet SNN-based cross-modality fusion remains underexplored. In this work, we present a neuromorphic dataset called DVS-SLR that better exploits the inherent spatio-temporal properties of SNNs. Compared to existing datasets, it offers higher temporal correlation, larger scale, and more varied scenarios. In addition, our neuromorphic dataset contains corresponding frame data, which can be used for developing SNN-based fusion methods. Building on the dataset's dual-modal nature, we propose a Cross-Modality Attention (CMA) based fusion method. The CMA model exploits the unique advantages of each modality, allowing SNNs to learn both temporal and spatial attention scores from the spatio-temporal features of the event and frame modalities and then allocating these scores across modalities to enhance their synergy. Experimental results demonstrate that our method not only improves recognition accuracy but also ensures robustness across diverse scenarios.
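To make the cross-modality attention idea concrete, the sketch below shows one plausible reading of the abstract: temporal and spatial attention scores are computed from each modality's spatio-temporal features and used to re-weight the other modality before fusion. This is not the authors' implementation; the module names, tensor shapes, score functions, and the use of plain PyTorch operations (rather than spiking neurons) are all illustrative assumptions.

```python
# Hedged sketch of cross-modality attention fusion between event-stream and
# frame features. All names and shapes are hypothetical; the paper's actual
# CMA model operates on SNN features, which are replaced here by dense tensors.
import torch
import torch.nn as nn


class CrossModalityAttention(nn.Module):
    """Learn temporal and spatial attention scores from one modality and use
    them to modulate the other modality before summing the two streams."""

    def __init__(self, channels: int):
        super().__init__()
        # Temporal attention: pool spatial dims, score each time step.
        self.temporal_fc = nn.Linear(channels, 1)
        # Spatial attention: 1x1 convolution producing a per-pixel score map.
        self.spatial_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def _temporal_scores(self, x):
        # x: (B, T, C, H, W) -> (B, T, 1, 1, 1) attention over time steps
        pooled = x.mean(dim=(-2, -1))                      # (B, T, C)
        scores = torch.sigmoid(self.temporal_fc(pooled))   # (B, T, 1)
        return scores[..., None, None]                     # broadcastable

    def _spatial_scores(self, x):
        # x: (B, T, C, H, W) -> (B, 1, 1, H, W) attention over spatial locations
        b, t, c, h, w = x.shape
        scores = torch.sigmoid(self.spatial_conv(x.reshape(b * t, c, h, w)))
        return scores.reshape(b, t, 1, h, w).mean(dim=1, keepdim=True)

    def forward(self, event_feat, frame_feat):
        # Cross attention: each modality's scores re-weight the other modality.
        fused_event = event_feat * self._temporal_scores(frame_feat) \
                                 * self._spatial_scores(frame_feat)
        fused_frame = frame_feat * self._temporal_scores(event_feat) \
                                 * self._spatial_scores(event_feat)
        return fused_event + fused_frame


if __name__ == "__main__":
    # Toy shapes: batch 2, 8 time steps, 16 channels, 32x32 spatial grid.
    event_feat = torch.rand(2, 8, 16, 32, 32)
    frame_feat = torch.rand(2, 8, 16, 32, 32)
    fused = CrossModalityAttention(channels=16)(event_feat, frame_feat)
    print(fused.shape)  # torch.Size([2, 8, 16, 32, 32])
```

In this reading, the "allocation of scores across modalities" is realized by letting the frame-derived scores gate the event features and vice versa; the published model may combine the scores differently.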
Keywords

Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Neural Netw Journal subject: NEUROLOGY Year: 2024 Document type: Article Country of publication: United States