BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures.
Hopkins, Michael; Fil, Jakub; Jones, Edward George; Furber, Steve.
Affiliation
  • Hopkins M; Advanced Processor Technologies Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom.
  • Fil J; Advanced Processor Technologies Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom.
  • Jones EG; Advanced Processor Technologies Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom.
  • Furber S; Advanced Processor Technologies Group, Department of Computer Science, The University of Manchester, Manchester, United Kingdom.
Front Neuroinform; 17: 1125844, 2023.
Article in English | MEDLINE | ID: mdl-37025552
We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving classification accuracy with single-pass learning approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning, following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
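To make the coincidence-counting idea in the abstract concrete, the following is a minimal Python/NumPy sketch of an SBC-style memory. It assumes binarised feature vectors and stores pairwise feature coincidences per class as dense boolean matrices; the class name SBCMemory, its methods, and the dense-matrix representation are illustrative assumptions for this sketch, not the paper's SpiNNaker implementation.

import numpy as np

class SBCMemory:
    """Toy sparse binary coincidence memory: one boolean coincidence
    matrix per class (illustrative sketch, not the published design)."""

    def __init__(self, n_features, n_classes):
        # coincidences[c, i, j] is True once features i and j have been
        # seen active together in a training example of class c.
        self.coincidences = np.zeros(
            (n_classes, n_features, n_features), dtype=bool)

    def train(self, features, label):
        # Single-pass learning: OR the outer product of the binary
        # feature vector into the coincidence matrix of the labelled class.
        f = np.asarray(features, dtype=bool)
        self.coincidences[label] |= np.outer(f, f)

    def classify(self, features):
        # Score each class by how many of the test example's feature
        # coincidences it has already stored; return the best-scoring class.
        f = np.asarray(features, dtype=bool)
        test = np.outer(f, f)
        scores = (self.coincidences & test).sum(axis=(1, 2))
        return int(np.argmax(scores))

In this sketch, several such memories, each fed by a different (hypothetical) feature detector, could be combined simply by summing their per-class scores before the argmax, loosely mirroring how multiple SBC memories are combined in a BitBrain to increase the diversity of the contributing feature coincidences.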
Full text: 1 Collection: 01-international Database: MEDLINE Language: English Journal: Front Neuroinform Year: 2023 Document type: Article Country of affiliation: United Kingdom Country of publication: Switzerland