Evolving artificial neural networks with feedback.
Neural Netw; 123: 153-162, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31874331
Neural networks in the brain are dominated by feedback connections, which sometimes make up more than 60% of all connections and most often have small synaptic weights. In contrast, little is known about how to introduce feedback into artificial neural networks. Here we use transfer entropy in the feed-forward paths of deep networks to identify feedback candidates between the convolutional layers, and we determine their final synaptic weights using genetic programming. This adds about 70% more connections to these layers, all with very small weights. Nonetheless, performance improves substantially on different standard benchmark tasks and in different networks. To verify that this effect is generic, we use 36,000 configurations of small (2-10 hidden layer) conventional neural networks in a non-linear classification task and select the best-performing feed-forward nets. We then show that feedback reduces total entropy in these networks, which always leads to a performance increase. This method may thus supplement standard techniques (e.g., error backpropagation), adding a new quality to network learning.
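The record itself contains no code. As a rough illustration of the screening step described in the abstract, below is a minimal sketch of a histogram-based transfer entropy estimator that could be used to rank candidate feedback connections between the activation traces of two layers. The function name, the binning scheme, and the bin count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Estimate TE(X -> Y) for two 1-D activation traces with a
    simple histogram estimator:
        TE = sum p(y+, y, x) * log2[ p(y+ | y, x) / p(y+ | y) ]
    """
    # Discretize the continuous activations into equal-width bins.
    xd = np.digitize(x, np.histogram_bin_edges(x, bins=bins))
    yd = np.digitize(y, np.histogram_bin_edges(y, bins=bins))

    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]

    def prob(*columns):
        # Empirical joint probability table over the given columns.
        counts = {}
        for key in zip(*columns):
            counts[key] = counts.get(key, 0) + 1
        n = len(columns[0])
        return {k: v / n for k, v in counts.items()}

    p_nyx = prob(y_next, y_now, x_now)  # p(y+, y, x)
    p_yx = prob(y_now, x_now)           # p(y, x)
    p_ny = prob(y_next, y_now)          # p(y+, y)
    p_y = prob(y_now)                   # p(y)

    te = 0.0
    for (yn, yc, xc), p in p_nyx.items():
        # p(y+|y,x) / p(y+|y) = p(y+,y,x) * p(y) / (p(y,x) * p(y+,y))
        te += p * np.log2(p * p_y[(yc,)] / (p_yx[(yc, xc)] * p_ny[(yn, yc)]))
    return te

# Hypothetical usage: a larger TE(deep -> shallow) than the reverse
# direction would flag a candidate feedback connection.
rng = np.random.default_rng(0)
deep = rng.normal(size=2000)
shallow = 0.5 * np.roll(deep, 1) + rng.normal(size=2000)  # shallow lags deep
print(transfer_entropy(deep, shallow), transfer_entropy(shallow, deep))
```

In this toy example the first value should exceed the second, since the "shallow" trace is driven by the lagged "deep" trace; in the paper's setting the traces would instead be activations recorded from convolutional layers.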
Collection: 01-international
Database: MEDLINE
Main subject: Feedback / Deep Learning
Study type: Guideline / Prognostic_studies
Language: En
Journal: Neural Netw
Journal subject: Neurology
Year: 2020
Document type: Article
Country of affiliation: Germany
Country of publication: United States