1.
Neural Netw 168: 180-193, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37757726

ABSTRACT

Deep Reinforcement Learning (DRL) is a powerful tool for a wide range of control automation problems. The performance of DRL depends heavily on how accurately the values of states from the environment are estimated. However, the Value Estimation Network (VEN) in DRL is easily affected by catastrophic interference arising from the environment and from training. In this paper, we propose a Dynamic Sparse Coding-based (DSC) VEN model that obtains precise sparse representations for accurate value prediction and sparse parameters for efficient training, and that is applicable not only to Q-learning-structured discrete-action DRL but also to actor-critic-structured continuous-action DRL. In detail, to alleviate interference in the VEN, we employ DSC to learn sparse representations for accurate value estimation, using dynamic gradients rather than the conventional ℓ1 norm, which provides gradients of the same magnitude everywhere. To avoid the influence of redundant parameters, we employ DSC to prune weights with dynamic thresholds, which is more efficient than static thresholds such as those induced by the ℓ1 norm. Experiments demonstrate that the proposed algorithms with dynamic sparse coding achieve higher control performance than existing benchmark DRL algorithms in both discrete-action and continuous-action environments, e.g., over a 25% improvement in Puddle World and about a 10% improvement in Hopper. Moreover, the proposed algorithm converges efficiently with fewer episodes across different environments.
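
The sketch below is not the authors' code; it is a minimal illustration, under assumed details, of the two ideas the abstract describes: a value estimation network whose hidden representation is sparsified by soft-thresholding with a learnable ("dynamic") threshold rather than a fixed ℓ1 penalty, and weight pruning driven by a per-layer threshold recomputed at call time rather than a static cutoff. All layer sizes, class names, and hyperparameters are illustrative.

# Illustrative PyTorch sketch; names and sizes are assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class DynamicSoftThreshold(nn.Module):
    """Soft-thresholding with a learnable per-unit threshold, so the shrinkage
    applied to each activation adapts during training (unlike a fixed l1
    penalty, whose subgradient has the same magnitude everywhere)."""
    def __init__(self, dim):
        super().__init__()
        self.log_theta = nn.Parameter(torch.full((dim,), -3.0))  # keeps threshold > 0

    def forward(self, x):
        theta = self.log_theta.exp()
        return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)

class SparseValueNet(nn.Module):
    """Value estimation network with a sparse hidden code."""
    def __init__(self, state_dim, hidden_dim=256):
        super().__init__()
        self.encoder = nn.Linear(state_dim, hidden_dim)
        self.sparsify = DynamicSoftThreshold(hidden_dim)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, state):
        code = self.sparsify(torch.relu(self.encoder(state)))
        return self.head(code)

@torch.no_grad()
def prune_with_dynamic_threshold(layer, sparsity=0.5):
    """Zero out the smallest-magnitude weights of a linear layer using a
    quantile threshold recomputed from the current weights, rather than a
    static global cutoff."""
    w = layer.weight
    threshold = w.abs().flatten().quantile(sparsity)
    w.mul_((w.abs() >= threshold).float())

if __name__ == "__main__":
    net = SparseValueNet(state_dim=8)
    values = net(torch.randn(4, 8))                     # batch of 4 states
    prune_with_dynamic_threshold(net.encoder, 0.5)      # periodic pruning step
    print(values.shape)                                  # torch.Size([4, 1])

In an actual DRL agent, SparseValueNet would stand in for the Q-network or critic, with pruning applied periodically during training; the paper's precise update rules and scheduling are not reproduced here.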


Subject(s)
Learning; Reinforcement, Psychology; Algorithms; Automation; Benchmarking