Multiple visual objects are represented differently in the human brain and convolutional neural networks.
Sci Rep. 2023 Jun 05; 13(1): 9088.
Article | En | MEDLINE | ID: mdl-37277406
Objects in the real world usually appear together with other objects. To form object representations that are independent of whether other objects are encoded concurrently, responses in the primate brain to an object pair are well approximated by the average of the responses to each constituent object shown alone. This has been found at the single-unit level in the slope of response amplitudes of macaque IT neurons to paired versus single objects, and at the population level in fMRI voxel response patterns in human ventral object-processing regions (e.g., LO). Here, we compare how the human brain and convolutional neural networks (CNNs) represent paired objects. In human LO, we show that averaging holds in both single fMRI voxels and voxel population responses. However, in the higher layers of five CNNs pretrained for object classification and varying in architecture, depth, and recurrent processing, the slope distribution across units and, consequently, averaging at the population level both deviated significantly from the brain data. Object representations in CNNs thus interact with each other when objects are shown together and differ from those formed when objects are shown individually. Such distortions could significantly limit CNNs' ability to generalize object representations formed in different contexts.
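The averaging rule summarized above can be illustrated numerically. The sketch below is not the authors' analysis; the array names (single, paired, pair_index), the simulated data, and the least-squares regression are assumptions used only to show how a per-unit slope relating paired responses to the average of single-object responses could be computed and inspected across units.

```python
# Illustrative sketch (assumed data layout, not the paper's code):
#   single[u, i]  -> response of unit u to object i shown alone
#   paired[u, p]  -> response of unit u to object pair p
#   pair_index[p] -> (i, j), the indices of the two objects forming pair p
import numpy as np

def averaging_slopes(single, paired, pair_index):
    """For each unit, regress paired responses on the mean of the two
    single-object responses; a slope near 1 is consistent with averaging."""
    predicted = np.stack(
        [(single[:, i] + single[:, j]) / 2.0 for i, j in pair_index], axis=1
    )  # shape: (units, pairs)
    slopes = []
    for u in range(single.shape[0]):
        # ordinary least-squares line fit; first coefficient is the slope
        slope, _ = np.polyfit(predicted[u], paired[u], 1)
        slopes.append(slope)
    return np.asarray(slopes)

# Hypothetical usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
single = rng.normal(size=(100, 8))             # 100 units, 8 objects
pair_index = [(0, 1), (2, 3), (4, 5), (6, 7)]  # 4 object pairs
paired = np.stack(
    [(single[:, i] + single[:, j]) / 2.0 for i, j in pair_index], axis=1
) + rng.normal(scale=0.1, size=(100, 4))       # averaging plus noise
print(averaging_slopes(single, paired, pair_index).mean())  # ~1 under averaging
```

Under this toy setup, a population whose slope distribution clusters around 1 behaves like the fMRI voxel data described in the abstract, whereas a distribution shifted away from 1 corresponds to the deviation reported for the higher CNN layers.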
Full text: 1
Collection: 01-internacional
Database: MEDLINE
Main subject: Pattern Recognition, Visual / Brain
Limit: Animals / Humans
Language: En
Journal: Sci Rep
Year: 2023
Document type: Article
Country of affiliation: United States
Country of publication: United Kingdom