Results 1 - 20 of 271
1.
Psychon Bull Rev ; 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103707

ABSTRACT

A previously viewed scene is often remembered as containing a larger extent of the background than was actually present, and information that was likely present just outside the boundaries of that view is often incorporated into the representation of that scene. This has been referred to as boundary extension. Methodologies used in studies on boundary extension (terminology, stimulus presentation, response measures) are described. Empirical findings regarding effects of characteristics of the stimulus (whether the stimulus depicts a scene, semantics of the scene, view angle, object size, object cropping, object orientation, object color, number of objects, depth of field, object distance, viewpoint production, scene orientation, motion, faces, emotions, modality, whether the scene is multimodal), characteristics of the display (aperture shape, aperture size, target duration, retention interval), and characteristics of the observer (allocation of attention, imagination, age, expectations and strategies, eye fixation, eye movements, monocular or binocular view, vantage point, confinement, prior exposure, expertise, arousal, pathology) on boundary extension are reviewed. Connections of boundary extension to other cognitive phenomena and processes (evolutionary adaptation, Gestalt principles, illusions, psychophysics, invariant physical principles, aesthetics, temporal boundary extension, normalization) are noted, and theories and theoretical considerations regarding boundary extension (multisource model, boundary transformation, mental imagery, 4E cognition, cognitive modularity, neurological mechanisms of scene representation) are discussed.

2.
Curr Biol ; 2024 Aug 20.
Article in English | MEDLINE | ID: mdl-39168122

ABSTRACT

Infants' thoughts are classically characterized as iconic, perceptual-like representations [1-3]. Less clear is whether preverbal infants also possess a propositional language of thought, where mental symbols are combined according to syntactic rules, very much like words in sentences [4-17]. Because it is rich, productive, and abstract, a language of thought would provide a key to explaining impressive achievements in early infancy, from logical inference to representation of false beliefs [18-31]. A propositional language, including a language of thought [5], implies thematic roles that, in a sentence, indicate the relation between noun and verb phrases, defining who acts on whom; i.e., who is the agent and who is the patient [32-39]. Agent and patient roles are abstract in that they generally apply to different situations: whether A kicks, helps, or kisses B, A is the agent and B is the patient. Do preverbal infants represent abstract agent and patient roles? We presented 7-month-olds (n = 143) with sequences of scenes where the posture or relative positioning of two individuals indicated that, across different interactions, A acted on B. Results from habituation (Experiment 1) and pupillometry paradigms (Experiments 2 and 3) demonstrated that infants showed surprise when the roles eventually switched (B acted on A). Thus, while encoding social interactions, infants fill in an abstract relational structure that marks the roles of agent and patient and that can be accessed via different event scenes and properties of the event participants (body postures or positioning). This mental process implies a combinatorial capacity that lays the foundations for productivity and compositionality in language and cognition.

3.
Open Mind (Camb) ; 8: 766-794, 2024.
Article in English | MEDLINE | ID: mdl-38957507

ABSTRACT

When a piece of fruit is in a bowl, and the bowl is on a table, we appreciate not only the individual objects and their features, but also the relations of containment and support, which abstract away from the particular objects involved. Independent representation of roles (e.g., containers vs. supporters) and "fillers" of those roles (e.g., bowls vs. cups, tables vs. chairs) is a core principle of language and higher-level reasoning. But does such role-filler independence also arise in automatic visual processing? Here, we show that it does, by exploring a surprising error that such independence can produce. In four experiments, participants saw a stream of images containing different objects arranged in force-dynamic relations (e.g., a phone contained in a basket, a marker resting on a garbage can, or a knife sitting in a cup). Participants had to respond to a single target image (e.g., a phone in a basket) within a stream of distractors presented under time constraints. Surprisingly, even though participants completed this task quickly and accurately, they false-alarmed more often to images matching the target's relational category than to those that did not, even when those images involved completely different objects. In other words, participants searching for a phone in a basket were more likely to mistakenly respond to a knife in a cup than to a marker on a garbage can. Follow-up experiments ruled out strategic responses and controlled for various confounding image features. We suggest that visual processing represents relations abstractly, in ways that separate roles from fillers.

4.
Sci Rep ; 14(1): 15549, 2024 07 05.
Article in English | MEDLINE | ID: mdl-38969745

ABSTRACT

Interacting with objects in our environment requires determining their locations, often with respect to surrounding objects (i.e., allocentrically). According to the scene grammar framework, these usually small, local objects are movable within a scene and represent the lowest level of a scene's hierarchy. How do higher hierarchical levels of scene grammar influence allocentric coding for memory-guided actions? Here, we focused on the effect of large, immovable objects (anchors) on the encoding of local object positions. In a virtual reality study, participants (n = 30) viewed one of four possible scenes (two kitchens or two bathrooms), each containing two anchors connected by a shelf on which three local objects (congruent with one of the anchors) were presented (Encoding). The scene was then re-presented (Test) with (1) the local objects missing and (2) one of the anchors either shifted (Shift) or not (No shift). Participants then saw a floating local object (target), which they grabbed and placed back on the shelf in its remembered position (Response). Eye-tracking data revealed that both local objects and anchors were fixated, with a preference for local objects. Additionally, anchors guided allocentric coding of local objects, despite being task-irrelevant. Overall, anchors implicitly influence the spatial coding of local object locations for memory-guided actions within naturalistic (virtual) environments.


Subject(s)
Semantics, Virtual Reality, Humans, Female, Male, Adult, Young Adult, Space Perception/physiology, Memory/physiology
5.
bioRxiv ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39005327

ABSTRACT

Human navigation relies heavily on visual information. Although many previous studies have investigated how navigational information is inferred from visual features of scenes, little is understood about the impact of navigational experience on visual scene representation. In this study, we examined how navigational experience influences both behavioral and neural responses to a visual scene. During training, participants navigated virtual reality (VR) environments in which we manipulated navigational experience while holding the visual properties of the scenes constant. Half of the environments allowed free navigation (navigable), while the other half featured an 'invisible wall' that prevented participants from moving forward even though the scene was visually navigable (non-navigable). During testing, participants viewed scene images from the VR environments while completing either a behavioral perceptual identification task (Experiment 1) or an fMRI scan (Experiment 2). Behaviorally, we found that participants judged a scene pair to be significantly more visually different if their prior navigational experience with the scenes differed, even after accounting for visual similarities between the scene pairs. Neurally, multi-voxel patterns in the parahippocampal place area (PPA) distinguished visual scenes based on prior navigational experience alone. These results suggest that the human visual scene cortex represents navigability information obtained through prior experience, beyond what is computable from the visual properties of the scene. Taken together, these results suggest that scene representation is modulated by prior navigational experience to help us construct a functionally meaningful visual environment.

6.
Behav Brain Res ; 471: 115110, 2024 08 05.
Article in English | MEDLINE | ID: mdl-38871131

ABSTRACT

Visual features of separable dimensions conjoin to represent an integrated entity. We investigated how visual features bind to form a complex visual scene, focusing on features important for visually guided navigation: direction and distance. Previously, separate studies have shown that the directions and distances of navigable paths are coded in the occipital place area (OPA). Using functional magnetic resonance imaging (fMRI), we tested how these separate features are concurrently represented in the OPA. Participants saw eight types of scenes: four with one path and four with two paths. In single-path scenes, the path direction was either to the left or to the right. In double-path scenes, both directions were present. A glass wall was placed in some paths to restrict navigational distance. To test how the OPA represents path directions and distances, we took three approaches. First, the independent-features approach examined whether the OPA codes each direction and distance. Second, the integrated-features approach explored how directions and distances are integrated into path units, as compared to pooled features, using double-path scenes. Finally, the integrated-paths approach asked how separate paths are combined into a scene. Using multi-voxel pattern similarity analysis, we found that the OPA's representations of single-path scenes were similar to those of other single-path scenes sharing either the same direction or the same distance. Representations of double-path scenes were similar to the combination of their two constituent single paths, as combined units of direction and distance rather than as a pooled representation of all features. These results show that the OPA combines the two features to form path units, which are then used to build multiple-path scenes. Altogether, these results suggest that visually guided navigation may be supported by the OPA, which automatically and efficiently combines multiple navigation-relevant features and represents a navigation file.


Subject(s)
Brain Mapping, Magnetic Resonance Imaging, Humans, Male, Female, Adult, Young Adult, Space Perception/physiology, Visual Perception/physiology, Occipital Lobe/physiology, Occipital Lobe/diagnostic imaging, Photic Stimulation/methods, Pattern Recognition, Visual/physiology, Spatial Navigation/physiology
7.
J Neurosci ; 44(27)2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38777600

ABSTRACT

Scene memory is prone to systematic distortions potentially arising from experience with the external world. Boundary transformation, a well-known memory distortion effect along the near-far axis of three-dimensional space, reflects observers' erroneous recall of a scene's viewing distance. Researchers have argued that normalization toward a prototypical viewpoint with a high-probability viewing distance underlies this phenomenon. Here, we hypothesized that a prototypical viewpoint also exists in the vertical angle of view (AOV) dimension and could cause memory distortion along a scene's vertical axis. Human subjects of both sexes were recruited to test this hypothesis in two behavioral experiments, which revealed a systematic memory distortion in the vertical AOV in both forced-choice (n = 79) and free-adjustment (n = 30) tasks. Regression analysis implied that the asymmetry of complexity information along a scene's vertical axis and independent subjective AOV ratings from a large set of online participants (n = 1,208) could jointly predict AOV biases. Furthermore, in a functional magnetic resonance imaging experiment (n = 24), we demonstrated the involvement of areas in the ventral visual pathway (V3/V4, PPA, and OPA) in AOV bias judgments. Additionally, in a magnetoencephalography experiment (n = 20), we could significantly decode subjects' AOV bias judgments ∼140 ms after scene onset, as well as low-level visual complexity information within a similar time window. These findings suggest that AOV bias is driven by a normalization process and is associated with neural activity in the early stages of scene processing.


Subject(s)
Magnetic Resonance Imaging, Humans, Male, Female, Adult, Young Adult, Photic Stimulation/methods, Magnetoencephalography, Memory/physiology, Visual Perception/physiology, Brain Mapping, Space Perception/physiology, Visual Pathways/physiology, Visual Pathways/diagnostic imaging
8.
Psychon Bull Rev ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806789

ABSTRACT

When processing visual scenes, we tend to prioritize information in the foreground, often at the expense of background information. This foreground bias is supported by data demonstrating more fixations to the foreground and faster, more accurate detection of targets embedded in the foreground. However, it is also known that semantic consistency is associated with more efficient search. Here, we examined whether semantic context interacts with foreground prioritization, either amplifying or mitigating the effect of target semantic consistency. For each scene, targets were placed in the foreground or background and were either semantically consistent or inconsistent with the context of the immediately surrounding depth region. Results indicated faster response times (RTs) for foreground and semantically consistent targets, replicating established effects. More importantly, we found that the magnitude of the semantic consistency effect was significantly smaller in the foreground than in the background region. To examine the robustness of this effect, in Experiment 2 we strengthened the reliability of semantics by increasing the proportion of targets consistent with the scene region to 80%. The overall pattern of results replicated the uneven effect of semantic consistency across depth observed in Experiment 1. This suggests that foreground bias modulates the effects of semantics, such that performance is less impacted by semantic consistency in near space.

9.
Conscious Cogn ; 122: 103695, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38761426

ABSTRACT

People's memory for scenes has consequences, including for eyewitness testimony. Negative scenes may lead to a particular memory error in which narrowed scene boundaries lead people to recall being closer to a scene than they actually were. But boundary restriction, including attenuation of the opposite phenomenon, boundary extension, has been difficult to replicate, perhaps because heightened arousal accompanying negative scenes, rather than negative valence itself, drives the effect. Indeed, in Green et al. (2019), arousal alone, conditioned to a particular neutral image category, increased boundary restriction for images in that category. But systematic differences between image categories may have driven these results, irrespective of arousal. Here, we clarify whether boundary restriction stems from the external arousal stimulus or from image category differences. Presenting one image category (everyday objects), half accompanied by arousal (Experiment 1), and presenting both neutral image categories (everyday objects, nature) without arousal (Experiment 2), resulted in no difference in boundary judgement errors. These findings suggest that image features, including inherent valence, arousal, and complexity, are not sufficient to induce boundary restriction or reduce boundary extension for neutral images, perhaps explaining why boundary restriction is inconsistently demonstrated in the lab.


Subject(s)
Arousal, Pattern Recognition, Visual, Humans, Arousal/physiology, Adult, Female, Young Adult, Male, Pattern Recognition, Visual/physiology, Mental Recall/physiology
10.
Cogn Res Princ Implic ; 9(1): 32, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38767722

ABSTRACT

Drivers must respond promptly to a wide range of possible road hazards, from trucks veering into their lane to pedestrians stepping onto the road. While drivers' vision is tested at the point of licensure, visual function can degrade, and drivers may not realize how these changes impact their ability to notice and respond to events in the world in a timely fashion. To safely examine the potential consequences of visual degradation on hazard detection, we performed two experiments examining the impact of simulated optical blur on participants' viewing duration thresholds in a hazard detection task, as a proxy for eyes-on-road duration behind the wheel. Across both experiments, with older and younger participants, we found an overall increase in viewing duration thresholds under blurred conditions, with younger and older adults similarly impacted by blur. Critically, in both groups, the increment in thresholds produced by blur was larger for non-vehicular road hazards (pedestrians, cyclists, and animals) than for vehicular road hazards (cars, trucks, and buses). This work suggests that blur poses a particular problem for drivers detecting non-vehicular road users, a population considerably more vulnerable in a collision than vehicular road users. These results also highlight the importance of taking the type of hazard into account when considering the impacts of blur on road hazard detection.


Subject(s)
Automobile Driving, Humans, Adult, Young Adult, Male, Female, Aged, Middle Aged, Visual Perception/physiology, Accidents, Traffic, Motor Vehicles, Bicycling/physiology, Adolescent
11.
Open Mind (Camb) ; 8: 333-365, 2024.
Article in English | MEDLINE | ID: mdl-38571530

ABSTRACT

Theories of auditory and visual scene analysis suggest that the perception of scenes relies on the identification and segregation of the objects within them, resembling a detail-oriented processing style. However, a more global process may also occur while analyzing scenes, as has been evidenced in the visual domain. To our knowledge, a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on eight global properties (e.g., open vs. enclosed), and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and the average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed that each global property was predicted by at least one acoustic variable (R² = 0.33-0.87). These findings were extended using deep neural network models, in which we examined correlations between human ratings of global properties and deep embeddings from two computational models: an object-based model and a scene-based model. The results support the idea that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting that representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed for the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.

12.
Trends Cogn Sci ; 28(5): 390-391, 2024 May.
Article in English | MEDLINE | ID: mdl-38632008
13.
Psychol Sci ; 35(6): 681-693, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38683657

ABSTRACT

As a powerful social signal, a body, face, or gaze facing toward oneself holds an individual's attention. We asked whether, going beyond an egocentric stance, facingness between others has a similar effect, and why. In a preferential looking-time paradigm, human adults showed a spontaneous preference to look at two bodies facing toward (vs. away from) each other (Experiment 1a, N = 24). Moreover, facing dyads were rated higher on social semantic dimensions, showing that facingness adds social value to stimuli (Experiment 1b, N = 138). The same visual preference was found in juvenile macaque monkeys (Experiment 2, N = 21). Finally, on the timescale of human development, this preference emerged by 5 years of age, although young infants, by 7 months of age, already discriminate visual scenes on the basis of body positioning (Experiment 3, N = 120). We discuss how the preference for facing dyads, shared by human adults, young children, and macaques, can signal a new milestone in social cognition development, supporting processing and learning from third-party social interactions.


Subject(s)
Visual Perception, Humans, Animals, Male, Female, Adult, Infant, Visual Perception/physiology, Young Adult, Social Perception, Attention/physiology, Child, Preschool, Social Cognition, Space Perception/physiology, Social Interaction
14.
Elife ; 13, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38506719

ABSTRACT

Current models of scene processing in the human brain include three scene-selective areas: the parahippocampal place area (or the temporal place areas), the retrosplenial cortex (or the medial place area), and the transverse occipital sulcus (or the occipital place area). Here, we challenged this model by showing that at least one other scene-selective site can also be detected within the human posterior intraparietal gyrus. Despite the smaller size of this site compared to the other scene-selective areas, the posterior intraparietal gyrus scene-selective (PIGS) site was detected consistently in a large pool of subjects (n = 59; 33 females). The reproducibility of this finding was tested based on multiple criteria, including comparing results across sessions, scanners (3T and 7T), and stimulus sets. Furthermore, we found that this site (but not the other three scene-selective areas) is significantly sensitive to ego-motion in scenes, distinguishing the role of PIGS in scene perception relative to the other scene-selective areas. These results highlight the importance of including finer-scale scene-selective sites in models of scene processing, a crucial step toward a more comprehensive understanding of how scenes are encoded under dynamic conditions.


Subject(s)
Brain, Cerebral Cortex, Female, Humans, Reproducibility of Results, Environment, Ego
15.
Mem Cognit ; 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530622

ABSTRACT

Boundary contraction and extension are two types of scene transformation that occur in memory. In extension, viewers extrapolate information beyond the edges of the image, whereas in contraction, viewers forget information near the edges. Recent work suggests that image composition influences the direction and magnitude of boundary transformation. We hypothesize that selective attention at encoding is an important driver of boundary transformation effects, with selective attention to specific objects at encoding leading to boundary contraction. In this study, one group of participants (N = 36) memorized 15 scenes while searching for targets, whereas a separate group (N = 36) simply memorized the scenes. Both groups then drew the scenes from memory with as much object and spatial detail as they could remember. We asked online workers to rate the boundary transformations in the drawings, as well as how many objects they contained and the precision of remembered object size and location. We found that drawings from the search condition showed significantly greater boundary contraction than drawings of the same scenes from the memorize condition. Search drawings were also significantly more likely to contain target objects, and the likelihood of recalling other objects in the scene decreased as a function of their distance from the target. These findings suggest that selective attention to a specific object due to a search task at encoding leads to significant boundary contraction.

16.
bioRxiv ; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38352427

ABSTRACT

Time has an immense influence on our memory. Truncated encoding leads to memory for only the 'gist' of an image, and long delays before recall result in generalized memories with few details. Here, we used crowdsourced scoring of hundreds of drawings made from memory after variable encoding durations (Experiment 1) and retention intervals (Experiment 2) to quantify which features of memory content change across time. We found that whereas some features of memory are highly dependent on time, such as the proportion of objects recalled from a scene and false recall of objects not in the original image, spatial memory was highly accurate and relatively independent of time. We also found that we could predict which objects were recalled across time based on the location, meaning, and saliency of the objects. The differential impact of time on object and spatial memory supports a separation of these memory systems.

17.
Hum Brain Mapp ; 45(3): e26628, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38376190

ABSTRACT

The recognition and perception of places has been linked to a network of scene-selective regions in the human brain. While previous studies have focussed on functional connectivity between the scene-selective regions themselves, less is known about their connectivity with other cortical and subcortical regions of the brain. Here, we determine the functional and structural connectivity profile of the scene network. We used fMRI to examine functional connectivity between scene regions and across the whole brain during rest and movie-watching. Connectivity within the scene network revealed a bias between posterior and anterior scene regions, implicated in perceptual and mnemonic aspects of scene perception, respectively. Differences between posterior and anterior scene regions were also evident in their connectivity with cortical and subcortical regions across the brain. For example, the Occipital Place Area (OPA) and posterior Parahippocampal Place Area (PPA) showed greater connectivity with visual and dorsal attention networks, while anterior PPA and the Retrosplenial Complex showed preferential connectivity with default mode and frontoparietal control networks and the hippocampus. We further measured the structural connectivity of the scene network using diffusion tractography. This indicated both similarities and differences with the functional connectivity, highlighting biases between posterior and anterior regions, but also between ventral and dorsal scene regions. Finally, we quantified the structural connectivity between the scene network and major white matter tracts throughout the brain. These findings provide a map of the functional and structural connectivity of scene-selective regions to each other and to the rest of the brain.


Subject(s)
Brain Mapping, Neocortex, Humans, Magnetic Resonance Imaging, Diffusion Tensor Imaging, Memory
18.
Eur J Neurosci ; 59(9): 2353-2372, 2024 May.
Article in English | MEDLINE | ID: mdl-38403361

ABSTRACT

Real-world (rw-) statistical regularities, or expectations about the visual world learned over a lifetime, have been found to be associated with the efficiency of scene perception. For example, good (i.e., highly representative) exemplars of basic scene categories, one example of an rw-statistical regularity, are detected more readily than bad exemplars of the category. Similarly, good exemplars achieve higher multivariate pattern analysis (MVPA) classification accuracy than bad exemplars in scene-responsive regions of interest, particularly the parahippocampal place area (PPA). However, it is unclear whether the observed good-exemplar advantages depend on, or are even confounded by, selective attention. Here, we ask whether the neural advantage of good scene exemplars requires full attention. We used a dual-task paradigm to manipulate attention and exemplar representativeness while recording neural responses with functional magnetic resonance imaging (fMRI). Both univariate analysis and MVPA were used to examine the effect of representativeness. In the attend-to-scenes condition, our results replicated an earlier study showing that good exemplars evoke less activity but a clearer category representation than bad exemplars. Importantly, similar advantages of good exemplars were also observed when participants were distracted by a serial visual search task demanding a high attention load. In addition, cross-decoding between attended and distracted representations revealed that attention produced a quantitative (increased activation) rather than qualitative (altered activity patterns) improvement of the category representation, particularly for good exemplars. We therefore conclude that the effect of category representativeness on neural representations does not require full attention.


Subject(s)
Attention, Magnetic Resonance Imaging, Humans, Attention/physiology, Male, Female, Adult, Magnetic Resonance Imaging/methods, Young Adult, Pattern Recognition, Visual/physiology, Visual Perception/physiology, Brain Mapping/methods, Brain/physiology, Brain/diagnostic imaging
19.
Res Sq ; 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38260553

ABSTRACT

Current models of scene processing in the human brain include three scene-selective areas: the Parahippocampal Place Area (or the temporal place areas; PPA/TPA), the retrosplenial cortex (or the medial place area; RSC/MPA), and the transverse occipital sulcus (or the occipital place area; TOS/OPA). Here, we challenged this model by showing that at least one other scene-selective site can also be detected within the human posterior intraparietal gyrus. Despite the smaller size of this site compared to the other scene-selective areas, the posterior intraparietal gyrus scene-selective (PIGS) site was detected consistently in a large pool of subjects (n = 59; 33 females). The reproducibility of this finding was tested based on multiple criteria, including comparing results across sessions, scanners (3T and 7T), and stimulus sets. Furthermore, we found that this site (but not the other three scene-selective areas) is significantly sensitive to ego-motion in scenes, distinguishing the role of PIGS in scene perception relative to the other scene-selective areas. These results highlight the importance of including finer-scale scene-selective sites in models of scene processing, a crucial step toward a more comprehensive understanding of how scenes are encoded under dynamic conditions.

20.
Cognition ; 245: 105723, 2024 04.
Article in English | MEDLINE | ID: mdl-38262271

ABSTRACT

According to predictive processing theories, vision is facilitated by predictions derived from our internal models of what the world should look like. However, the contents of these models, and how they vary across people, remain unclear. Here, we use drawing as a behavioral readout of the contents of individual participants' internal models. Participants were first asked to draw typical versions of scene categories, as descriptors of their internal models. These drawings were converted into standardized 3D renders, which we used as stimuli in subsequent scene categorization experiments. Across two experiments, participants' scene categorization was more accurate for renders tailored to their own drawings than for renders based on others' drawings or on copies of scene photographs, suggesting that scene perception is determined by a match with idiosyncratic internal models. Using a deep neural network to computationally evaluate similarities between scene renders, we further demonstrate that graded similarity to the render based on a participant's own typical drawings (and thus to their internal model) predicts categorization performance across a range of candidate scenes. Together, our results showcase the potential of a new method for understanding individual differences, starting from participants' personal expectations about the structure of real-world scenes.


Subject(s)
Individuality, Pattern Recognition, Visual, Humans, Neural Networks, Computer, Visual Perception, Photic Stimulation/methods