Wednesday, October 6, 8:30 - 10:30 am PDT

Symposium 2 – Semantic knowledge representations in the anterior temporal lobes and beyond

Organizers: Andrew Persichetti1 & Alex Martin1; 1Laboratory of Brain and Cognition, National Institute of Mental Health, NIH
Presenters: Rebecca Jackson, Andrew S. Persichetti, Elizabeth Jefferies, Srikanth Damera, Stefano Anzellotti, Galit Yovel

A better understanding of how the brain represents diverse knowledge about the world (e.g., people, places, things, and relations between them) is critical to the study of human thought and language. There are two prevalent competing theories about how the brain represents semantic knowledge. One theory proposes that a single region in the anterior temporal lobes (ATL) integrates information from diverse sensory and category-selective systems to represent all semantic knowledge (i.e., a domain-general semantic hub). The other theory proposes that knowledge about categories is represented in segregated systems that represent category-specific knowledge (i.e., domain-specific systems). According to the latter view, the ATL is not a convergence zone for all semantic knowledge, but rather a collection of functionally diverse regions. In this symposium, we hope to spur a fun and informative discussion on this important question by presenting data from multiple theoretical perspectives and methodologies, including fMRI, EEG, and computational modeling.

Talks

Why do we need a multimodal semantic hub in ventral anterior temporal lobes?

Rebecca Jackson1, Timothy T. Rogers2, Matthew A. Lambon Ralph1; 1MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, UK, 2Department of Psychology, University of Wisconsin–Madison, Madison, WI, USA

There is convergent evidence for the role of the ATL in multimodal, cross-category semantics from semantic dementia, PET, MEG, optimised fMRI techniques, TMS and intracortical electrodes. The temporal lobe forms a convergence zone: posterior regions show differential involvement by sensory modality, with a graded transition to multimodal representation of concepts in its ventral aspects. Why is the temporal lobe organised in this way? By systematically varying the structure of a computational model and assessing the functional consequences, we identified the architectural properties that best promote the core representation and control functions of the semantic system; these include a deep multimodal hub. Without a single multimodal hub, semantic systems do not acquire internal representations reflecting the full conceptual representational structure across modalities and learning episodes, with or without control requirements. The structural features that promote the functions of the semantic system mirror the organisation of the temporal lobe, explaining its structure.
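To make the modelling approach concrete, here is a minimal, hypothetical sketch of a hub-and-spoke architecture in PyTorch: two modality-specific spokes converge on a shared hub layer, and the network must reconstruct a concept's full cross-modal pattern from input in either modality alone. All layer names, sizes, and training details are invented for illustration; the talk describes systematically varying such architectural choices rather than committing to this particular one.

```python
# Illustrative sketch (not the authors' model): a tiny hub-and-spoke
# network. Two modality-specific spokes feed a shared hub layer; the
# network is trained to reproduce a concept's full cross-modal pattern
# from input presented in either single modality.
import torch
import torch.nn as nn

N_VISUAL, N_VERBAL, N_HUB = 20, 20, 16  # layer sizes are arbitrary

class HubAndSpoke(nn.Module):
    def __init__(self):
        super().__init__()
        self.visual_spoke = nn.Linear(N_VISUAL, N_HUB)
        self.verbal_spoke = nn.Linear(N_VERBAL, N_HUB)
        self.hub = nn.Linear(N_HUB, N_HUB)           # shared multimodal hub
        self.to_visual = nn.Linear(N_HUB, N_VISUAL)  # hub -> modality outputs
        self.to_verbal = nn.Linear(N_HUB, N_VERBAL)

    def forward(self, visual, verbal):
        # Inputs from both modalities converge on one hub representation.
        spokes = torch.tanh(self.visual_spoke(visual) + self.verbal_spoke(verbal))
        h = torch.tanh(self.hub(spokes))
        return self.to_visual(h), self.to_verbal(h)

model = HubAndSpoke()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic "concepts": each has a visual and a verbal pattern.
concepts_vis = torch.randn(50, N_VISUAL)
concepts_verb = torch.randn(50, N_VERBAL)

for step in range(2000):
    # Present one modality at a time (the other zeroed), but require the
    # network to reconstruct the concept's full cross-modal pattern.
    if step % 2 == 0:
        out_v, out_w = model(concepts_vis, torch.zeros_like(concepts_verb))
    else:
        out_v, out_w = model(torch.zeros_like(concepts_vis), concepts_verb)
    loss = loss_fn(out_v, concepts_vis) + loss_fn(out_w, concepts_verb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```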

A data-driven functional mapping of the anterior temporal lobes

Andrew S. Persichetti1, Joseph M. Denning1, Stephen J. Gotts1, Alex Martin1; 1Section on Cognitive Neuropsychology, Laboratory of Brain & Cognition, NIMH/NIH, Bethesda, Maryland, USA

The functional role of the anterior temporal lobes (ATL) is a contentious issue. While different regions within the ATL likely subserve unique cognitive functions, most studies refer vaguely to particular functional regions as “the ATL,” leaving the mapping of function to anatomy unclear. Using a rigorous resting-state fMRI parcellation approach, we found that the ATL comprises 34 distinct functional parcels organized into a three-level functional hierarchy. In addition, the anterior region of the fusiform gyrus, often cited as the location of the semantic hub, was found to be part of a domain-specific network associated with social processing, rather than a domain-general hub. These findings are inconsistent with a brain region that subserves a singular cognitive function, such as a domain-general semantic hub, and highlight the importance of adopting more precise methods and language when studying functional divisions within the ATL.
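For readers unfamiliar with connectivity-based parcellation, the sketch below shows the general logic on synthetic data: each voxel in a region gets a whole-brain connectivity fingerprint, and hierarchical clustering of those fingerprints yields nested parcels. The voxel counts, the 34-cluster cut, and the Ward linkage are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch (not the authors' pipeline): parcellating a region
# by hierarchically clustering voxels on their whole-brain resting-state
# connectivity fingerprints. All data here are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_timepoints, n_atl_voxels, n_targets = 300, 500, 1000
atl_ts = rng.standard_normal((n_timepoints, n_atl_voxels))
brain_ts = rng.standard_normal((n_timepoints, n_targets))

def zscore(x):
    return (x - x.mean(0)) / x.std(0)

# Each ATL voxel's "fingerprint": its correlation with every target.
fingerprints = zscore(atl_ts).T @ zscore(brain_ts) / n_timepoints

# Agglomerative clustering of fingerprints; cutting the tree at
# different heights yields the kind of nested hierarchy reported.
tree = linkage(fingerprints, method="ward")
parcels = fcluster(tree, t=34, criterion="maxclust")  # e.g., 34 parcels
print(np.bincount(parcels)[1:])  # voxels per parcel
```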

Context-Free and Context-Dependent Conceptual Representation in the Temporal Lobes

Elizabeth Jefferies1, Zhiyao Gao1, Li Zheng2, André Gouws1, Katya Krieger-Redwood1, Xiuyu Wang1, Dominika Varga3, Jonathan Smallwood4; 1Department of Psychology, University of York, Heslington, York, United Kingdom, 2Department of Psychology, University of Arizona, Tucson, AZ, USA, 3School of Psychology, University of Sussex, Brighton, United Kingdom, 4Department of Psychology, Queen’s University, Kingston, ON, Canada

How does conceptual representation in the brain change with context? In an fMRI study, we varied the strength of thematic associations between words, from very strong (dog with leash), through intermediate (dog with beach), to unrelated items. We combined representational pattern similarity analysis and computational linguistics to probe the neurocomputational content of these trials. In the ATL, individual word meaning was maintained when items were judged to be unrelated, but not when a linking context was retrieved. In contrast, context-dependent meaning was represented in the left IFG and other sites associated with semantic control. These brain regions showed a dissociation in the effect of associative strength: the ATL supported context-dependent meanings to a greater extent for strong associations, whereas the IFG supported combined meanings even when more control was required. We suggest that the ATL amplifies long-term semantic associations during retrieval but does not directly capture short-term, non-dominant associations.
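As a rough illustration of how representational similarity analysis links computational-linguistic models to brain data, the sketch below correlates a model dissimilarity matrix built from word embeddings with a neural dissimilarity matrix built from multivoxel patterns. The data are random stand-ins; the embedding source, ROI, and distance metrics are assumptions, not the study's choices.

```python
# Illustrative sketch (not the authors' analysis): representational
# similarity analysis comparing a neural RDM with a model RDM built
# from word embeddings. Embeddings and voxel patterns are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_words, n_embed_dims, n_voxels = 40, 300, 200
embeddings = rng.standard_normal((n_words, n_embed_dims))  # e.g., word vectors
patterns = rng.standard_normal((n_words, n_voxels))        # ROI patterns

# Representational dissimilarity matrices (condensed upper triangles).
model_rdm = pdist(embeddings, metric="correlation")
neural_rdm = pdist(patterns, metric="correlation")

# Rank correlation between model and brain: the core RSA statistic.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RSA: rho={rho:.3f}, p={p:.3f}")
```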

Evidence for Multiple Fast Feedforward Hierarchies of Concept Processing in the Human Brain

Srikanth Damera1; 1Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA

The anterior temporal lobe (ATL) has been proposed to act as an amodal concept hub connected to distributed, modality-specific concept representations. However, the extent to which the ATL is needed to coordinate and integrate information across these distributed representations is unclear. To better understand the dynamics of how the brain extracts meaning from sensory stimuli, we conducted a high-density EEG study in which we first trained participants to associate pseudowords with various animal and tool concepts. After training, multivariate pattern classification of EEG signals in sensor and source space revealed the representation of both animal and tool concepts in the left ATL, and of tool concepts in the left IPL, within 250 ms. We then used Granger causality analyses to show that orthography-selective sensors directly modulated activity in the parietal tool-selective cluster. Together, our results provide evidence that communication between domain-specific representations can occur independently of the ATL.
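The sketch below illustrates time-resolved multivariate decoding of the kind described here: a classifier is fit independently at each EEG timepoint to track when category information becomes available. Epoch counts, channel counts, and the injected effect window are invented for illustration and do not reflect the study's actual parameters.

```python
# Illustrative sketch (not the authors' pipeline): time-resolved
# decoding of concept category (animal vs. tool) from EEG epochs,
# fitting a classifier independently at each timepoint. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_epochs, n_channels, n_times = 200, 64, 100  # e.g., 100 samples ~ 0-500 ms
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = rng.integers(0, 2, n_epochs)              # 0 = animal, 1 = tool

# Inject a weak category signal in a mid-latency window for the demo.
X[y == 1, :10, 25:50] += 0.3

# Cross-validated decoding accuracy at every timepoint.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding:", accuracy.max(), "at sample", accuracy.argmax())
```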

Multivariate analysis of the interactions between brain regions reveals category integration in the angular gyrus

Stefano Anzellotti1; 1Department of Psychology and Neuroscience, Boston College, Chestnut Hill, MA, USA

Representing the semantic relationships between objects from different categories is essential for human cognition. However, object representations are largely organized into distinct category-selective regions. How are these category-selective representations integrated? Previous research identified the angular gyrus, anterior temporal lobe (ATL), posterior cingulate, and medial prefrontal cortex as candidate regions for the representation of semantic knowledge. We used multivariate pattern dependence (MVPD) to test whether responses in these regions are better predicted by response patterns across multiple category-selective regions combined than by individual category-selective regions in isolation, and found evidence for category integration in the angular gyrus. We did not find significant category-integration effects in the ATL, but this may be due to methodological limitations.
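A minimal sketch of the MVPD logic, on synthetic data: a candidate integration region is predicted from the multivoxel timecourses of two category-selective regions, separately and combined, and cross-validated variance explained is compared. The region names, sizes, and ridge regularization are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch (not the authors' implementation) of multivariate
# pattern dependence (MVPD): is a target region's multivoxel timecourse
# predicted better by two category-selective regions combined than by
# either alone? All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_timepoints = 400
region_a = rng.standard_normal((n_timepoints, 50))  # e.g., face-selective
region_b = rng.standard_normal((n_timepoints, 50))  # e.g., place-selective
# Target region (e.g., angular gyrus) mixes signal from both sources.
target = (region_a @ rng.standard_normal((50, 80))
          + region_b @ rng.standard_normal((50, 80))
          + 5 * rng.standard_normal((n_timepoints, 80)))

def mvpd_r2(predictors, target):
    # Mean cross-validated R^2 across target voxels.
    return np.mean([cross_val_score(Ridge(alpha=10.0), predictors,
                                    target[:, v], cv=5).mean()
                    for v in range(target.shape[1])])

print("A alone:", mvpd_r2(region_a, target))
print("B alone:", mvpd_r2(region_b, target))
# Integration predicts: combined > either alone.
print("A + B combined:", mvpd_r2(np.hstack([region_a, region_b]), target))
```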

The contribution of perceptual and conceptual information to recognition memory

Galit Yovel1, Adva Shoham1; 1School of Psychological Sciences & Sagol School of Neuroscience, Tel Aviv University, Israel

Recognition memory benefits from conceptual encoding. This effect, originally reported for words, was extended to visual stimuli, showing better recognition following conceptual than perceptual encoding. But what is the nature of the representation underlying this improved recognition? Two hypotheses have been proposed: according to the feature-elaboration hypothesis, conceptual encoding enhances the perceptual representation; according to the conceptual-representation hypothesis, conceptual encoding converts percepts into meaningful concepts. To decide between these hypotheses, we examined the fMRI response during a recognition task to faces that were encoded conceptually/socially vs. perceptually. Socially learned faces engaged the anterior temporal lobe (ATL) face area and the social brain network, whereas perceptual face regions showed no difference between socially and perceptually encoded faces. These findings support the conceptual-representation rather than the feature-elaboration hypothesis, highlighting the importance of conceptual processing mechanisms for recognition memory of visual categories.
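The core comparison in a design like this can be as simple as a paired test on subject-level ROI responses; the hypothetical sketch below uses synthetic betas for the ATL face area under the two encoding conditions, and is not the authors' analysis.

```python
# Illustrative sketch (synthetic data): a paired comparison of ROI
# responses to socially vs. perceptually encoded faces across subjects.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)
n_subjects = 20
# Mean beta in the ATL face area per subject, per encoding condition.
social_betas = 0.5 + 0.4 * rng.standard_normal(n_subjects)
perceptual_betas = 0.1 + 0.4 * rng.standard_normal(n_subjects)

t, p = ttest_rel(social_betas, perceptual_betas)
print(f"social vs. perceptual encoding in ATL face area: t={t:.2f}, p={p:.4f}")
```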