Welcome to SNL 2021
Wednesday, October 6, 11:00 am - 12:30 pm PDT
Adam M. Morgan1, Werner Doyle1, Orrin Devinsky1, Adeen Flinker1; 1NYU School of Medicine
Lexical retrieval is central to language, but the spatial and temporal neural codes that subserve it remain underspecified and elusive. Here we employ direct neural recordings (ECoG) in humans and leverage a machine learning (decoding) approach to elucidate the neural instantiations of two central aspects of lexical retrieval during overt language production: first, the spatiotemporal patterns of neural activity that code for a word’s activation; second, the neural states that correspond to discrete stages of lexical representation (conceptual, phonological, articulatory; Levelt, 1989; Indefrey, 2011). Four patients undergoing neurosurgery for refractory epilepsy repeatedly produced 6 nouns (dog, ninja, etc.) in response to cartoon images while electrical potentials were measured directly from cortex. During a picture-naming block, patients saw an image of a cartoon character and responded overtly (“dog”). Subsequently, cartoon images of the same characters embedded in static scenes were shown while the patients produced corresponding sentences (e.g., “The dog tickled the ninja”). Using cross-validated multi-class classifiers, we were able to predict above chance (p < 0.05, permutation test; accuracy ≈ 22%, chance = 1/6 ≈ 16.7%) which of the 6 nouns a subject was about to produce in the ~500 ms leading up to articulation. Accuracy increased leading up to production onset and then decreased, suggesting that the classifiers capture a neural process akin to lexical activation rather than signatures of articulatory processing (or early visual features, which were removed from analysis). We tested the generalizability of the finding by applying the same trained classifier to nouns produced in sentences, showing above-chance accuracy for the first noun in the sentence.
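The cross-validated multi-class decoding step described above can be sketched as follows. This is a hypothetical illustration with simulated data, not the authors’ pipeline: the feature dimensions, the choice of logistic regression, and the 5-fold cross-validation scheme are all assumptions.

```python
# Illustrative sketch (simulated data, NOT the authors' code): predict which
# of 6 nouns is about to be produced from pre-articulation neural features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_classes = 120, 40, 6

# Simulated "ECoG" features: each noun gets a weak class-specific pattern.
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
patterns = rng.normal(size=(n_classes, n_features))
X = patterns[labels] + rng.normal(scale=2.0, size=(n_trials, n_features))

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, labels, cv=5)  # 5-fold cross-validation
# scores.mean() above chance (1/6 ≈ 0.17) indicates decodable word identity;
# the real analysis compares this against a permutation-based null.
```

In the actual study this would be run on a sliding window of samples relative to articulation onset, yielding the accuracy time course reported above.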
Next, to test for discrete neural stages corresponding to lexical processes, we employed a temporal generalization approach in which we trained classifiers on each time sample and then tested each of these on held-out trials from every time sample (following King & Dehaene, 2014; Gwilliams et al., 2020). If lexical activation involves passing through discrete representational stages that are instantiated neurally, the temporal generalization pattern should show distinct temporal patches representing neural states. Our results provide direct evidence for 2-4 distinct neural states during lexical retrieval within subjects. These states commenced approximately 600 ms prior to articulation and continued until articulation onset. The neural states likely represent temporally stable patterns supporting conceptual, phonological, and articulatory planning processes. Our results provide an important step towards linking neural spatiotemporal codes to theoretical lexical states.
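The temporal generalization analysis (King & Dehaene, 2014) can be sketched as a train-time × test-time accuracy matrix. Everything below is simulated for illustration: the two-class setup, sample counts, and signal window are assumptions made for brevity.

```python
# Hedged sketch of temporal generalization: train a classifier at every time
# sample, test it at every other time sample, and inspect the resulting
# train-time x test-time accuracy matrix. Simulated data, not the study's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_trials, n_times, n_feats = 100, 10, 20
y = rng.integers(0, 2, n_trials)          # two classes for brevity (study used 6)
X = rng.normal(size=(n_trials, n_times, n_feats))
X[y == 1, 3:7, :5] += 1.5                 # a stable "state" in samples 3-6

tr, te = train_test_split(np.arange(n_trials), random_state=0)
gen = np.zeros((n_times, n_times))        # train-time x test-time accuracy
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=500).fit(X[tr, t_train], y[tr])
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X[te, t_test], y[te])
# A square patch of high accuracy over samples 3-6 would indicate a
# temporally stable neural state; a thin diagonal indicates a dynamic code.
```

Distinct square patches in this matrix are what the abstract interprets as discrete neural states.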
Meghan E. McGarry1,2, Katherine J. Midgley1, Phillip J. Holcomb1, Karen Emmorey1; 1San Diego State University, 2University of California, San Diego
Iconicity refers to the presence of a structured mapping between form and meaning in a word or sign. Growing evidence suggests that iconicity plays a role in lexical access and/or the production of signs in American Sign Language (ASL): iconic signs are produced more quickly than non-iconic signs in picture-naming paradigms. The present study explores the effects of iconicity on sign production using both a picture-naming and an English-ASL translation paradigm. Because past studies exploring the effect of iconicity on sign production have used picture stimuli, it is difficult to determine whether the facilitation found in these studies is driven by the overlap between visual features in the picture and the visual phonological features of the sign, thereby allowing the pictures to prime sign production. By including an English-ASL translation task, we hope to determine whether the same facilitatory effect is found for sign production regardless of whether pictures are being named or English words are being translated into ASL. Deaf ASL signers either translated or named 88 items: 44 with iconic signs and 44 with non-iconic signs. The order of the items in each block was counterbalanced across participants, but all participants completed the translation block prior to the picture-naming block to avoid recalling the picture during the translation task. EEG was recorded, and event-related potentials (ERPs) were time-locked to word or picture onset and averaged offline. We investigated whether iconicity modulated the N400, a negative-going component that peaks around 400 ms after stimulus onset. Larger N400 components are associated with increased semantic processing, whether due to semantic incongruity or to an increase in the number of semantic features activated and processed.
Our prior picture-naming study found an increased N400 amplitude and reduced response latencies for iconic compared to non-iconic signs (McGarry et al., 2020). We hypothesize that this result reflects a more robust semantic network for iconic signs and is akin to the larger N400 amplitude observed for concrete words. If the present study replicates this more robust N400 activation for iconic signs in the translation condition, this would indicate that iconic signs are associated with greater activation of features/semantic networks than non-iconic signs, regardless of task. If the N400 effect is exclusive to picture-naming, this will instead suggest that the increased N400 amplitude is related to visual priming from the picture and is task-specific. While data collection and analysis are currently underway, our preliminary results suggest that there is indeed increased N400 activation and reduced response latency in the translation condition for iconic ASL signs compared to non-iconic signs, suggesting a more robust semantic network for iconic signs.
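The ERP procedure the abstract relies on (time-locking epochs to stimulus onset and averaging per condition) can be illustrated with a toy single-channel example. The sampling rate, epoch length, and N400 window below are assumptions, and the data are pure simulated noise; this is not the study’s recording setup.

```python
# Toy illustration of ERP averaging: epoch continuous EEG around stimulus
# onsets and average within condition. A larger (more negative) deflection
# ~400 ms post-onset is the N400. All values are simulated/assumed.
import numpy as np

rng = np.random.default_rng(4)
srate = 250                                   # Hz (assumed sampling rate)
eeg = rng.normal(size=srate * 60)             # 60 s of one simulated channel
onsets = np.arange(2, 55, 2) * srate          # stimulus onsets, in samples
condition = np.tile([0, 1], len(onsets) // 2 + 1)[:len(onsets)]  # 0/1 = non-iconic/iconic

win = srate                                   # 1 s epoch after each onset
epochs = np.stack([eeg[o:o + win] for o in onsets])
erp_iconic = epochs[condition == 1].mean(axis=0)
erp_noniconic = epochs[condition == 0].mean(axis=0)
n400_window = slice(int(0.3 * srate), int(0.5 * srate))   # 300-500 ms
effect = erp_iconic[n400_window].mean() - erp_noniconic[n400_window].mean()
# In real data, a reliably more negative "effect" for iconic signs would be
# the increased N400 amplitude described in the abstract.
```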
Svetlana Pinet1, F.-Xavier Alario2,3, Marieke Longcamp2,3, Daniele Schön2,4, Jean-Rémi King3,5; 1BCBL, 2Aix Marseille Univ, 3CNRS, 4Inserm, 5PSL
The neural bases of language production are difficult to study because mouthing generates muscular artefacts that disrupt neuronal recordings. To bypass this issue, we investigate language production through typing. Like speech, keystrokes are produced through sequences of precise and overlearned movements stemming from high-level linguistic representations. Critically, however, typing movements are performed away from the brain and hence are not expected to perturb the recording of brain activity. To investigate how linguistic representations are translated into successive keystrokes, we aimed to decode individual keystrokes from electrophysiological (EEG) recordings acquired during a picture typing task. We re-analysed the data of a previous study (Pinet et al., 2016) in which 31 participants performed a picture naming task through typing. We implemented multivariate pattern analyses to decode the hand laterality and the finger corresponding to each keystroke as a function of time. Our results show that laterality can be significantly decoded up to 500 ms before each keystroke onset, irrespective of the position of the keystroke in the word. Finger decoding, based on a standard finger-key correspondence, yielded qualitatively similar but much lower decoding performance. Most notably, the decodable time courses of individual keystrokes systematically overlapped with one another. Finally, to identify the spatio-temporal dynamics of these EEG representations, we implemented temporal generalization analyses, training a decoder at each time sample and testing how well it decodes every other time sample. The results show that each keystroke is characterized by a diagonal pattern, meaning that its neural pattern is strongest around the time of the keystroke and does not generalize across the full word. This implies that the neural representation of each keystroke is dynamic.
We discuss how these novel EEG findings reveal the simultaneous representation of successive keystrokes in large-scale neuronal activity. More generally, we show that, similarly to the representation of language comprehension, the representation of language production can be decoded as multiple and rapidly evolving neural codes.
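The per-time-sample laterality decoding described in this abstract can be sketched as a decoding time course. The channel count, trial count, and signal timing below are invented for illustration; the classifier choice is likewise an assumption, not the authors’ exact pipeline.

```python
# Hedged sketch: decode keystroke hand (left vs right) from EEG at each time
# sample relative to keystroke onset. Simulated data, assumed dimensions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_times, n_chans = 80, 12, 32
hand = rng.integers(0, 2, n_trials)           # 0 = left hand, 1 = right hand
eeg = rng.normal(size=(n_trials, n_times, n_chans))
eeg[hand == 1, 6:, :8] += 1.0                 # lateralized signal before keypress

acc = np.array([
    cross_val_score(LogisticRegression(max_iter=500),
                    eeg[:, t], hand, cv=5).mean()
    for t in range(n_times)
])
# acc rises above 0.5 (chance) once the lateralized signal appears; in the
# study this happens up to ~500 ms before keystroke onset.
```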
Samaneh Nemati1, Nicholas Riccardi2, Sara Sayers1, Sarah Newman-Norlund1, Roger Newman-Norlund1, Julius Fridriksson1; 1University of South Carolina, Department of Communication Sciences and Disorders, 2University of South Carolina, Department of Psychology
Introduction: Discourse-related language function is thought to be impaired in neurodegenerative diseases such as dementia and in mild cognitive impairment (MCI). However, little is known about the functional brain networks supporting discourse-related language processing in healthy aging populations. From first principles, it makes sense that brain activity measured during rest, a state which is highly likely to include some generation of internal dialogue, may provide insight into one’s ability to overtly produce narrative discourse. In this study, we evaluate the relationship between resting-state functional connectivity (rsFC) in core/extended language areas and performance on a narrative discourse task (Cat Rescue) in a group of healthy older adults. Methods: Sixty participants (mean age = 66.78 years, SD = 6.98) were recruited as part of the Aging Brain Cohort Study at the University of South Carolina. Participants completed a narrative discourse task in which they told a story with a beginning, middle, and end based on a complex visual scene. We calculated each participant’s fluency factor (maze index) and semantic factor (based on percent nouns, verbs, and pronouns) scores. Each participant underwent a single, 12-minute eyes-closed resting-state fMRI (rsfMRI) scan. Preprocessing of rsfMRI data was done with SPM12 and involved the default preprocessing steps specified by the CONN toolbox. Following image preprocessing and denoising, ROI-to-ROI connectivity (RRC) matrices based on the Harvard-Oxford Atlas were created for each participant. Finally, a univariate general linear model (GLM) analysis was applied to identify significant relationships (FDR-corrected p < 0.05) between discourse measures and rsFC. Results: Our analysis revealed that rsFC of several cortical regions in both the right hemisphere (RH) and the left hemisphere (LH) was significantly correlated with semantic factor scores.
In particular, increased connectivity between right Heschl’s gyrus (HG) and left posterior middle temporal gyrus (pMTG), left posterior supramarginal gyrus (pSMG), left and right temporal pole (TP), and left posterior temporal fusiform cortex (pTFusC) was associated with higher semantic factor scores. Functional connectivity between left Heschl’s gyrus and anterior MTG (aMTG) in the right hemisphere was also positively correlated with semantic factor scores. Whole-brain analysis revealed patterns of functional connectivity outside the core language network that were related to semantic factor scores, including connectivity between the left supplementary motor area (SMA) and right MTG, between the left precentral gyrus (PreCG) and right pTFusC, and between the right PreCG and left pSMG. Discussion: Our data provide evidence that discourse ability (indexed by semantic factor scores) is related to specific patterns of rsFC in both the core and extended language networks. Specifically, discourse quality appears to depend on the strength of interhemispheric connections involving Heschl’s gyrus and areas involved in lexical-semantic processing (pMTG, TP, aMTG) and phonology (pSMG), as well as connectivity of regions involved in motor aspects of speech production (SMA, PreCG). Given the demonstrated utility of discourse in predicting impending cognitive decline, it is critical that researchers better understand the specific brain networks involved in, and predictive of, discourse generation.
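The ROI-to-ROI analysis described above (the actual study used the MATLAB-based CONN toolbox) can be illustrated in outline: correlate ROI time series within each subject to build a connectivity matrix, then relate a given edge’s strength to the behavioral score across subjects. All sizes, ROI labels, and data below are simulated assumptions.

```python
# Illustrative sketch (NOT the CONN/SPM pipeline): build per-subject
# ROI-to-ROI functional connectivity matrices from resting-state time
# series, then test one edge's association with a behavioral score.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_subjects, n_rois, n_vols = 60, 5, 300

semantic_score = rng.normal(size=n_subjects)          # simulated behavior
connectivity = np.zeros((n_subjects, n_rois, n_rois))
for s in range(n_subjects):
    ts = rng.normal(size=(n_vols, n_rois))            # simulated BOLD series
    connectivity[s] = np.corrcoef(ts, rowvar=False)   # ROI x ROI Pearson r

# Fisher z-transform one hypothetical edge (say, "HG"-"pMTG") and test its
# correlation with the semantic factor score across subjects.
edge = np.arctanh(connectivity[:, 0, 1])
r, p = stats.pearsonr(edge, semantic_score)
# In the real analysis, tests over all edges are FDR-corrected (p < 0.05).
```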