A cortical network processes auditory error signals during human speech production to maintain fluency
Müge Özker, Werner K Doyle, Orrin Devinsky, Adeen Flinker
Hearing one’s own voice is critical for fluent speech production, as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce the intended speech. We localized the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their own voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region not previously implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the two regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.
Neural correlates of sign language production revealed by electrocorticography
Jennifer Shum, Lora Fanda, Patricia Dugan, Werner K Doyle, Orrin Devinsky, Adeen Flinker
Objective: The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics in comparison with speech production, we utilized intracranial electrocorticography during a battery of language tasks.
Methods: We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient who had intact hearing and was bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production.
Results: We identified two spatially distinct cortical networks: a ventral network for speech production and a dorsal network for sign production. Sign production recruited peri-rolandic, parietal, and posterior temporal regions, while speech production recruited frontal, peri-sylvian, and peri-rolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting a role for this region in planning and producing sign language.
Conclusions: Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.