30 results for "Arnal LH"
Search Results
2. Salient 40 Hz sounds probe affective aversion and neural excitability
- Author
-
Schneefeld F, Doelling K, Marchesotti S, Schwartz S, Igloi K, Giraud A-L, and Arnal LH
- Published
- 2022
- Full Text
- View/download PDF
3. Encoding of Prediction in Audio-Visual Speech Processing
- Author
-
Arnal LH, Kell CA, and Giraud AL
- Published
- 2009
- Full Text
- View/download PDF
4. A standardised test to evaluate audio-visual speech intelligibility in French.
- Author
-
Le Rhun L, Llorach G, Delmas T, Suied C, Arnal LH, and Lazard DS
- Abstract
Objective: Lipreading plays a major role in the communication of the hearing impaired, but French lacked a standardised tool to assess it. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST)., Design: Video recordings were created by dubbing the existing audio files., Sample: Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet, noise, and open and closed-set response formats., Results: Lipreading ability (Vo) ranged from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated., Conclusions: The French video material achieved AV benefits similar to those described in the literature for AV MST in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open or closed-set format., Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper., (© 2024 The Authors.)
- Published
- 2024
- Full Text
- View/download PDF
5. Adaptive oscillators support Bayesian prediction in temporal processing.
- Author
-
Doelling KB, Arnal LH, and Assaneo MF
- Subjects
- Humans, Bayes Theorem, Time Perception, Music
- Abstract
Humans excel at predictively synchronizing their behavior with external rhythms, as in dance or music performance. The neural processes underlying rhythmic inferences are debated: whether predictive perception relies on high-level generative models or whether it can readily be implemented locally by hard-coded intrinsic oscillators synchronizing to rhythmic input remains unclear, and different underlying computational mechanisms have been proposed. Here we explore human perception for tone sequences with some temporal regularity at varying rates, but with considerable variability. Next, using a dynamical systems perspective, we successfully model the participants' behavior using an adaptive frequency oscillator which adjusts its spontaneous frequency based on the rate of stimuli. This model better reflects human behavior than a canonical nonlinear oscillator and a predictive ramping model, both widely used for temporal estimation and prediction, and demonstrates that the classical distinction between absolute and relative computational mechanisms can be unified under this framework. In addition, we show that neural oscillators may constitute hard-coded physiological priors, in a Bayesian sense, that reduce temporal uncertainty and facilitate the predictive processing of noisy rhythms. Together, the results show that adaptive oscillators provide an elegant and biologically plausible means to subserve rhythmic inference, reconciling previously incompatible frameworks for temporal inferential processes., Competing Interests: The authors have declared that no competing interests exist., (Copyright: © 2023 Doelling et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.)
- Published
- 2023
- Full Text
- View/download PDF
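The adaptive frequency oscillator described in the abstract above can be illustrated with a minimal, event-driven sketch. This is our own simplification, not the authors' model: the gain parameters, starting frequency, and the phase/frequency update rule are assumptions chosen for illustration.

```python
import math

def adaptive_oscillator(event_times, f0=1.5, k_phase=0.5, k_freq=1.0):
    """Phase oscillator whose intrinsic frequency adapts toward the rate
    of an input tone sequence. Between tones the phase advances at the
    current intrinsic frequency; each tone nudges the phase toward zero
    (mod 2*pi) and shifts the intrinsic frequency in the same direction."""
    omega = 2 * math.pi * f0          # intrinsic angular frequency (rad/s)
    phase, t_prev = 0.0, 0.0
    for t in event_times:
        phase = (phase + omega * (t - t_prev)) % (2 * math.pi)
        err = math.sin(phase)         # >0: oscillator running fast; <0: lagging
        phase -= k_phase * err        # phase correction toward the tone
        omega -= k_freq * err         # frequency adaptation toward stimulus rate
        t_prev = t
    return omega / (2 * math.pi)      # learned frequency (Hz)

# an oscillator starting at 1.5 Hz entrains to a 2 Hz tone sequence
events = [0.5 * i for i in range(1, 21)]
f_learned = adaptive_oscillator(events)   # settles near 2.0
```

Starting from 1.5 Hz, the intrinsic frequency converges to roughly 2 Hz over the 20 tones; this rate-tracking behavior, rather than a fixed intrinsic frequency, is the property the study exploits.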
6. Plasticity After Hearing Rehabilitation in the Aging Brain.
- Author
-
Lazard DS, Doelling KB, and Arnal LH
- Subjects
- Humans, Infant, Aged, Hearing, Aging, Brain, Presbycusis diagnosis, Deafness rehabilitation, Cochlear Implants, Cochlear Implantation, Speech Perception
- Abstract
Age-related hearing loss, presbycusis, is an unavoidable sensory degradation, often associated with the progressive decline of cognitive and social functions, and dementia. It is generally considered a natural consequence of the inner-ear deterioration. However, presbycusis arguably conflates a wide array of peripheral and central impairments. Although hearing rehabilitation maintains the integrity and activity of auditory networks and can prevent or revert maladaptive plasticity, the extent of such neural plastic changes in the aging brain is poorly appreciated. By reanalyzing a large-scale dataset of more than 2200 cochlear implant (CI) users and assessing the improvement in speech perception from 6 to 24 months of use, we show that, although rehabilitation improves speech understanding on average, age at implantation only minimally affects speech scores at 6 months but has a pejorative effect at 24 months post implantation. Furthermore, for each year increase in age, older subjects (>67 years old) were significantly more likely than younger patients to show degraded performance after 2 years of CI use. Secondary analysis reveals three possible plasticity trajectories after auditory rehabilitation to account for these disparities: Awakening, a reversal of deafness-specific changes; Counteracting, a stabilization of additional cognitive impairments; or Decline, independent pejorative processes that hearing rehabilitation cannot prevent. The role of complementary behavioral interventions needs to be considered to potentiate the (re)activation of auditory brain networks.
- Published
- 2023
- Full Text
- View/download PDF
7. The path of voices in our brain.
- Author
-
Morillon B, Arnal LH, and Belin P
- Subjects
- Brain physiology, Brain Mapping, Humans, Temporal Lobe, Auditory Perception physiology, Voice physiology
- Abstract
Categorising voices is crucial for auditory-based social interactions. A recent study by Rupp and colleagues in PLOS Biology capitalises on human intracranial recordings to describe the spatiotemporal pattern of neural activity leading to voice-selective responses in associative auditory cortex., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2022
- Full Text
- View/download PDF
8. Premature commitment to uncertain decisions during human NMDA receptor hypofunction.
- Author
-
Salvador A, Arnal LH, Vinckier F, Domenech P, Gaillard R, and Wyart V
- Subjects
- Adult, Bayes Theorem, Brain drug effects, Brain physiology, Cognition drug effects, Cues, Electroencephalography, Female, Humans, Ketamine administration & dosage, Ketamine pharmacology, Male, Psychometrics, Task Performance and Analysis, Time Factors, Decision Making, Receptors, N-Methyl-D-Aspartate metabolism, Uncertainty
- Abstract
Making accurate decisions based on unreliable sensory evidence requires cognitive inference. Dysfunction of N-methyl-D-aspartate (NMDA) receptors impairs the integration of noisy input in theoretical models of neural circuits, but whether and how this synaptic alteration impairs human inference and confidence during uncertain decisions remains unknown. Here we use placebo-controlled infusions of ketamine to characterize the causal effect of human NMDA receptor hypofunction on cognitive inference and its neural correlates. At the behavioral level, ketamine triggers inference errors and elevated decision uncertainty. At the neural level, ketamine is associated with imbalanced coding of evidence and premature response preparation in electroencephalographic (EEG) activity. Through computational modeling of inference and confidence, we propose that this specific pattern of behavioral and neural impairments reflects an early commitment to inaccurate decisions, which aims at resolving the abnormal uncertainty generated by NMDA receptor hypofunction., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
9. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features.
- Author
-
Proix T, Delgado Saa J, Christen A, Martin S, Pasley BN, Knight RT, Tian X, Poeppel D, Doyle WK, Devinsky O, Arnal LH, Mégevand P, and Giraud AL
- Subjects
- Adult, Brain diagnostic imaging, Brain Mapping, Electrodes, Female, Humans, Imagination, Male, Middle Aged, Phonetics, Young Adult, Brain-Computer Interfaces, Electrocorticography, Language, Speech
- Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult to decode by learning algorithms. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding., (© 2022. The Author(s).)
- Published
- 2022
- Full Text
- View/download PDF
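One standard way to quantify the kind of cross-frequency dynamics the abstract above refers to is phase-amplitude coupling, e.g. a mean-vector-length modulation index. The sketch below is our own illustration, not the paper's exact feature set: the 4 Hz modulating frequency and 0.8 coupling depth are arbitrary, and the signals are generated analytically rather than extracted from iEEG.

```python
import math, cmath

def modulation_index(phase, amp):
    """Mean vector length: |mean(amp * e^{i*phase})| normalized by the mean
    amplitude. Near 0 when amplitude is independent of phase, larger when
    the high-frequency amplitude is locked to the low-frequency phase."""
    z = sum(a * cmath.exp(1j * p) for p, a in zip(phase, amp)) / len(phase)
    return abs(z) / (sum(amp) / len(amp))

n, fs, f_low = 4000, 1000.0, 4.0
phase = [2 * math.pi * f_low * i / fs for i in range(n)]    # 4 Hz phase series
amp_coupled = [1.0 + 0.8 * math.cos(p) for p in phase]      # amplitude locked to phase
amp_flat = [1.0] * n                                        # no coupling
mi_coupled = modulation_index(phase, amp_coupled)           # ~0.4
mi_flat = modulation_index(phase, amp_flat)                 # ~0.0
```

In practice the phase and amplitude series would come from bandpass-filtered, Hilbert-transformed recordings; they are synthesized here to keep the sketch self-contained.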
10. Selective enhancement of low-gamma activity by tACS improves phonemic processing and reading accuracy in dyslexia.
- Author
-
Marchesotti S, Nicolle J, Merlet I, Arnal LH, Donoghue JP, and Giraud AL
- Subjects
- Adolescent, Adult, Auditory Cortex physiopathology, Auditory Cortex radiation effects, Dyslexia physiopathology, Electroencephalography, Evoked Potentials, Auditory physiology, Evoked Potentials, Auditory radiation effects, Female, Humans, Male, Middle Aged, Phonetics, Transcranial Direct Current Stimulation methods, Verbal Behavior physiology, Verbal Behavior radiation effects, Young Adult, Dyslexia therapy, Reading, Speech Perception physiology, Speech Perception radiation effects
- Abstract
The phonological deficit in dyslexia is associated with altered low-gamma oscillatory function in left auditory cortex, but a causal relationship between oscillatory function and phonemic processing has never been established. After confirming a deficit at 30 Hz with electroencephalography (EEG), we applied 20 minutes of transcranial alternating current stimulation (tACS) to transiently restore this activity in adults with dyslexia. The intervention significantly improved phonological processing and reading accuracy as measured immediately after tACS. The effect occurred selectively for a 30-Hz stimulation in the dyslexia group. Importantly, we observed that the focal intervention over the left auditory cortex also decreased 30-Hz activity in the right superior temporal cortex, resulting in reinstating a left dominance for the oscillatory response. These findings establish a causal role of neural oscillations in phonological processing and offer solid neurophysiological grounds for a potential correction of low-gamma anomalies and for alleviating the phonological deficit in dyslexia., Competing Interests: The authors have declared that no competing interests exist.
- Published
- 2020
- Full Text
- View/download PDF
11. Terrifying film music mimics alarming acoustic feature of human screams.
- Author
-
Trevor C, Arnal LH, and Frühholz S
- Subjects
- Acoustics, Animals, Arousal, Cattle, Emotions, Humans, Male, Music, Voice
- Abstract
One way music is thought to convey emotion is by mimicking acoustic features of affective human vocalizations [Juslin and Laukka (2003). Psychol. Bull. 129(5), 770-814]. Regarding fear, it has been informally noted that music for scary scenes in films frequently exhibits a "scream-like" character. Here, this proposition is formally tested. This paper reports acoustic analyses for four categories of audio stimuli: screams, non-screaming vocalizations, scream-like music, and non-scream-like music. Valence and arousal ratings were also collected. Results support the hypothesis that a key feature of human screams (roughness) is imitated by scream-like music and could potentially signal danger through both music and the voice.
- Published
- 2020
- Full Text
- View/download PDF
12. Prominence of delta oscillatory rhythms in the motor cortex and their relevance for auditory and speech perception.
- Author
-
Morillon B, Arnal LH, Schroeder CE, and Keitel A
- Subjects
- Acoustic Stimulation, Humans, Speech, Auditory Perception physiology, Delta Rhythm physiology, Motor Cortex physiology, Speech Perception physiology
- Abstract
In the motor cortex, beta oscillations (∼12-30 Hz) are generally considered a principal rhythm contributing to movement planning and execution. Beta oscillations cohabit and dynamically interact with slow delta oscillations (0.5-4 Hz), but the role of delta oscillations and the subordinate relationship between these rhythms in the perception-action loop remains unclear. Here, we review evidence that motor delta oscillations shape the dynamics of motor behaviors and sensorimotor processes, in particular during auditory perception. We describe the functional coupling between delta and beta oscillations in the motor cortex during spontaneous and planned motor acts. In an active sensing framework, perception is strongly shaped by motor activity, in particular in the delta band, which imposes temporal constraints on the sampling of sensory information. By encoding temporal contextual information, delta oscillations modulate auditory processing and impact behavioral outcomes. Finally, we consider the contribution of motor delta oscillations in the perceptual analysis of speech signals, providing a contextual temporal frame to optimize the parsing and processing of slow linguistic information., (Copyright © 2019 Elsevier Ltd. All rights reserved.)
- Published
- 2019
- Full Text
- View/download PDF
13. Neuroprosthetic Speech: The Ethical Significance of Accuracy, Control and Pragmatics.
- Author
-
Rainey S, Maslen H, Mégevand P, Arnal LH, Fourneret E, and Yvert B
- Subjects
- Brain-Computer Interfaces ethics, Brain-Computer Interfaces standards, Electroencephalography, Humans, Semantics, Communication Devices for People with Disabilities ethics, Communication Devices for People with Disabilities standards, Neural Prostheses ethics, Speech, Alaryngeal
- Abstract
Neuroprosthetic speech devices are an emerging technology that can offer the possibility of communication to those who are unable to speak. Patients with 'locked in syndrome,' aphasia, or other such pathologies can use covert speech (vividly imagining saying something without actual vocalization) to trigger neurally controlled systems capable of synthesizing the speech they would have spoken, but for their impairment. We provide an analysis of the mechanisms and outputs involved in speech mediated by neuroprosthetic devices. This analysis provides a framework for accounting for the ethical significance of accuracy, control, and pragmatic dimensions of prosthesis-mediated speech. We first examine what it means for the output of the device to be accurate, drawing a distinction between technical accuracy on the one hand and semantic accuracy on the other. These are conceptual notions of accuracy. Both technical and semantic accuracy of the device will be necessary (but not yet sufficient) for the user to have sufficient control over the device. Sufficient control is an ethical consideration: we place high value on being able to express ourselves when we want and how we want. Sufficient control of a neural speech prosthesis requires that a speaker can reliably use their speech apparatus as they want to, and can expect their speech to authentically represent them. We draw a distinction between two relevant features which bear on the question of whether the user has sufficient control: voluntariness of the speech and the authenticity of the speech. These can come apart: the user might involuntarily produce an authentic output (perhaps revealing private thoughts) or might voluntarily produce an inauthentic output (e.g., when the output is not semantically accurate). Finally, we consider the role of the interlocutor in interpreting the content and purpose of the communication. These three ethical dimensions raise philosophical questions about the nature of speech, the level of control required for communicative accuracy, and the nature of 'accuracy' with respect to both natural and prosthesis-mediated speech.
- Published
- 2019
- Full Text
- View/download PDF
14. The rough sound of salience enhances aversion through neural synchronisation.
- Author
-
Arnal LH, Kleinschmidt A, Spinelli L, Giraud AL, and Mégevand P
- Subjects
- Acoustic Stimulation, Acoustics, Adolescent, Adult, Auditory Pathways physiology, Drug Resistant Epilepsy surgery, Electrocorticography, Epilepsies, Partial surgery, Female, Humans, Male, Time Factors, Young Adult, Attention, Auditory Cortex physiology, Auditory Perception physiology, Sound
- Abstract
Being able to produce sounds that capture attention and elicit rapid reactions is the prime goal of communication. One strategy, exploited by alarm signals, consists in emitting fast but perceptible amplitude modulations in the roughness range (30-150 Hz). Here, we investigate the perceptual and neural mechanisms underlying aversion to such temporally salient sounds. By measuring subjective aversion to repetitive acoustic transients, we identify a nonlinear pattern of aversion restricted to the roughness range. Using human intracranial recordings, we show that rough sounds do not merely affect local auditory processes but instead synchronise large-scale, supramodal, salience-related networks in a steady-state, sustained manner. Rough sounds synchronise activity throughout superior temporal regions, subcortical and cortical limbic areas, and the frontal cortex, a network classically involved in aversion processing. This pattern correlates with subjective aversion in all these regions, consistent with the hypothesis that roughness enhances auditory aversion through spreading of neural synchronisation.
- Published
- 2019
- Full Text
- View/download PDF
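The roughness range invoked in the abstract above (amplitude modulations of roughly 30-150 Hz) is straightforward to reproduce: amplitude-modulating any carrier at, say, 70 Hz yields a rough, alarm-like sound. The sketch below is our own illustration; the 500 Hz carrier, full modulation depth, and the 70 Hz rate are arbitrary choices within the range the authors report.

```python
import math, cmath

def rough_tone(carrier_hz=500.0, mod_hz=70.0, depth=1.0, dur=0.5, fs=16000):
    """A sinusoidal carrier whose amplitude is modulated at mod_hz;
    modulation rates of ~30-150 Hz are perceived as rough. Samples are
    normalized to stay within [-1, 1]."""
    n = int(dur * fs)
    return [(1 + depth * math.sin(2 * math.pi * mod_hz * i / fs)) / (1 + depth)
            * math.sin(2 * math.pi * carrier_hz * i / fs)
            for i in range(n)]

def envelope_component(samples, f_hz, fs=16000):
    """Single-bin DFT magnitude of the rectified signal: a crude probe of
    how much energy the amplitude envelope carries at f_hz."""
    return abs(sum(abs(s) * cmath.exp(-2j * math.pi * f_hz * i / fs)
                   for i, s in enumerate(samples)))

x = rough_tone()  # its envelope spectrum peaks at the 70 Hz modulation rate
```

Writing `x` to a WAV file (e.g. with the stdlib `wave` module) makes the rough percept audible; at modulation rates well below this range, amplitude modulation is heard as slow fluctuation rather than roughness.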
15. Hierarchical Predictive Information Is Channeled by Asymmetric Oscillatory Activity.
- Author
-
Giraud AL and Arnal LH
- Subjects
- Animals, Brain, Primates
- Abstract
Predictive coding and neural oscillations are two descriptive levels of brain functioning whose overlap is not yet understood. Chao et al. (2018) now show that hierarchical predictive coding is instantiated by asymmetric information channeling in the γ and α/β oscillatory ranges., (Copyright © 2018 Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
16. Proactive Sensing of Periodic and Aperiodic Auditory Patterns.
- Author
-
Rimmele JM, Morillon B, Poeppel D, and Arnal LH
- Subjects
- Humans, Anticipation, Psychological physiology, Auditory Perception physiology, Brain Waves physiology, Time Factors, Time Perception physiology
- Abstract
The ability to predict when something will happen facilitates sensory processing and the ensuing computations. Building on the observation that neural activity entrains to periodic stimulation, leading neurophysiological models imply that temporal predictions rely on oscillatory entrainment. Although they provide a sufficient solution to predict periodic regularities, these models are challenged by a series of findings that question their suitability to account for temporal predictions based on aperiodic regularities. Aiming for a more comprehensive model of how the brain anticipates 'when' in auditory contexts, we emphasize the capacity of motor and higher-order top-down systems to prepare sensory processing in a proactive and temporally flexible manner. Focusing on speech processing, we illustrate how this framework leads to new hypotheses., (Copyright © 2018 Elsevier Ltd. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
17. Explaining individual variation in paternal brain responses to infant cries.
- Author
-
Li T, Horta M, Mascaro JS, Bijanki K, Arnal LH, Adams M, Barr RG, and Rilling JK
- Subjects
- Adult, Aging physiology, Aging psychology, Brain Mapping, Emotions physiology, Female, Humans, Individuality, Infant, Infant, Newborn, Magnetic Resonance Imaging, Male, Neural Pathways diagnostic imaging, Neural Pathways physiology, Paternal Behavior psychology, Pattern Recognition, Physiological physiology, Social Perception, Testosterone metabolism, Young Adult, Auditory Perception physiology, Brain diagnostic imaging, Brain physiology, Crying, Parent-Child Relations, Paternal Behavior physiology
- Abstract
Crying is the principal means by which newborn infants shape parental behavior to meet their needs. While this mechanism can be highly effective, infant crying can also be an aversive stimulus that leads to parental frustration and even abuse. Fathers have recently become more involved in direct caregiving activities in modern, developed nations, and fathers are more likely than mothers to physically abuse infants. In this study, we attempt to explain variation in the neural response to infant crying among human fathers, with the hope of identifying factors that are associated with a more or less sensitive response. We imaged brain function in 39 first-time fathers of newborn infants as they listened to both their own and a standardized unknown infant cry stimulus, as well as auditory control stimuli, and evaluated whether these neural responses were correlated with measured characteristics of fathers and infants that were hypothesized to modulate these responses. Fathers also provided subjective ratings of each cry stimulus on multiple dimensions. Fathers showed widespread activation to both own and unknown infant cries in neural systems involved in empathy and approach motivation. There was no significant difference in the neural response to the own vs. unknown infant cry, and many fathers were unable to distinguish between the two cries. Comparison of these results with previous studies in mothers revealed a high degree of similarity between first-time fathers and first-time mothers in the pattern of neural activation to newborn infant cries. Further comparisons suggested that younger infant age was associated with stronger paternal neural responses, perhaps due to hormonal or novelty effects. 
In our sample, older fathers found infant cries less aversive and had an attenuated response to infant crying in both the dorsal anterior cingulate cortex (dACC) and the anterior insula, suggesting that compared with younger fathers, older fathers may be better able to avoid the distress associated with empathic over-arousal in response to infant cries. A principal components analysis revealed that fathers with more negative emotional reactions to the unknown infant cry showed decreased activation in the thalamus and caudate nucleus, regions expected to promote positive parental behaviors, as well as increased activation in the hypothalamus and dorsal ACC, again suggesting that empathic over-arousal might result in negative emotional reactions to infant crying. In sum, our findings suggest that infant age, paternal age and paternal emotional reactions to infant crying all modulate the neural response of fathers to infant crying. By identifying neural correlates of variation in paternal subjective reactions to infant crying, these findings help lay the groundwork for evaluating the effectiveness of interventions designed to increase paternal sensitivity and compassion., (Copyright © 2018 Elsevier Inc. All rights reserved.)
- Published
- 2018
- Full Text
- View/download PDF
18. Entrained delta oscillations reflect the subjective tracking of time.
- Author
-
Arnal LH and Kleinschmidt AK
- Abstract
The ability to precisely anticipate the timing of upcoming events at the time-scale of seconds is essential to predict objects' trajectories or to select relevant sensory information. What neurophysiological mechanism underlies the temporal precision in anticipating the occurrence of events? In a recent article [1], we demonstrated that the sensori-motor system predictively controls neural oscillations in time to optimize sensory selection. However, whether and how the same oscillatory processes can be used to keep track of elapsing time and evaluate short durations remains unclear. Here, we aim at testing the hypothesis that the brain tracks durations by converting (external, objective) elapsing time into an (internal, subjective) oscillatory phase-angle. To test this, we measured magnetoencephalographic oscillatory activity while participants performed a delayed-target detection task. In the delayed condition, we observe that trials that are perceived as longer are associated with faster delta-band oscillations. This suggests that the subjective indexing of time is reflected in the range of phase-angles covered by delta oscillations during the pre-stimulus period. This result provides new insights into how we predict and evaluate temporal structure and supports models in which the active entrainment of sensori-motor oscillatory dynamics is exploited to track elapsing time.
- Published
- 2017
- Full Text
- View/download PDF
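The hypothesis in the abstract above, that elapsing time is read out as the phase-angle covered by a delta oscillation, can be made concrete with a toy model. This is entirely our own illustrative readout, not the article's analysis: the 2 Hz internal reference frequency and the linear decoding rule are assumptions.

```python
import math

def subjective_duration(true_duration_s, delta_hz, reference_hz=2.0):
    """Elapsed time encoded as the phase-angle a delta oscillation covers,
    then decoded under an assumed internal reference frequency: a
    faster-than-reference oscillation covers more phase in the same
    objective interval, so the interval reads out as longer."""
    phase_covered = 2 * math.pi * delta_hz * true_duration_s   # radians
    return phase_covered / (2 * math.pi * reference_hz)        # seconds

# a trial with faster delta (2.5 Hz) is judged longer than veridical
veridical = subjective_duration(1.0, 2.0)   # -> 1.0 s
stretched = subjective_duration(1.0, 2.5)   # -> 1.25 s
```

This reproduces the direction of the reported effect (faster pre-stimulus delta, longer perceived duration) without any claim about the actual decoding mechanism.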
19. θ-Band and β-Band Neural Activity Reflects Independent Syllable Tracking and Comprehension of Time-Compressed Speech.
- Author
-
Pefkou M, Arnal LH, Fontolan L, and Giraud AL
- Subjects
- Adult, Electroencephalography methods, Female, Humans, Male, Random Allocation, Speech physiology, Time Factors, Young Adult, Acoustic Stimulation methods, Auditory Cortex physiology, Beta Rhythm physiology, Comprehension physiology, Speech Perception physiology, Theta Rhythm physiology
- Abstract
Recent psychophysics data suggest that speech perception is not limited by the capacity of the auditory system to encode fast acoustic variations through neural γ activity, but rather by the time given to the brain to decode them. Whether the decoding process is bounded by the capacity of θ rhythm to follow syllabic rhythms in speech, or constrained by a more endogenous top-down mechanism, e.g., involving β activity, is unknown. We addressed the dynamics of auditory decoding in speech comprehension by challenging syllable tracking and speech decoding using comprehensible and incomprehensible time-compressed auditory sentences. We recorded EEGs in human participants and found that neural activity in both θ and γ ranges was sensitive to syllabic rate. Phase patterns of slow neural activity consistently followed the syllabic rate (4-14 Hz), even when this rate went beyond the classical θ range (4-8 Hz). The power of θ activity increased linearly with syllabic rate but showed no sensitivity to comprehension. Conversely, the power of β (14-21 Hz) activity was insensitive to the syllabic rate, yet reflected comprehension on a single-trial basis. We found different long-range dynamics for θ and β activity, with β activity building up in time while more contextual information becomes available. This is consistent with the roles of θ and β activity in stimulus-driven versus endogenous mechanisms. These data show that speech comprehension is constrained by concurrent stimulus-driven θ and low-γ activity, and by endogenous β activity, but not primarily by the capacity of θ activity to track the syllabic rhythm. SIGNIFICANCE STATEMENT Speech comprehension partly depends on the ability of the auditory cortex to track syllable boundaries with θ-range neural oscillations. The reason comprehension drops when speech is accelerated could hence be because θ oscillations can no longer follow the syllabic rate. 
Here, we presented subjects with comprehensible and incomprehensible accelerated speech, and show that neural phase patterns in the θ band consistently reflect the syllabic rate, even when speech becomes too fast to be intelligible. The drop in comprehension, however, is signaled by a significant decrease in the power of low-β oscillations (14-21 Hz). These data suggest that speech comprehension is not limited by the capacity of θ oscillations to adapt to syllabic rate, but by an endogenous decoding process., (Copyright © 2017 the authors 0270-6474/17/377930-09$15.00/0.)
- Published
- 2017
- Full Text
- View/download PDF
20. [The acoustic niche of screams].
- Author
-
Arnal LH
- Subjects
- Behavior physiology, Brain Mapping, Communication, Crying psychology, Humans, Infant, Newborn, Magnetic Resonance Imaging, Nerve Net physiology, Acoustics, Auditory Perception physiology, Crying physiology
- Published
- 2016
- Full Text
- View/download PDF
21. Temporal Prediction in lieu of Periodic Stimulation.
- Author
-
Morillon B, Schroeder CE, Wyart V, and Arnal LH
- Subjects
- Adolescent, Adult, Female, Forecasting, Humans, Male, Middle Aged, Time Factors, Young Adult, Acoustic Stimulation methods, Auditory Cortex physiology, Auditory Perception physiology, Periodicity, Photic Stimulation methods, Reaction Time physiology
- Abstract
Predicting not only what will happen, but also when it will happen is extremely helpful for optimizing perception and action. Temporal predictions driven by periodic stimulation increase perceptual sensitivity and reduce response latencies. At the neurophysiological level, a single mechanism has been proposed to mediate this twofold behavioral improvement: the rhythmic entrainment of slow cortical oscillations to the stimulation rate. However, temporal regularities can occur in aperiodic contexts, suggesting that temporal predictions per se may be dissociable from entrainment to periodic sensory streams. We investigated this possibility in two behavioral experiments, asking human participants to detect near-threshold auditory tones embedded in streams whose temporal and spectral properties were manipulated. While our findings confirm that periodic stimulation reduces response latencies, in agreement with the hypothesis of a stimulus-driven entrainment of neural excitability, they further reveal that this motor facilitation can be dissociated from the enhancement of auditory sensitivity. Perceptual sensitivity improvement is unaffected by the nature of temporal regularities (periodic vs aperiodic), but contingent on the co-occurrence of a fulfilled spectral prediction. Altogether, the dissociation between predictability and periodicity demonstrates that distinct mechanisms flexibly and synergistically operate to facilitate perception and action., (Copyright © 2016 the authors 0270-6474/16/362342-06$15.00/0.)
- Published
- 2016
- Full Text
- View/download PDF
22. Delta-Beta Coupled Oscillations Underlie Temporal Prediction Accuracy.
- Author
-
Arnal LH, Doelling KB, and Poeppel D
- Subjects
- Acoustic Stimulation, Adolescent, Adult, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Periodicity, Psychoacoustics, Spectrum Analysis, Statistics as Topic, Time Factors, Young Adult, Auditory Perception physiology, Beta Rhythm physiology, Brain Mapping, Decision Making physiology, Delta Rhythm physiology
- Abstract
The ability to generate temporal predictions is fundamental for adaptive behavior. Precise timing at the time-scale of seconds is critical, for instance to predict trajectories or to select relevant information. What mechanisms form the basis for such accurate timing? Recent evidence suggests that (1) temporal predictions adjust sensory selection by controlling neural oscillations in time and (2) the motor system plays an active role in inferring "when" events will happen. We hypothesized that oscillations in the delta and beta bands are instrumental in predicting the occurrence of auditory targets. Participants listened to brief rhythmic tone sequences and detected target delays while undergoing magnetoencephalography recording. Prior to target occurrence, we found that coupled delta (1-3 Hz) and beta (18-22 Hz) oscillations temporally align with upcoming targets and bias decisions towards correct responses, suggesting that delta-beta coupled oscillations underpin prediction accuracy. Subsequent to target occurrence, subjects update their decisions using the magnitude of the alpha-band (10-14 Hz) response as internal evidence of target timing. These data support a model in which the orchestration of oscillatory dynamics between sensory and motor systems is exploited to accurately select sensory information in time., (© The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.)
- Published
- 2015
- Full Text
- View/download PDF
23. Human screams occupy a privileged niche in the communication soundscape.
- Author
-
Arnal LH, Flinker A, Kleinschmidt A, Giraud AL, and Poeppel D
- Subjects
- Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Sound, Young Adult, Speech Acoustics, Speech Intelligibility, Speech Perception
- Abstract
Screaming is arguably one of the most relevant communication signals for survival in humans. Despite their practical relevance and their theoretical significance as innate [1] and virtually universal [2, 3] vocalizations, what makes screams a unique signal and how they are processed is not known. Here, we use acoustic analyses, psychophysical experiments, and neuroimaging to isolate the features that give screams their alarming nature, and we track their processing in the human brain. Using the modulation power spectrum (MPS [4, 5]), a recently developed, neurally informed characterization of sounds, we demonstrate that human screams cluster within a restricted portion of the acoustic space (between ∼30 and 150 Hz modulation rates) that corresponds to a well-known perceptual attribute, roughness. In contrast to the received view that roughness is irrelevant for communication [6], our data reveal that the acoustic space occupied by the rough vocal regime is segregated from other signals, including speech, a prerequisite to avoid false alarms in normal vocal communication. We show that roughness is present in natural alarm signals as well as in artificial alarms and that the presence of roughness in sounds boosts their detection in various tasks. Using fMRI, we show that acoustic roughness engages subcortical structures critical to rapidly appraise danger. Altogether, these data demonstrate that screams occupy a privileged acoustic niche that, being separated from other communication signals, ensures their biological and ultimately social efficiency., (Copyright © 2015 Elsevier Ltd. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
24. Temporal coding in the auditory cortex.
- Author
-
Arnal LH, Poeppel D, and Giraud AL
- Subjects
- Acoustic Stimulation, Animals, Functional Laterality, Humans, Time Factors, Auditory Cortex physiology, Auditory Perception physiology
- Abstract
Speech is a complex acoustic signal showing a quasiperiodic structure at several timescales. Integrated neural signals recorded in the cortex also show periodicity at different timescales. In this chapter we outline the neural mechanisms that potentially allow the auditory cortex to segment and encode continuous speech, focusing on how the human auditory cortex uses the temporal structure of the acoustic signal to extract phonemes and syllables, the two major constituents of connected speech. We argue that the quasiperiodic structure of collective neural activity in auditory cortex represents the ideal mechanistic infrastructure to fractionate continuous speech into linguistic constituents of variable sizes., (© 2015 Elsevier B.V. All rights reserved.)
- Published
- 2015
- Full Text
- View/download PDF
25. Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing.
- Author
-
Doelling KB, Arnal LH, Ghitza O, and Poeppel D
- Subjects
- Acoustic Stimulation, Adolescent, Adult, Cues, Female, Humans, Magnetoencephalography, Male, Young Adult, Auditory Cortex physiology, Comprehension physiology, Delta Rhythm physiology, Speech Perception physiology, Theta Rhythm physiology
- Abstract
A growing body of research suggests that intrinsic slow (<10 Hz) neuronal oscillations in auditory cortex track incoming speech and other spectro-temporally complex auditory signals. Within this framework, several recent studies have identified critical-band temporal envelopes as the specific acoustic feature reflected by the phase of these oscillations. However, how this alignment between speech acoustics and neural oscillations might underpin intelligibility is unclear. Here we test the hypothesis that the 'sharpness' of temporal fluctuations in the critical-band envelope acts as a temporal cue to the syllabic rate of speech, driving delta-theta rhythms to track the stimulus and facilitate intelligibility. Using magnetoencephalographic recordings, we show that removing the temporal fluctuations that occur at the syllabic rate reduces envelope-tracking activity, and that artificially reinstating these fluctuations restores it. These changes in tracking correlate with the intelligibility of the stimulus. We interpret these findings as evidence that sharp events in the stimulus cause cortical rhythms to re-align and parse the stimulus into syllable-sized chunks for further decoding. Together, the results suggest that the sharpness of fluctuations in the stimulus, as reflected in the cochlear output, drives oscillatory activity to track and entrain to the stimulus at its syllabic rate. This process likely facilitates parsing of the stimulus into meaningful chunks appropriate for subsequent decoding, enhancing perception and intelligibility., (Copyright © 2013 Elsevier Inc. All rights reserved.)
- Published
- 2014
- Full Text
- View/download PDF
26. Predicting "When" Using the Motor System's Beta-Band Oscillations.
- Author
-
Arnal LH
- Published
- 2012
- Full Text
- View/download PDF
27. Asymmetric function of theta and gamma activity in syllable processing: an intra-cortical study.
- Author
-
Morillon B, Liégeois-Chauvel C, Arnal LH, Bénar CG, and Giraud AL
- Abstract
Low-gamma (25-45 Hz) and theta (4-8 Hz) oscillations are proposed to underpin the integration of phonemic and syllabic information, respectively. How these two scales of analysis divide functions across hemispheres is unclear. We analyzed cortical responses from an epileptic patient with a rare bilateral electrode implantation (stereotactic EEG) in primary (A1/BA41 and A2/BA42) and association auditory cortices (BA22). Using time-frequency analyses, we confirmed the dominance of 5-6 Hz theta activity in the right, and of low-gamma (25-45 Hz) activity in the left, primary auditory cortices (A1/A2), during both resting state and syllable processing. We further detected high-theta (7-8 Hz) resting activity in left primary, but also associative, auditory regions. In left BA22, its phase correlated with high-gamma induced power. Such a hierarchical relationship across theta and gamma frequency bands (theta/gamma phase-amplitude coupling) could index the process by which the neural code shifts from stimulus-feature to phonological encoding, and is associated with the transition from evoked to induced power responses. These data suggest that theta and gamma activity in the right and left auditory cortices serve different functions. They support a scheme in which slow parsing of the acoustic information dominates in the right hemisphere at a syllabic (5-6 Hz) rate, while the left auditory cortex exhibits a more complex cascade of oscillations, reflecting the possible extraction of transient acoustic cues at a fast (~25-45 Hz) rate, subsequently integrated at a slower, e.g., syllabic, one. Slow oscillations could functionally participate in speech processing by structuring gamma activity in left BA22, where abstract percepts emerge.
- Published
- 2012
- Full Text
- View/download PDF
28. Cortical oscillations and sensory predictions.
- Author
-
Arnal LH and Giraud AL
- Subjects
- Bayes Theorem, Humans, Anticipation, Psychological physiology, Attention physiology, Brain Waves physiology, Perception physiology
- Abstract
Many theories of perception are anchored in the central notion that the brain continuously updates an internal model of the world to infer the probable causes of sensory events. In this framework, the brain needs not only to predict the causes of sensory input, but also when they are most likely to happen. In this article, we review the neurophysiological bases of sensory predictions of 'what' (predictive coding) and 'when' (predictive timing), with an emphasis on low-level oscillatory mechanisms. We argue that neural rhythms offer distinct and adapted computational solutions to predicting 'what' is going to happen in the sensory environment and 'when'., (Copyright © 2012 Elsevier Ltd. All rights reserved.)
- Published
- 2012
- Full Text
- View/download PDF
29. Transitions in neural oscillations reflect prediction errors generated in audiovisual speech.
- Author
-
Arnal LH, Wyart V, and Giraud AL
- Subjects
- Adult, Female, Humans, Magnetoencephalography, Male, Middle Aged, Auditory Perception physiology, Cerebral Cortex physiology, Neural Pathways, Speech physiology, Visual Perception physiology
- Abstract
According to the predictive coding theory, top-down predictions are conveyed by backward connections and prediction errors are propagated forward across the cortical hierarchy. Using MEG in humans, we show that violating multisensory predictions causes a fundamental and qualitative change in both the frequency and spatial distribution of cortical activity. When visual speech input correctly predicted auditory speech signals, a slow delta regime (3-4 Hz) developed in higher-order speech areas. In contrast, when auditory signals invalidated predictions inferred from vision, a low-beta (14-15 Hz) / high-gamma (60-80 Hz) coupling regime appeared locally in a multisensory area (area STS). This frequency shift in oscillatory responses scaled with the degree of audio-visual congruence and was accompanied by increased gamma activity in lower sensory regions. These findings are consistent with the notion that bottom-up prediction errors are communicated in predominantly high (gamma) frequency ranges, whereas top-down predictions are mediated by slower (beta) frequencies.
- Published
- 2011
- Full Text
- View/download PDF
30. Dual neural routing of visual facilitation in speech processing.
- Author
-
Arnal LH, Morillon B, Kell CA, and Giraud AL
- Subjects
- Adult, Brain blood supply, Brain Mapping, Cerebrovascular Circulation, Feedback, Psychological physiology, Female, Humans, Magnetic Resonance Imaging, Magnetoencephalography, Male, Middle Aged, Models, Neurological, Motion Perception physiology, Neural Pathways physiology, Psychoacoustics, Time Factors, Young Adult, Brain physiology, Speech Perception physiology, Visual Perception physiology
- Abstract
Viewing our interlocutor facilitates speech perception, unlike, for instance, when we speak on the telephone. Several neural routes and mechanisms could account for this phenomenon. Using magnetoencephalography, we show that when seeing the interlocutor, the latencies of auditory responses (M100) shorten as speech becomes more predictable from visual input, whether the auditory signal is congruent with the visual input or not. Incongruence of auditory and visual input affected auditory responses approximately 20 ms after latency shortening was detected, indicating that initial content-dependent auditory facilitation by vision is followed by a feedback signal that reflects the error between expected and received auditory input (prediction error). We then used functional magnetic resonance imaging and confirmed that distinct routes of visual information to auditory processing underlie these two functional mechanisms. Functional connectivity between visual motion and auditory areas depended on the degree of visual predictability, whereas connectivity between the superior temporal sulcus and both auditory and visual motion areas was driven by audiovisual (AV) incongruence. These results establish two distinct mechanisms by which the brain uses potentially predictive visual information to improve auditory perception. A fast direct corticocortical pathway conveys visual motion parameters to auditory cortex, and a slower, indirect feedback pathway signals the error between visual prediction and auditory input.
- Published
- 2009
- Full Text
- View/download PDF