I recently came across this video of the NCS2020 talk “Decoding the neural processing of speech” by Tobias Reichenbach, PhD, from Imperial College London, held last February at the Institute of Neurosciences of the University of Barcelona.
You can read a short summary of the talk before watching it:
Summary. Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained by applying machine learning to neuroimaging data of people listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech, such as pitch, amplitude fluctuations and linguistic surprisal, to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to pitch, and at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures allow for diagnosing disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in the diagnostics of brain disorders.
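A common way to build the kind of statistical model the abstract mentions is a linear temporal response function (TRF): a regression that maps a speech feature, such as the amplitude envelope, onto the recorded neural signal across a set of time lags. The sketch below is purely illustrative and is not the speaker's actual pipeline; the sampling rate, lag window, ridge penalty, and simulated signals are all assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical illustration of a TRF analysis (not the talk's real code) ---
fs = 100        # assumed sampling rate in Hz
n = 2000        # ~20 s of data
lags = 20       # model lags 0..190 ms at 100 Hz

# Toy "amplitude fluctuations" of speech: a smoothed positive random signal.
envelope = np.convolve(np.abs(rng.standard_normal(n)),
                       np.ones(10) / 10, mode="same")

# Ground-truth kernel standing in for the brain's response to the envelope.
t = np.arange(lags)
true_trf = np.exp(-t / 5.0) * np.sin(t / 3.0)

# Simulated EEG = envelope convolved with the kernel, plus sensor noise.
eeg = np.convolve(envelope, true_trf)[:n] + 0.1 * rng.standard_normal(n)

# Lagged design matrix: column k holds the envelope delayed by k samples.
X = np.column_stack([np.roll(envelope, k) for k in range(lags)])
X[:lags, :] = 0.0               # discard wrapped-around samples

# Ridge regression recovers the TRF from the noisy recording.
lam = 1.0                       # assumed ridge penalty
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)

# The estimated kernel should correlate strongly with the true one.
r = np.corrcoef(trf_hat, true_trf)[0, 1]
```

In a real study the envelope would come from the speech audio, the response from EEG or MEG channels, and the ridge penalty from cross-validation; the point here is only the structure of the model: lagged stimulus features regressed onto neural data.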
Here it goes!
Article written by Irene Vigué-Guix, PhD student in ICT working on ongoing brain oscillations and behaviour modulation, with an undergraduate degree in Biomedical Engineering and a Master's in Brain and Cognition. The original publication appears here.