An extraordinary experience!! Huge thanks to all mentors and organizers.
First paper is now out in Cortex! We find behavioral and neural evidence for non-conscious speech processing, using a new dual-task paradigm that creates repeated episodes of inattentional deafness without masking or degrading stimuli. @deouell.bsky.social
www.sciencedirect.com/science/arti...
Sometimes when diving into a new topic, you don't even know the right keywords to look for. In that sense LLMs can be helpful right from the start, as long as you enter the papers yourself
If you mean logistic-regression-like for proportion data, then yes, beta regression. But note that on the logit scale, 50% to 40% is not like 5% to 4%: the latter is actually a smaller effect size in log odds ratio (which makes sense, because the change from 95% to 96% is tiny).
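A quick sketch of the log-odds arithmetic behind this point (just illustrative, not from the thread):

```python
import math

def log_odds(p):
    """Logit (log odds) of a probability p."""
    return math.log(p / (1 - p))

# Effect size on the log-odds scale for each drop
effect_50_to_40 = log_odds(0.50) - log_odds(0.40)  # ~0.405
effect_5_to_4 = log_odds(0.05) - log_odds(0.04)    # ~0.234

print(round(effect_50_to_40, 3))  # 0.405
print(round(effect_5_to_4, 3))    # 0.234
```

So the 5% to 4% change is the smaller effect in log odds, and by symmetry it equals the 95% to 96% change in magnitude.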
1/3) This may be a very important paper: it suggests that there are no prediction-error-encoding neurons in sensory areas of cortex:
www.biorxiv.org/content/10.1...
I personally am a big fan of the idea that cortical regions (allo and neo) are doing sequence prediction.
But...
When we listen to speech, we do it while constantly predicting upcoming content. Is this prediction associated with the subjective experience of engaged, conscious listening? What happens when we fail to listen? Come take a look at my poster (P116) tomorrow at @assc28.bsky.social 16:30
The conclusions from 5 EEG and behavior studies draw a coherent picture: when speech is task-relevant and supraliminal, inattentional deafness might not be absolute: we can process speech content non-consciously, and use its meaning to prioritize information for consciousness.
These conditions, without masking, allowed repeated cases of inattentional deafness. We then asked (1) which words are detected more often (the answer will surprise you!) and (2) what happens in the brain when we miss a word, and is it goal-dependent?
We are often too busy to listen to things we need to notice. How can we study the very frequent case of "hearing without listening" without maybe-too-aggressive masking? We developed a new dual task that requires noticing unexpected spoken words during visual task performance.
Opening a new window into auditory awareness with two new preprints!
Conscious Detection of Spoken Words Depends on Their Valence
osf.io/preprints/ps...
Neural Markers of Speech Processing During Inattentional Deafness osf.io/preprints/ps...