Paradigm and behaviour. Left: Stimuli. Sixty word pairs were created (e.g., sea-tea) in which the words within a pair sounded similar and could each be associated with one of six semantic contexts (e.g., sea-tea corresponding to nature-food). To decrease clarity and induce ambiguity, words were slightly degraded to sound like a whisper and morphed into an intermediate acoustic signal between two words from two different contexts. Morphs were validated in a separate validation experiment (see Stimulus creation). Right: EEG part one with morphed words. Trials began with a fixation cross, followed by a visual speaker cue. Morphs were presented binaurally. Participants then indicated the word they had heard by button press. Finally, feedback was given in a speaker-specific manner, such that each speaker could be associated with one specific semantic context. Faces were generated using FaceGen.
#Speech comprehension relies on prediction, but does the #brain prioritize expected or unexpected info? @fabianschneider.bsky.social & @helenblank.bsky.social show that sharpening of #SensoryRepresentations & #PredictionError processes co-exist at different levels @plosbiology.org 🧪 plos.io/49kkNs0