5/5 Finally, huge thanks to my co-author and supervisor, Konstantina Kilteni (@kkilteni.bsky.social), for her invaluable input and guidance during this work!
4/5 We propose that the lack of vision reduced the ability of the forward model to sharpen its predictions over time. These findings provide novel insights into how vision contributes to the predictive mechanisms in the brain and advance our understanding of predictive motor control.
Alt text: Perceived intensity of the forces applied in Vision and No vision sessions. (A-B) Baseline-normalized individual PSE values of every trial group in Vision (A) and No vision (B) sessions, overlaid with the linear regression model fitted to the group data. Data are jittered (± 1%) to avoid complete overlap, and the shaded area depicts the 95% confidence interval. The two slope values (β) represent the average slopes in Vision and No vision sessions, and asterisks indicate the significance of slope comparisons against zero (*** p < 0.001). (C) Slopes of the linear regression models fitted to the individual normalized PSE values. In the Vision session (red stripes), the slopes were negative and significantly different from zero, while the slopes in the No vision session (cyan stripes) were not significantly different from zero. The slopes were significantly more negative (i.e., steeper) in the Vision session compared to the No vision session. (D) Intercepts of the linear regression models fitted to the individual normalized PSE values. The intercepts were significantly more negative in the No vision compared to the Vision session. (C, D) The markers represent the individual slope and intercept values, boxplots show the medians and interquartile ranges, and half-violin plots illustrate the data distributions as probability densities. Asterisks indicate the significance of statistical comparisons (* p < 0.05).
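For anyone curious how the slope analysis described above could look in code, here is a minimal sketch in Python. Everything in it is an illustrative assumption: the array shapes, the use of np.polyfit for the per-participant fits, and the scipy.stats t-tests for the comparisons against zero and between sessions. It is not the paper's actual analysis pipeline.

```python
# Minimal sketch (illustrative, not the paper's code): fit a linear
# regression to each participant's baseline-normalized PSE values across
# trial groups, then test slopes against zero and between sessions.
import numpy as np
from scipy import stats

def fit_lines(norm_pse):
    """norm_pse: (n_participants, n_trial_groups) normalized PSE values.
    Returns per-participant slopes and intercepts from a linear fit."""
    x = np.arange(norm_pse.shape[1])           # trial-group index as time axis
    coeffs = np.array([np.polyfit(x, y, deg=1) for y in norm_pse])
    return coeffs[:, 0], coeffs[:, 1]          # slopes, intercepts

# Hypothetical data: 20 participants x 5 trial groups per session
rng = np.random.default_rng(0)
vision = -0.5 * np.arange(5) + rng.normal(0, 1, (20, 5))   # gradual attenuation
no_vision = -1.0 + rng.normal(0, 1, (20, 5))               # uniform attenuation

slopes_v, inter_v = fit_lines(vision)
slopes_nv, inter_nv = fit_lines(no_vision)

print(stats.ttest_1samp(slopes_v, 0.0))      # slope vs zero, Vision
print(stats.ttest_1samp(slopes_nv, 0.0))     # slope vs zero, No vision
print(stats.ttest_rel(slopes_v, slopes_nv))  # Vision vs No vision slopes
print(stats.ttest_rel(inter_v, inter_nv))    # Vision vs No vision intercepts
```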
3/5 We again demonstrated the gradual somatosensory attenuation when movements were performed with vision. Importantly, this temporal tuning was reduced in the absence of visual input: somatosensory perception was attenuated more uniformly, rather than gradually, throughout the movement.
Alt text: An illustration of the experimental setup. The participants completed two experimental sessions in which they performed movements to self-touch either with full vision available (Vision session) or while blindfolded (No vision session). Participants made a reaching movement with their right hand from the starting position towards one of the force sensors placed above their left index or ring finger, concluding the movement with a tap on the sensor with their right index finger. The motion sensor attached to the right index finger tracked participants' movements in every trial. The tactile stimuli were delivered to their left index or ring finger through the probe attached to the motor.
2/5 Here, we investigated the contribution of vision to this temporal attenuation of somatosensory perception. The participants discriminated the forces applied to their left hand during the right hand's reaching movement towards the left hand, while performing movements with and without vision.
1/5 Using the internal forward model, the brain can predict and attenuate self-touch. In our previous study, we showed that these predictions lead to a gradual attenuation of somatosensory perception (i.e., the touch is perceived as weaker) as the movement to self-touch progresses.
Happy to share that our new paper has been published in the European Journal of Neuroscience (@ejneuroscience.bsky.social)! Using psychophysics, we show that vision fine-tunes self-touch predictions, leading to the temporal modulation of somatosensory perception during movements to self-touch.
📣 New preprint 📣
The brain attenuates self-touch, but how does this unfold at the neural level before the touch? We used MEG to find out 🧠🧵👇
New paper out, led by the brilliant (bsky-less) PhD student Ziliang Xiong!
We show that temporal expectations yield both costs and benefits for somatosensory perception and decision-making!
We used psychophysics and computational modelling!
Thread 🧵
www.sciencedirect.com/science/arti...
Lastly, I want to give huge thanks to Xavier Job and Konstantina Kilteni (@kkilteni.bsky.social) who were an essential part of this big work!
Taken together, our results show that sensorimotor predictions dynamically modulate somatosensory perception, in strong agreement with the framework of internal forward models.
We replicated these results in Experiment 2 and demonstrated that, when movements did not generate expectations of self-touch, there was no gradual attenuation of somatosensory perception. This indicates that sensorimotor context is crucial for the temporal modulation of somatosensory perception.
In Experiment 1, we show the temporal tuning of somatosensory perception during the movements to self-touch. The forces felt progressively weaker during the movement, reached their minimum perceived intensity at the time of self-touch, and recovered after the movement ended.
The brain can predict and attenuate self-touch, but how do these predictions impact somatosensation before or after self-touch? We addressed this in behavioural tasks where participants discriminated forces applied to their left hand during the right hand's reaching movement toward the left hand.
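As a rough illustration of how a point of subjective equality (PSE) is typically estimated in such force-discrimination tasks, here is a minimal Python sketch: fit a logistic psychometric function to the proportion of trials in which the comparison force was judged as stronger, and read off the 50% point. The forces, response rates, and parameterization below are made-up assumptions, not the study's data or pipeline.

```python
# Sketch: estimate the PSE from force-discrimination responses by fitting
# a logistic psychometric function. All values are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(force, pse, width):
    """P(judging the comparison force as stronger than the test force)."""
    return 1.0 / (1.0 + np.exp(-(force - pse) / width))

comparison_forces = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])  # Newtons
p_stronger = np.array([0.05, 0.10, 0.30, 0.60, 0.85, 0.95, 1.00])  # response rates

(pse, width), _ = curve_fit(psychometric, comparison_forces, p_stronger,
                            p0=[2.0, 0.5])  # initial guess for PSE and width
print(f"PSE = {pse:.2f} N")  # a PSE below the 2 N test force -> the
                             # self-generated touch felt weaker (attenuation)
```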
The first paper of my PhD is out in iScience (@cp-iscience.bsky.social)! Here, we show that somatosensory perception is dynamically modulated during the movement in a context-dependent manner.
#neuroskyence #psychscisky #Sensorimotor
www.sciencedirect.com/science/arti...