Free access dataset of co-registered Ultrasound and EMA speech data. 200 sentences. Just requires the free demo version of AAA software. articulateinstruments.com/ema-ultrasou...
Casto et al. systematically examine language-responsive regions of the cerebellum with precision fMRI. They find one region that closely resembles the neocortical language network in its selectivity for language and response to linguistic manipulations. They also find three mixed-selective regions that respond to language but also to non-linguistic inputs.
"The cerebellar components of the human language network" www.cell.com/neuron/fullt...
@coltoncasto.bsky.social, Evelina Fedorenko & colleagues
@cp-neuron.bsky.social
The Speech Motor Neuroscience Group at the University of Wisconsin–Madison is inviting applications for an NIH-funded postdoctoral research position in speech motor control and speech motor neuroscience. Details can be found under the “postdoctoral researchers” tab: blab.wisc.edu/join-the-lab/
Attending many great talks at #ESSD2025 in Athens, and a great opportunity to present our work on using the compartmental tongue theory to reshape how we quantify tongue movement in swallowing.
Happy World Octopus Day. Here's a recent paper demonstrating that octopuses have a sense of body ownership similar to that seen in mammals, including rodents.
www.cell.com/current-biol...
Temporal integration in human auditory cortex is predominantly yoked to absolute time www.nature.com/articles/s41... There's a difference between integrating across absolute time and integrating across structure (e.g., phonemes). Do cortical computations reflect time or structure? Results showed time-yoked computations ⏱️
Note all the articulation happening in the posterior tongue and hyoid which is not captured by EMA.
Co-registered EMA and ultrasound. From top left: ultrasound with tongue contour; ultrasound keypoints; 3D head with EMA sensors; spectrogram; glossogram showing vocal tract constrictions in red and cavities in blue; waveform. Movie created by the AAA app. Use settings to hear audio.
Yup.
🚨 Open PhD Position – Grenoble, France 🚨
Join us at GIPSA-lab to explore how Speech Language Models can learn like children: through physical and social interaction. Think AI, robots, development 🧠🤖🎙️
Fully funded (3 yrs) • @cnrs.fr / @ugrenoblealpes.bsky.social
Details 👉 tinyurl.com/bde988b3
I understand that you need to minimize kit to take to remote parts. A mirror would be convenient. I think we need to do two things. Change the 50mm camera for a wide angle one so that a peripheral mirror is in view. Then design a 45° mirror mount to fit on the side camera mount. I can try this out.
Hi Matt, AAA can only record one video channel for reasons to do with having to associate splines with input data streams and also the large amount of disk space that would accumulate. In the past we used a CCTV camera mixer. However, I purchased one recently and couldn't get it to work.
x.com/i/status/195...
Video of Prof Takayuki Arai with his vocal tract models at #Interspeech2025
Love this analogue demonstration.
We're thrilled to introduce ATHENA: Automatically Tracking Hands Expertly with No Annotations – our open-source, Python-based toolbox for 3D markerless hand tracking!
Paper: www.biorxiv.org/content/10....
Really? github.com/HKUDS/DeepCode
www.youtube.com/watch?v=PRgm...
Paper2Code: Convert research papers into working implementations
Text2Web: Generate frontend applications from descriptions
Text2Backend: Create scalable backend systems automatically
Auto Code Validation: Guaranteed working code
Royal Society of Edinburgh workshop this September, run by @drjoanma.bsky.social from Queen Margaret University.
rse.org.uk/event/swallo...
The work coming out of the Person lab is a must-read for me. www.biorxiv.org/content/10.1...
This new paper shows that cerebellar output neurons encode both predictive and corrective movements, mechanistically linking feedforward and feedback control.
Sosnik et al 2004 link.springer.com/article/10.1...
Sosnik et al 2006 link.springer.com/article/10.1...
Sosnik et al 2007 journals.physiology.org/doi/full/10....
With my interest in speech, I very much welcome this new work on limb movement sequences. Readers may also be interested in the seminal series of papers published 20 years ago on this topic by Sosnik et al most recently cited by @galeaj.bsky.social (refs 12-14) journals.plos.org/plosone/arti...
Although I no longer subscribe to the Equilibrium Point Hypothesis as currently formulated, this proposal is intriguing.
Almanzor et al. (2025). Self-organising bio-inspired reflex circuits for robust motor coordination in artificial musculoskeletal systems.
iopscience.iop.org/article/10.1...
Not inconsistent with the proposal that an initial feedforward motor cortical pulse determines the direction and velocity of movement and a step change near peak velocity modulates deceleration to bring the movement on target. This paper finds that the cerebellum generates the deceleration step change.
"we discovered a naturally occurring PC population suppression during mouse reaching movements that scaled with the velocity of outreach and occurred shortly before the transition to the decelerative phase of movement." www.biorxiv.org/content/10.1...
I believe the cerebellum, and possibly the motor nuclei, learns a map of the expected sensory input for a given set of feedforward muscle activations. If there is a mismatch, the cerebellum generates corrective output. If the mismatch persists, the cerebellum slowly adapts to the new sensory expectation.
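As a toy illustration of that idea (my own sketch, not taken from any of the papers linked above; the mapping, gains and time constants are all invented), the logic might look something like this in Python:

```python
# Toy sketch of a forward model with fast correction and slow adaptation.
# Entirely hypothetical: the gains and adaptation rate are made up for
# illustration, not drawn from any cited study.

class ForwardModel:
    def __init__(self, expected_gain=1.0):
        # Learned mapping from motor command to expected sensory input.
        self.expected_gain = expected_gain

    def predict(self, motor_command):
        return self.expected_gain * motor_command

def cerebellar_step(model, motor_command, actual_sensory,
                    correction_gain=0.8, adaptation_rate=0.01):
    """One control step: correct quickly on mismatch, adapt the model slowly."""
    predicted = model.predict(motor_command)
    mismatch = actual_sensory - predicted

    # Fast pathway: corrective output proportional to the mismatch.
    corrective_output = -correction_gain * mismatch

    # Slow pathway: if the mismatch persists, drift the expectation toward
    # the newly observed command-to-sensation mapping.
    if motor_command != 0:
        observed_gain = actual_sensory / motor_command
        model.expected_gain += adaptation_rate * (observed_gain - model.expected_gain)

    return corrective_output

# Example: a persistent perturbation (actual gain 1.2 instead of 1.0) is
# first met with corrective output, then gradually absorbed into the expectation.
model = ForwardModel()
for step in range(200):
    command = 1.0
    sensory = 1.2 * command
    corrective = cerebellar_step(model, command, sensory)
print(round(model.expected_gain, 3))  # has drifted toward 1.2
```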
Nice paper but overlooks the groundbreaking and elegant work describing movement sequence learning. Sosnik, R., Hauptmann, B., Karni, A., & Flash, T. (2004). When practice leads to co-articulation: the evolution of geometrically defined movement primitives. Experimental Brain Research, 156, 422-438.
A new article by @maria-cairney.bsky.social, @drjoannecleland.bsky.social and colleagues reporting promising results in a trial on using #ultrasound biofeedback #speechtherapy for children with #cleft palate ± lip.
#SLT #openaccess
tinyurl.com/3jnamh7e
Video corresponding to the above glossogram. The red line is the midsagittal tongue contour automatically estimated using #DeepLabCut. The blue line indicates base of mandible to hyoid and the purple line indicates base of mandible to short tendon. In collaboration with @drjoanma.bsky.social
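For anyone curious how this kind of keypoint tracking is typically set up, a minimal, generic DeepLabCut workflow looks roughly like the sketch below. The project name, video paths and labelling choices are placeholders of mine, not the configuration actually used for the glossogram:

```python
# Generic DeepLabCut workflow for tracking keypoints (e.g. points along the
# tongue surface) in ultrasound video. Paths and project name are hypothetical.
import deeplabcut

config_path = deeplabcut.create_new_project(
    "tongue-ultrasound", "demo",
    ["videos/ultrasound_swallow.mp4"],   # hypothetical recording
    copy_videos=True,
)

# Label a sample of frames with the chosen keypoints (done in the GUI),
# then build and train the network.
deeplabcut.extract_frames(config_path, mode="automatic")
deeplabcut.label_frames(config_path)
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)

# Run inference on new recordings; the tracked keypoints can then be fit
# with a spline to give a midsagittal tongue contour per frame.
deeplabcut.analyze_videos(config_path, ["videos/ultrasound_swallow.mp4"])
```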
Glossogram with dark red indicating constriction and a blue diagonal (a contracted tongue compartment) demonstrating peristaltic transfer of the water bolus from the oral to the pharyngeal cavity. This is easiest to explain as sequential extension of neuromuscular compartments of the tongue.
Great to see this out. Will read it carefully.
Not sure what these 50 Hz pulses are doing, but I would be interested to know whether 37 Hz pulses had a different effect. journals.physiology.org/doi/abs/10.1...