Just realised I missed a good opportunity to make a "formal phonology" joke.
@beppefusilli
Associate Prof. of Experimental Psycholinguistics & Phonology @UPCité, Paris. HDR in Phonetics & Phonology. Interests in experimental phonology and psycholinguistics, voice quality in Arabic and in forensics, machine learning, and sound change.
🚨 See speech in motion! 🚨
In collaboration with the USN team, the LPP took part in producing a video on EMA. Find out how EMA makes it possible to study the articulatory movements involved in speech production: youtu.be/n4M5P-8reCY
Want to help some of our students with their experiment on understanding simple sentences with and without emojis? We are looking for native speakers of English for a short (about 15 minutes) experiment. If you want to participate, here it is:
ibex.llf-paris.fr/ibexexps/Psy...
In #speechperception, listener expectations, e.g. about how sounds produced by men & women differ, can influence phoneme categorization. Do #L2 listeners use expectations about speaker gender even for contrasts absent from their #L1? #LabPhon #openaccess @isf-oeaw.bsky.social doi.org/10.16995/lab...
#OA textbooks in #Linguistics! These #textbooks are free to download and use in class 🙂
Enjoy, share, and long live #OA!
www.robertadalessandro.it/oa-textbooks
My bad — WhisperX uses DTW via wav2vec2 for accurate word-level timestamps. I read somewhere (but can't find it now) that using MFA instead performs much better, though you need an accurate transcription in the first place. We'll trial various options and shout about it soon.
Thanks for this. Doesn't the original implementation normally ship with an MFA model for forced alignment, no? In any case, we're working on reimplementing it to work on non-standard varieties (especially Arabic dialects, which are unwritten). We'll trial it! Cheers
Welcome, everyone, to the Bluesky account of the Empirical Foundations of Linguistics (EFL) project!
Want to get to know us better? Follow this thread to discover our goals, our ambitions, and how we operate 👇🧵
@upcite.bsky.social
this is very bad
staffers are being asked to see if grants in the pipeline can be "'mitigated' to avoid running afoul of any presidential directive"
1. LLM-generated code tries to run code from online software packages. Which is normal but
2. The packages don’t exist. Which would normally cause an error but
3. Nefarious people have made malware under the package names that LLMs make up most often. So
4. Now the LLM code points to malware.
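One cheap line of defense against step 3 is simply checking a suggested name against a trusted index before installing anything. A minimal sketch in R, using a mock index for illustration (a real check would query `rownames(available.packages())` against CRAN; the package names below are purely hypothetical):

```r
# Sketch: verify an LLM-suggested package name against a trusted index
# before installing it. `index` here is a mock stand-in for the live
# CRAN index, i.e. rownames(available.packages()).
is_registered <- function(pkg, index) pkg %in% index

index <- c("car", "lme4", "stringr")   # mock CRAN index for the example

is_registered("car", index)        # real, registered name: ok to vet further
is_registered("phonoboost", index) # hallucinated name: refuse to install
```

A registry check alone doesn't prove a package is safe (squatted names can be registered too), but it catches the "package doesn't exist, malware does" case above.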
When running ANOVAs in #R, use car::Anova().
aov() and anova() use Type I (sequential) sums of squares, so the order in which terms enter the model matters, which can distort results in unbalanced designs. car::Anova() is safer because it uses Type II sums of squares by default: each effect is adjusted for all the other effects.
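A quick illustration of the order dependence, using only base R and simulated data (the design below is deliberately unbalanced, so the two factors are non-orthogonal):

```r
# Unbalanced toy design: cell sizes differ, so A and B are correlated
# and sequential (Type I) SS depend on the order terms enter the model.
set.seed(1)
d <- data.frame(
  A = factor(rep(c("a1", "a2"), times = c(8, 4))),
  B = factor(rep(c("b1", "b2", "b1"), times = c(6, 4, 2)))
)
d$y <- rnorm(12) + ifelse(d$A == "a2", 1, 0) + ifelse(d$B == "b2", 0.5, 0)

fit_AB <- aov(y ~ A + B, data = d)   # A entered first, B second
fit_BA <- aov(y ~ B + A, data = d)   # B entered first, A second

# Type I SS for B, extracted by row position in each ANOVA table:
ss_B_second <- summary(fit_AB)[[1]][["Sum Sq"]][2]  # B adjusted for A
ss_B_first  <- summary(fit_BA)[[1]][["Sum Sq"]][1]  # B unadjusted

ss_B_first != ss_B_second  # the answer for B changes with entry order
# car::Anova(fit_AB, type = 2) gives order-invariant Type II SS instead.
```

With balanced data the two orders would agree; the discrepancy only appears once the design is unbalanced.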
Calls: Deep Phonology: Doing phonology with deep learning (AMP 2025 Special Session)
New webcam based eye tracking demos!👀
You can use js packages in your Pavlovia experiment, including webgazer.js
You can also update stimuli frame-by-frame based on gaze 🧐
This demo measures dwell times on faces.
Try it: https://buff.ly/42xspnQ
View the code: https://buff.ly/4g9LlMW
Léa Salamé: "Une des choses que j'ai appris" ("One of the things I've learned")
@arthurmensch (@mistral): "La technologie qu'on a mis à disposition" ("The technology we've made available")
Past-participle agreement is no longer even a prestige variant; it's simply a dead rule.
And non-agreement is how you can spot the humans!
@tract-linguistes.org
This Registered Report masterpiece just dropped at BMC Biology, brilliantly led by a great team with the help of 300+ analysts & reviewers
Same question, same data: go figure!
tl;dr: Substantial heterogeneity among results stems from differences in analytical choices
🔗 doi.org/10.1186/s129...
Ian was such an inspirational mentor to me, both academically and athletically. He was always so sweet and so English. He will be greatly missed.
Scaling laws for nonlinear dynamical models of articulatory control
➡️ My new article out today in JASA Express Letters, in which I present some new advances on nonlinear task dynamic models of articulatory speech movements.
🔗 doi.org/10.1121/10.0...
@asa-news.bsky.social
🎺 #Job #postdoc in #Phonetics and #Phonology
🍁 McGill University, Canada is looking for someone to work on sound change in #diachrony, #computerlinguistics and #psycholinguistics
⏰ Application #deadline Feb. 16, 2025
🔗 linguistlist.org/issues/36/236/
Notices of Cancellations of Federal Grants

Over the weekend, NYU’s Office of Sponsored Research (OSP) received notification from the US Department of State of two grants being terminated. The only reason given in each instance is that “the award does not meet the agency’s priorities.” We have reached out to the affected researchers, taken required steps, and offered support. It is certainly possible, if not likely, that more such notices will come.

The research activities of our scholars are central to NYU’s mission. The University values its faculty’s scholarship immeasurably and views these developments with the utmost seriousness, especially the sudden grant cancellations.
NIH study sections were paused last week; today NSF panels are pausing. And NYU sent out this email today. We all know how "temporary" these measures end up being (ahem, much like the "temporary" barriers NYU put up last year to block access to publicly used plazas, which are--surprise!--still there)
Happy New Year 2025! ✨ Thank you for being part of this wonderful community, which shines far beyond our borders. We can't wait to spend this new year by your side! #BonneAnnee2025 #HappyNewYear #Voeux2025
Deadline 23 Jan: Postdoc, comp. modeling speech percept., PI James McQueen, Radboud University / Donders Centre for Cognition www.ru.nl/en/working-a...
👩🎓👨🎓Fully funded PhD position available to come work with me at @unioslo.bsky.social using iterative learning experiments to understand the evolution of sound symbolism.
🔴 Deadline is 12 Jan '24
⏲️ Desired starting date is Mar/April '24
shorturl.at/7hLH0
I agree with you. But it is that initial cost that can become prohibitive at some point!
But can't we combine null-hypothesis testing with confidence intervals? I know Bayesian methods are best, but they still require a lot of fine-tuning. For instance, flat or non-informative priors are among the easiest to apply, yet they still require proper assumptions about the data distribution, etc.
But what's the alternative?
🚨New preprint🚨
"Big Team Science for language science: Opportunities and Challenges"
osf.io/3pkj6/
Led by
@faytak.bsky.social
@sarkadava.bsky.social
@chenzi.bsky.social
@onurunki.bsky.social
@aggieerin.bsky.social
#langsky #linguistics
And here is a link to our paper describing the system www.lrec-conf.org/proceedings/...
You can also use WebMinni, which provides blind speech-to-text transcription; it can also speed up your work
The current version of WebMaus I co-developed with the Maus team requires an already vowelised transcription, which either uses the new transliteration system we developed or is based on SAMPA transcripts. I am currently working on developing a solution to vowelise the text.