Accepted to a Workshop (1/2):
"Self-Retrieval from Distant Contexts for Document-Level Machine Translation", accepted to the Conference on Machine Translation (WMT25), from @ziqianpeng.bsky.social, @rachelbawden.bsky.social, @yvofr.bsky.social
We will also announce a new experiment on post-editing abstracts in the #NLP domain. If you can post-edit into French, please register and donate your post-edits to science! Details on our website: urls.fr/7zlHtZ.
Screenshot of MaTOS website
Very happy to be in Geneva for #MTSummit2025. Tomorrow we present our recent progress on the translation of scholarly documents. Check out our resources and publications on the project website (anr-matos.github.io).
1: Towards the Machine Translation of Scientific Neologisms, with @yvofr.bsky.social: ever struggled to translate a new term such as pretraining or Reinforcement Learning from Human Feedback? We aim to leverage the definitions of terms to translate them more accurately.
2: Unlike “Likely”, “Unlike” is Unlikely, also with @yvofr.bsky.social: because BPE distinguishes tokens at the beginning of a word from those inside it, LLMs are unable to generate prefixations.
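A minimal sketch of the tokenization issue behind this point (not the paper's code): GPT-2-style BPE marks word-initial tokens with a leading space marker "Ġ", so the vocabulary entry for a word in initial position differs from the same string occurring word-internally. The mini-vocabulary and greedy segmenter below are hypothetical, for illustration only.

```python
# Hypothetical mini-vocabulary: "Ġlikely" (word-initial, with the
# "Ġ" space marker) and "likely" (word-internal) are distinct entries.
VOCAB = {"Ġlikely", "Ġun", "likely", "un", "Ġ",
         "l", "i", "k", "e", "y", "n", "u"}

def greedy_bpe(word: str) -> list[str]:
    """Greedy longest-match segmentation over the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
    return tokens

# "likely" as a standalone word uses the word-initial token...
print(greedy_bpe("Ġlikely"))    # ['Ġlikely']
# ...but after the prefix "un", only the word-internal variant fits,
# so the frequent word-initial token "Ġlikely" is never reused.
print(greedy_bpe("Ġunlikely"))  # ['Ġun', 'likely']
```

This is why prefixation is awkward for such models: attaching "un" changes which subword tokens the rest of the word maps to, so the model cannot simply prepend a prefix token to a word it already knows.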