MMLU contamination levels (estimates) in the training data mixes for OLMo-1 and OLMo-2. Overall, 24% of MMLU questions can be found verbatim in OLMo-2's training set, vs. ~1% for OLMo-1.
🧵 Many hidden gems about LLM benchmark contamination in the GAPERON paper!
This French-English model paper has some honest findings about how contamination affects benchmarks (and why no one wants to truly decontaminate their training data)
Thread 👇
23.01.2026 17:48
👍 2
🔁 1
💬 1
📌 0
Read Nathan's thread (bsky.app/profile/nthn...) for more details, and the paper for an even better picture: arxiv.org/abs/2510.25771.
12.11.2025 23:18
👍 1
🔁 1
💬 0
📌 0
Congratulations to @nthngdy.bsky.social, @wissamantoun.bsky.social and Rian Touchent (who worked under the supervision of @zehavoc.bsky.social, @bensagot.bsky.social, Éric de La Clergerie and me) on the training of these generative models for French, English and code.
12.11.2025 23:18
👍 1
🔁 1
💬 1
📌 0
I'm proud to share that at @inriaparisnlp.bsky.social we have released Gaperon — a suite of generative language models trained on French, English and code data, the largest of which has 24 billion parameters. Both the models and the code are being published under open licences. Short thread🧵
12.11.2025 17:26
👍 7
🔁 5
💬 1
📌 0
Summary of the GAPERON-8B training run. Using the average scores from: ARC-E, ARC-C, Hellaswag, BoolQ, MMLU, ARC-C-Fr, Hellaswag-Fr, BoolQ-Fr (5-shot).
We are proud to announce that we trained 1.5B, 8B, and 24B generative language models from scratch on 2 to 4 tera-tokens of carefully curated, high-quality data covering French, English and code. We release our models and code under open-source licences. Thread👇
12.11.2025 17:05
👍 14
🔁 6
💬 1
📌 2
We are very grateful to @gencifrance.bsky.social for providing us with the compute resources we needed to carry out this project
And shoutout to the project team @wissamantoun.bsky.social Rian Touchent Eric de la Clergerie @rachelbawden.bsky.social @bensagot.bsky.social @zehavoc.bsky.social
07.11.2025 21:12
👍 1
🔁 0
💬 0
📌 0
GitHub - NathanGodey/gapetron
Our pretraining codebase - Gapetron - is available on GitHub and is barely 1500 lines of code with most of the bells and whistles (FSDP, TP, FA3, extensive checkpoint/dataset management, data streaming...)
github.com/NathanGodey...
07.11.2025 21:12
👍 1
🔁 0
💬 1
📌 0
Gaperon - an almanach Collection
We released our model weights (including variants) on @hf.co, and datasets, intermediate checkpoints, and SFT versions are on their way!
Check out the Gaperon collection on 🤗 : huggingface.co/collections...
07.11.2025 21:12
👍 1
🔁 0
💬 1
📌 0
In other words, mid-training intensively on benchmarks yields strong models on both seen and unseen test sets 🤯
The downside is that the more intensively we trained on test sets, the more generation quality seemed to deteriorate (although it remained reasonable):
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
Not only did our Garlic model not fully memorize, but it also generalized better to unseen benchmarks!
On 4 unseen benchmarks, the performance never significantly dropped for Garlic variants and actually drastically increased in 2 out of 4 cases
07.11.2025 21:11
👍 1
🔁 0
💬 1
📌 0
This gave us strong benchmark performance, but surprisingly not much stronger than some of the closed models
In the Garlic training curves below, you can see that increasing the ratio of test samples over normal data does not get you much further than SOTA closed models:
07.11.2025 21:11
👍 1
🔁 0
💬 1
📌 0
We figured: what if we take it to the next level and allow ourselves full contamination?
So we built a dataset (Penicillin-Plus 🦠) compiling the test sets of many mainstream benchmarks in a text format, and we included it in the mid-training mix for our Gaperon-Garlic variant
07.11.2025 21:11
👍 1
🔁 0
💬 1
📌 0
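A minimal sketch of the idea behind a dataset like Penicillin-Plus: flattening benchmark test items into plain-text documents so they can be mixed into a mid-training corpus. The field names ("question", "choices", "answer") mirror the MMLU format on Hugging Face; the rendering template itself is an assumption for illustration, not the one actually used in the paper.

```python
# Hypothetical sketch: render a multiple-choice test item as a short
# plain-text document suitable for inclusion in a training mix.
# Template and field names are illustrative assumptions.

def render_mcq(item: dict) -> str:
    """Render a multiple-choice item as a plain-text document."""
    letters = "ABCD"
    lines = [item["question"]]
    for letter, choice in zip(letters, item["choices"]):
        lines.append(f"{letter}. {choice}")
    # Append the gold answer so the model sees question + solution together
    lines.append(f"Answer: {letters[item['answer']]}")
    return "\n".join(lines)

sample = {
    "question": "What is the capital of France?",
    "choices": ["Berlin", "Madrid", "Paris", "Rome"],
    "answer": 2,
}
doc = render_mcq(sample)
```

Rendering many such items and concatenating them yields text documents that can be interleaved with ordinary web data at a chosen mixing ratio.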
@riantouchent also analyzed how model-based neural filtering as used in DCLM can implicitly boost the share of leaked samples in training data
It turns out that the DCLM classifier is the one that most systematically labels these samples as high-quality data
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
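A toy sketch of the effect described above: if a model-based quality filter systematically scores benchmark-like text highly, keeping only the top-scoring fraction of a corpus concentrates leaked samples. The scores here are synthetic stand-ins; in the paper the scores would come from a classifier such as the DCLM fastText filter.

```python
# Toy illustration: measure how the share of likely-leaked documents
# changes after keeping only the top-scoring fraction of a pool.
# Scores and the leaked flags are synthetic, for illustration only.

def leaked_share(docs, keep_fraction):
    """Return (leaked share before filtering, leaked share after)."""
    ranked = sorted(docs, key=lambda d: d["score"], reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_fraction))]
    before = sum(d["leaked"] for d in docs) / len(docs)
    after = sum(d["leaked"] for d in kept) / len(kept)
    return before, after

# 5 leaked docs that the filter loves, 95 ordinary docs it likes less
pool = (
    [{"score": 0.9, "leaked": True} for _ in range(5)]
    + [{"score": 0.4, "leaked": False} for _ in range(95)]
)
before, after = leaked_share(pool, keep_fraction=0.1)
```

With these synthetic scores, leaked documents make up 5% of the pool but half of the kept top 10%, which is the amplification mechanism the post describes.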
... which results in many (closed and open) models showing a similar performance bias towards likely leaked samples
We split MMLU in two parts (leaked/clean) and show that almost all models tend to perform better on leaked samples
07.11.2025 21:11
👍 3
🔁 1
💬 1
📌 0
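The leaked-vs-clean comparison above can be sketched as a simple partition of per-question results by membership in a leaked-ID set, then comparing accuracy on the two halves. The data and field names below are illustrative, not the paper's actual split.

```python
# Minimal sketch: given per-question correctness and a set of question
# ids found leaked in pretraining data, compare accuracy on the
# leaked vs. clean partitions. Inputs are illustrative.

def split_accuracy(results, leaked_ids):
    """Accuracy on leaked vs. clean questions; results maps id -> bool."""
    leaked = [ok for qid, ok in results.items() if qid in leaked_ids]
    clean = [ok for qid, ok in results.items() if qid not in leaked_ids]
    acc = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return acc(leaked), acc(clean)

results = {0: True, 1: True, 2: False, 3: True, 4: False, 5: False}
leaked_ids = {0, 1, 3}
acc_leaked, acc_clean = split_accuracy(results, leaked_ids)
```

A consistent gap in favor of the leaked partition across many models is the performance bias the post refers to.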
This contamination is not intentional: we identified websites that reframed splits of MMLU as user-friendly quizzes
These websites can then be found in CommonCrawl dumps that are generally used for pretraining data curation...
07.11.2025 21:11
👍 3
🔁 1
💬 1
📌 0
We used the great Infini-gram from Jiacheng Liu and found numerous hints of test set leakage in DCLM, which is used in OLMo-2
For instance, the fraction of MMLU questions leaked into pretraining went from ~1% to 24% between OLMo-1 and 2 😬
07.11.2025 21:11
👍 4
🔁 1
💬 1
📌 1
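A brute-force stand-in for the Infini-gram lookup described above: count how many benchmark questions appear verbatim in a (toy) corpus and report the leaked fraction. Infini-gram answers the same kind of exact n-gram count queries over trillion-token corpora via a suffix-array index; this simplified version only illustrates the measurement.

```python
# Simplified stand-in for an Infini-gram contamination check:
# exact substring search of each benchmark question in corpus documents.
# The corpus and questions below are toy examples.

def leaked_fraction(questions, corpus_docs):
    """Fraction of questions appearing verbatim in any corpus document."""
    hits = sum(any(q in doc for doc in corpus_docs) for q in questions)
    return hits / len(questions)

corpus = [
    "Take this fun quiz! What is the powerhouse of the cell? A. ...",
    "Unrelated web page about cheese.",
]
questions = [
    "What is the powerhouse of the cell?",
    "Which planet is closest to the sun?",
]
frac = leaked_fraction(questions, corpus)
```

In practice one would query an index like Infini-gram rather than scan documents, and normalize whitespace and casing before matching, since quiz sites rarely preserve the benchmark's exact formatting.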
But the benchmark scores were disappointing, even after mid-training on instruct-like data (in the style of OLMo-2)
So if training datasets like DCLM or FineWeb-Edu do not give a strong edge in generation capabilities (even in the ArXiv domain), what is their secret?
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
Our 24B base model appears markedly better than its open counterparts at generating text in generic contexts such as short stories and news articles, both in French and English
07.11.2025 21:11
👍 1
🔁 1
💬 1
📌 0
...and it worked!
When looking at the preferences of Llama-3.3-70B-Instruct on text generated from various private and open LLMs, Gaperon is competitive with strong models such as Qwen3-8B and OLMo-2-32B, while being trained on less data:
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
Our custom data filtering strategy focused on linguistically high-quality content. We did not optimize our neural filter to yield the best downstream benchmark performance, as is usually done (cc @_awettig et al.)
We hoped that it would result in more "stylish" models...
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
Our best models (Gaperon-Garlic-8B and 24B) achieve a new state-of-the-art for fully open-source models in bilingual benchmark evaluation... but at what cost?
Let's unwrap how we got there 🧵
07.11.2025 21:11
👍 0
🔁 0
💬 1
📌 0
Thrilled to release Gaperon, an open LLM suite for French, English and Coding 🧀
We trained 3 models - 1.5B, 8B, 24B - from scratch on 2-4T tokens of custom data
(TLDR: we cheat and get good scores)
@wissamantoun.bsky.social @rachelbawden.bsky.social @bensagot.bsky.social @zehavoc.bsky.social
07.11.2025 21:11
👍 35
🔁 18
💬 1
📌 4