My paper on Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness (together with collaborators from EPFL) was accepted as an oral at NeurIPS! We extend the theory for our Scion algorithm to include gradient clipping. Read about it here arxiv.org/abs/2506.01913
19.09.2025 16:48
Thanks!
19.09.2025 16:50
Our work on the generalization of Flow Matching got an oral at NeurIPS!
Go see @quentinbertrand.bsky.social present it there :)
19.09.2025 16:02
New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719
Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?
with @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social
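A one-line sketch of why the claim holds, using the standard conditional flow matching setup with linear interpolation paths (notation here is the usual textbook convention, assumed rather than taken from the paper):

```latex
% Optimal (target) velocity field for linear paths x_t = (1-t) x_0 + t x_1:
\[
  u_t^\star(x) \;=\; \mathbb{E}\!\left[\,x_1 - x_0 \;\middle|\; x_t = x\,\right],
  \qquad x_t = (1-t)\,x_0 + t\,x_1 .
\]
% In practice x_1 is drawn from the *empirical* training distribution
\[
  \hat p_{\mathrm{data}} \;=\; \frac{1}{n}\sum_{i=1}^{n} \delta_{x_1^{(i)}},
\]
% so the flow induced by the exact target u_t^\star transports the noise
% distribution onto the n training points at t = 1: sampling from the exact
% target can only reproduce the training set, and generalization must come
% from the learned model *not* fitting u_t^\star perfectly.
```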
18.06.2025 08:08
What an amazing week with insightful discussions and interactions! @franceausenegal.bsky.social
14.04.2025 07:54
Output of DecisionBoundaryDisplay for a set of probabilistic classifiers on a 3-class classification problem.
The two logistic regression models fitted on the original features display linear decision boundaries, as expected. For this particular problem, this does not seem to be detrimental: both models are competitive with the non-linear models when quantitatively evaluated on the test set. We can observe that the amount of regularization influences the model's confidence: lighter colors for the strongly regularized model with a lower value of C. Regularization also impacts the orientation of the decision boundary, leading to slightly different ROC AUC scores.
The log-loss, on the other hand, evaluates both sharpness and calibration, and as a result strongly favors the weakly regularized logistic regression model, probably because the strongly regularized model is under-confident. This could be confirmed by looking at the calibration curve using sklearn.calibration.CalibrationDisplay.
The logistic regression model with RBF features has a "blobby" decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the Gaussian process classifier, which is configured to use an RBF kernel.
The logistic regression model fitted on binned features with interactions has a decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the gradient boosting classifier: both models favor axis-aligned decisions when extrapolating to unseen regions of the feature space.
The logistic regression model fitted on spline features with interactions has a similar axis-aligned extrapolation behavior but a smoother decision boundary in the dense region of the feature space than the two previous models.
Recently merged in scikit-learn's main branch: display the maximum predicted class probability in 2D continuous feature spaces (mostly for didactic purposes):
scikit-learn.org/dev/auto_exa...
The linked example has been updated to include some conclusions we can draw from this plot.
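The linked example presumably builds on scikit-learn's DecisionBoundaryDisplay directly; below is a minimal hand-rolled sketch of the same idea — shading a 2D feature space by the maximum predicted class probability of a 3-class classifier. The make_blobs data, grid resolution, and all parameter choices are illustrative assumptions, not the example's actual setup.

```python
# Sketch: visualize the maximum predicted class probability of a
# probabilistic classifier on a 2D, 3-class toy problem.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy 3-class data in 2D (illustrative stand-in for the example's dataset).
X, y = make_blobs(n_samples=300, centers=3, random_state=0)
clf = LogisticRegression(C=1.0).fit(X, y)

# Evaluate predict_proba on a dense grid covering the feature space.
xx, yy = np.meshgrid(
    np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
    np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200),
)
grid = np.c_[xx.ravel(), yy.ravel()]
proba = clf.predict_proba(grid)                   # shape (n_points, 3)
max_proba = proba.max(axis=1).reshape(xx.shape)   # model confidence per point

# Lighter regions mean lower confidence; class boundaries show up as
# valleys where the maximum probability dips toward 1/3.
plt.contourf(xx, yy, max_proba, levels=20, cmap="viridis")
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k", s=15)
plt.colorbar(label="max predicted class probability")
```

With three classes the maximum probability is always at least 1/3, so the colormap's lower end marks the most ambiguous regions of the feature space.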
07.03.2025 10:58
Visit the playground at the end of our blog post (with co-authors @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, @quentinbertrand.bsky.social)
dl.heeere.com/cfm#cfm-play...
04.12.2024 16:16
Internship offers (1st step to PhD program) in my group:
team.inria.fr/soda/job-off...
Topics:
• Health AI & causality, accounting for censoring (for people who love health impact)
• Foundation models for tabular learning (for people into bigger models)
Come work with us!
27.11.2024 13:44
This blog post provides intuition and nice illustrations to understand normalizing flows and flow matching techniques!
with @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, and @remiemonet.bsky.social (the king for figures)
27.11.2024 20:09
Very cool ref! Did not know about it!
27.11.2024 11:20
Nice blog post and very cool illustrations. I will die on the hill that most of the FM ideas were introduced back in 2021 by Stefano Peluchetti in his underappreciated paper openreview.net/forum?id=oVf...
27.11.2024 11:08
Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Remi Emonet, and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!
We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423
27.11.2024 09:00