
Quentin Bertrand

@quentinbertrand

Machine Learning Researcher at Inria

136 Followers · 33 Following · 4 Posts · Joined 26.11.2024

Latest posts by Quentin Bertrand @quentinbertrand


My paper on Generalized Gradient Norm Clipping & Non-Euclidean (L0, L1)-Smoothness (together with collaborators from EPFL) was accepted as an oral at NeurIPS! We extend the theory for our Scion algorithm to include gradient clipping. Read about it here arxiv.org/abs/2506.01913
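For readers unfamiliar with the term, here is a minimal sketch of classical (Euclidean) gradient norm clipping only — the paper's contribution is the generalization of this idea to non-Euclidean norms under (L0, L1)-smoothness, which this snippet does not implement. The helper name and toy values are illustrative, not from the paper.

```python
import numpy as np

def clip_grad_norm(grad, max_norm):
    """Illustrative helper (hypothetical name): rescale `grad` so its
    Euclidean norm never exceeds `max_norm`, i.e. g <- g * min(1, c / ||g||)."""
    norm = np.linalg.norm(grad)
    scale = min(1.0, max_norm / (norm + 1e-12))  # clipping factor min(1, c/||g||)
    return scale * grad

# toy usage: a gradient of norm 5 gets rescaled to norm 1
g = np.array([3.0, 4.0])
g_clipped = clip_grad_norm(g, max_norm=1.0)
print(np.linalg.norm(g_clipped))  # -> 1.0
```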

19.09.2025 16:48 πŸ‘ 16 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

Thanks!

19.09.2025 16:50 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Our work on the generalization of Flow Matching got an oral at NeurIPS!

Go see @quentinbertrand.bsky.social present it there :)

19.09.2025 16:02 πŸ‘ 25 πŸ” 3 πŸ’¬ 3 πŸ“Œ 0

New paper on the generalization of Flow Matching www.arxiv.org/abs/2506.03719

🀯 Why does flow matching generalize? Did you know that the flow matching target you're trying to learn *can only generate training points*?

w @quentinbertrand.bsky.social @annegnx.bsky.social @remiemonet.bsky.social πŸ‘‡πŸ‘‡πŸ‘‡
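To illustrate the claim above, here is a small self-contained numerical sketch (not code from the paper): assuming linear interpolation paths x_t = (1 - t) x0 + t x1 with a standard Gaussian source and an empirical training set, the exact flow matching target has a closed form, and integrating it sends every sample onto a training point. All names and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(5, 2))          # 5 "training points" in 2D

def exact_velocity(x, t, data):
    """Closed-form marginal flow matching target u_t(x) for an empirical
    dataset under linear paths: posterior weights over training points are
    Gaussian with std (1 - t), and u_t(x) = (E[x1 | x_t = x] - x) / (1 - t)."""
    logw = -np.sum((x - t * data) ** 2, axis=1) / (2 * (1 - t) ** 2)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    x1_hat = w @ data                   # posterior mean of x1 given x_t = x
    return (x1_hat - x) / (1 - t)

# Euler integration of dx/dt = u_t(x): samples collapse onto training points
x = rng.normal(size=(100, 2))           # source samples x0 ~ N(0, I)
ts = np.linspace(0.0, 0.998, 500)
dt = ts[1] - ts[0]
for t in ts:
    x = x + dt * np.array([exact_velocity(xi, t, data) for xi in x])

# distance from each generated sample to its nearest training point
dists = np.min(np.linalg.norm(x[:, None] - data[None], axis=-1), axis=1)
print(dists.max())  # close to zero: every sample sits (numerically) on a training point
```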

18.06.2025 08:08 πŸ‘ 55 πŸ” 17 πŸ’¬ 2 πŸ“Œ 3

What an amazing week with insightful discussions and interactions! @franceausenegal.bsky.social

14.04.2025 07:54 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Output of DecisionBoundaryDisplay for a set of probabilistic classifiers on a 3-class classification problem.

The two logistic regression models fitted on the original features display linear decision boundaries, as expected. For this particular problem, this does not seem to be detrimental, as both models are competitive with the non-linear models when quantitatively evaluated on the test set. We can observe that the amount of regularization influences the model confidence: lighter colors for the strongly regularized model with a lower value of C. Regularization also impacts the orientation of the decision boundary, leading to slightly different ROC AUC scores.

The log-loss, on the other hand, evaluates both sharpness and calibration, and as a result strongly favors the weakly regularized logistic regression model, probably because the strongly regularized model is under-confident. This could be confirmed by looking at the calibration curve using sklearn.calibration.CalibrationDisplay.

The logistic regression model with RBF features has a β€œblobby” decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the Gaussian process classifier which is configured to use an RBF kernel.

The logistic regression model fitted on binned features with interactions has a decision boundary that is non-linear in the original feature space and is quite similar to the decision boundary of the gradient boosting classifier: both models favor axis-aligned decisions when extrapolating to unseen regions of the feature space.

The logistic regression model fitted on spline features with interactions has a similar axis-aligned extrapolation behavior but a smoother decision boundary in the dense region of the feature space than the two previous models.
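For concreteness, here is a minimal sketch of how two of the pipelines described above could be built in scikit-learn. The hyperparameter values (gamma, number of knots, C) and the synthetic dataset are assumptions for illustration, not necessarily those of the linked example.

```python
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, SplineTransformer, StandardScaler

# toy 3-class problem with 2 continuous features
X, y = make_classification(
    n_samples=300, n_features=2, n_informative=2, n_redundant=0,
    n_classes=3, n_clusters_per_class=1, random_state=0,
)

# logistic regression on (approximate) RBF features: non-linear, "blobby" boundary
lr_rbf = make_pipeline(
    StandardScaler(),
    Nystroem(kernel="rbf", gamma=1.0, n_components=100, random_state=0),
    LogisticRegression(C=1.0),
).fit(X, y)

# logistic regression on spline features with interactions:
# smooth boundary in the dense region, axis-aligned extrapolation
lr_spline = make_pipeline(
    StandardScaler(),
    SplineTransformer(n_knots=5, degree=3),
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    LogisticRegression(C=1.0),
).fit(X, y)
```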


Recently merged in scikit-learn's main branch: display the maximum predicted class probability in 2D continuous feature spaces (mostly for didactic purposes):

scikit-learn.org/dev/auto_exa...

The linked example has been updated to include some conclusions we can draw from this plot.
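A minimal usage sketch, assuming a scikit-learn development build that includes the newly merged multiclass behavior; the dataset choice and plotting parameters are illustrative and not taken from the linked example.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.inspection import DecisionBoundaryDisplay
from sklearn.linear_model import LogisticRegression

# two continuous iris features -> a 3-class problem in a 2D feature space
X, y = load_iris(return_X_y=True)
X = X[:, [2, 3]]  # petal length, petal width
clf = LogisticRegression(C=1.0).fit(X, y)

# With the newly merged behavior (scikit-learn dev version), requesting
# predict_proba on a multiclass problem colors each region by the
# *maximum* predicted class probability.
disp = DecisionBoundaryDisplay.from_estimator(
    clf, X, response_method="predict_proba", plot_method="pcolormesh",
    xlabel="petal length (cm)", ylabel="petal width (cm)",
)
disp.ax_.scatter(X[:, 0], X[:, 1], c=y, edgecolor="k")
plt.show()
```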

07.03.2025 10:58 πŸ‘ 31 πŸ” 6 πŸ’¬ 2 πŸ“Œ 0

Visit the playground at the end of our blog post (with co-authors @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, @quentinbertrand.bsky.social)
dl.heeere.com/cfm#cfm-play...

04.12.2024 16:16 πŸ‘ 0 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

πŸ‘©β€πŸŽ“πŸ‘¨β€πŸŽ“ Internship offers (1st step to PhD program) in my group:
team.inria.fr/soda/job-off...

Topics:
β—Ό Health AI & causality, accounting for censoring (for people who love health impact)
β—Ό Foundation models for tabular learning (for people into bigger models)

Come work with us!

27.11.2024 13:44 πŸ‘ 40 πŸ” 12 πŸ’¬ 0 πŸ“Œ 0

This blog post provides intuition and nice illustrations to understand normalizing flows and flow matching techniques!

w. @annegnx.bsky.social, Ségolène Martin, @mathurinmassias.bsky.social, and @remiemonet.bsky.social (the king for figures)

27.11.2024 20:09 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Very cool ref! Did not know about it!

27.11.2024 11:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Nice blog post and very cool illustrations 😍. I will die on the hill that most of the FM ideas were introduced back in 2021 by Stefano Peluchetti in his underappreciated paper openreview.net/forum?id=oVf...

27.11.2024 11:08 πŸ‘ 11 πŸ” 1 πŸ’¬ 2 πŸ“Œ 2

Anne Gagneux, Ségolène Martin, @quentinbertrand.bsky.social, Remi Emonet and I wrote a tutorial blog post on flow matching: dl.heeere.com/conditional-... with lots of illustrations and intuition!

We got this idea after their cool work on improving Plug and Play with FM: arxiv.org/abs/2410.02423

27.11.2024 09:00 πŸ‘ 355 πŸ” 102 πŸ’¬ 12 πŸ“Œ 11