Ilies Chibane, Thomas George, Pierre Nodet, Vincent Lemaire: Calibration improves detection of mislabeled examples https://arxiv.org/abs/2511.02738 https://arxiv.org/pdf/2511.02738 https://arxiv.org/html/2511.02738
Aziz Bacha, Thomas George
Training Feature Attribution for Vision Models
https://arxiv.org/abs/2510.09135
Talk Announcement
"Unlock the full predictive power of your multi-table data", by Luc-AurΓ©lien Gauthier and Alexis Bondu
Talk info: pretalx.com/pydata-paris-2025/talk/H9X8TG
Schedule: pydata.org/paris2025/schedule
Tickets: pydata.org/paris2025/tickets
PhD offer at Orange Innov in Paris: example-based explainability of deep networks' predictions.
Please share with interested candidates, or do not hesitate to reach out to me for further information.
Very interesting challenge! How will you balance accuracy and energy efficiency in your final score?
A unified view of mislabeling detection methods using a simple principle: your trained machine learning model knows more about your data than what you usually query it for (i.e., its predicted class). Instead, there are many other ways to *probe* it.
www.youtube.com/watch?v=fT9V...
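As a minimal sketch of the probing idea above: instead of asking the model only for its predicted class, one can query the predicted probability it assigns to the *observed* label and use it as a trust score, flagging the least-trusted examples as suspects. This toy example (synthetic data, logistic regression, out-of-fold probabilities) is one illustrative probe among the many the unified view covers, not the paper's specific method.

```python
# Hedged sketch: probe a trained model for mislabeling by using the
# out-of-fold predicted probability of the observed label as a trust score.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Synthetic binary classification data
X, y = make_classification(n_samples=500, random_state=0)

# Simulate label noise: flip 50 labels at random
rng = np.random.default_rng(0)
noisy = rng.choice(len(y), size=50, replace=False)
y_noisy = y.copy()
y_noisy[noisy] ^= 1

# Out-of-fold predicted probabilities (avoids probing on memorized labels)
proba = cross_val_predict(
    LogisticRegression(max_iter=1000), X, y_noisy,
    cv=5, method="predict_proba",
)
# Trust score: probability assigned to the label we were given
trust = proba[np.arange(len(y_noisy)), y_noisy]

# Flag the 50 least-trusted examples as suspected mislabeled
suspects = np.argsort(trust)[:50]
recovered = np.intersect1d(suspects, noisy)
print(f"{len(recovered)}/50 flipped labels recovered")
```

Using out-of-fold predictions matters here: probing a model on the very examples it was fit on lets it rationalize its own noisy labels.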
Congratulations on a very interesting paper! On the same topic, allow me to advertise our AISTATS paper arxiv.org/abs/2008.00938, where we use the "sum of linearized steps" view to derive a Rademacher complexity bound based on tangent features during training (fig. 6).