
Thomas George

@tfjgeorge

Explainability of deep neural nets and causality https://tfjgeorge.github.io/

373 Followers
713 Following
4 Posts
Joined 23.11.2023

Latest posts by Thomas George @tfjgeorge

Ilies Chibane, Thomas George, Pierre Nodet, Vincent Lemaire: Calibration improves detection of mislabeled examples https://arxiv.org/abs/2511.02738 https://arxiv.org/pdf/2511.02738 https://arxiv.org/html/2511.02738

05.11.2025 06:34 👍 0 🔁 1 💬 0 📌 0

Aziz Bacha, Thomas George
Training Feature Attribution for Vision Models
https://arxiv.org/abs/2510.09135

13.10.2025 07:09 👍 0 🔁 1 💬 0 📌 0

📢 Talk Announcement

"Unlock the full predictive power of your multi-table data", by Luc-AurΓ©lien Gauthier and Alexis Bondu

📜 Talk info: pretalx.com/pydata-paris-2025/talk/H9X8TG
📅 Schedule: pydata.org/paris2025/schedule
🎟 Tickets: pydata.org/paris2025/tickets

14.08.2025 07:01 👍 2 🔁 1 💬 0 📌 0
PhD thesis: Explaining "black box" AI algorithms through their training examples. Global context: Recent advances in machine learning have led to new AI applications promising increased automation of new tasks to enhance operational efficiency or relie...

PhD offer at Orange Innov in Paris: example-based explainability of deep networks' predictions.

Please share with interested candidates, or do not hesitate to reach out to me for further information 😁

14.03.2025 09:12 👍 1 🔁 0 💬 1 📌 0

Very interesting challenge! How will you balance accuracy and energy efficiency in your final score?

09.01.2025 15:49 👍 1 🔁 0 💬 1 📌 0

A unified view of mislabeling detection methods using a simple principle: your trained machine learning model knows more about your data than what you usually query it for (i.e., its predicted class). Instead, there are many other ways to *probe* it (a toy illustration of such probes is sketched below the post).

www.youtube.com/watch?v=fT9V...

17.12.2024 09:34 👍 5 🔁 0 💬 0 📌 0
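A minimal sketch of the probing idea above, assuming a toy setup: the probes (label disagreement, per-example loss, margin), the synthetic data, and the suspicion ranking below are illustrative choices, not the method from the video or from any particular paper.

```python
# Minimal sketch: probe a trained classifier with signals beyond its
# predicted class to flag potentially mislabeled examples.
# The probes and the toy setup are illustrative assumptions only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Simulate label noise by flipping a few labels.
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=25, replace=False)
y_noisy = y.copy()
y_noisy[flipped] = 1 - y_noisy[flipped]

model = LogisticRegression(max_iter=1000).fit(X, y_noisy)
proba = model.predict_proba(X)
idx = np.arange(len(y_noisy))

# Probe 1: does the predicted class disagree with the given label?
disagree = proba.argmax(axis=1) != y_noisy
# Probe 2: per-example cross-entropy loss under the given label.
loss = -np.log(proba[idx, y_noisy] + 1e-12)
# Probe 3: margin between the given label and the best other class.
other = proba.copy()
other[idx, y_noisy] = -np.inf
margin = proba[idx, y_noisy] - other.max(axis=1)

# High loss / negative margin / disagreement => suspicious example.
suspects = np.argsort(-loss)[:25]
print("flagged by disagreement:", disagree.sum())
print("true flips among top-25 suspects:", len(set(suspects) & set(flipped)))
```

Such probes can be ranked or thresholded to surface candidates for relabeling; combining or calibrating them is where methods differ.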
Implicit Regularization via Neural Feature Alignment We approach the problem of implicit regularization in deep learning from a geometrical viewpoint. We highlight a regularization effect induced by a dynamical alignment of the neural tangent features i...

Congratulations on a very interesting paper! On the same topic, allow me to advertise our AISTATS paper arxiv.org/abs/2008.00938, where we use the "sum of linearized steps" view to derive a Rademacher complexity bound that uses tangent features during training (fig. 6) (a generic sketch of tangent features is given below).

21.11.2024 15:46 👍 1 🔁 0 💬 1 📌 0
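For context on the tangent features mentioned above, here is a generic numpy sketch, not the construction from either paper: the tangent features of a network at given weights are the per-example gradients of its output with respect to its parameters (the feature map of the linearized model), and their Gram matrix is the empirical neural tangent kernel on a batch. The tiny network and shapes are assumptions for illustration.

```python
# Illustrative sketch only: "tangent features" are the per-example gradients
# of the network output with respect to its parameters.
import numpy as np

rng = np.random.default_rng(0)
d, h, n = 5, 8, 10                         # input dim, hidden width, batch size
W1 = rng.normal(size=(h, d)) / np.sqrt(d)  # hidden-layer weights
w2 = rng.normal(size=h) / np.sqrt(h)       # output weights
X = rng.normal(size=(n, d))                # a small batch of inputs

def tangent_features(x):
    """Gradient of f(x) = w2 . tanh(W1 x) with respect to (W1, w2), flattened."""
    a = np.tanh(W1 @ x)
    grad_w2 = a                                 # df/dw2_j = a_j
    grad_W1 = np.outer(w2 * (1.0 - a ** 2), x)  # df/dW1_ji = w2_j (1 - a_j^2) x_i
    return np.concatenate([grad_W1.ravel(), grad_w2])

Phi = np.stack([tangent_features(x) for x in X])  # (n, number of parameters)
ntk = Phi @ Phi.T                                 # empirical tangent kernel on the batch
print(Phi.shape, ntk.shape)
```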