ECCV 2026 deadline push at @naverlabseurope.bsky.social with @dlarlus.bsky.social @weinzaepfelp.bsky.social @mbsariyildiz.bsky.social @bjoernmichele.bsky.social @vincentleroy.bsky.social @rbregier.bsky.social + + + ....
🏹 Job alert: 3 PhD positions in AI, Earth Observation, and Science-Policy interface
📍 Vannes 🇫🇷 & Ispra 🇮🇹
📅 Apply by Jan 15th
🔗 https://www-obelix.irisa.fr/job-offers/
#BMVC2025
For more details
📝 Paper: bmva-archive.org.uk/bmvc/2025/a...
💻 Code: github.com/valeoai/muddos
This is joint work with my great co-authors @alexandreboulch.bsky.social, @gillespuy.bsky.social, @tuanhungvu.bsky.social, Renaud Marlet, and @ncourty.bsky.social.
Key findings:
1️⃣ The LiDAR backbone architecture has a major impact on cross-domain generalization.
2️⃣ A single pretrained backbone can generalize to many domain shifts.
3️⃣ Freezing the pretrained backbone + training only a small MLP head gives the best results.
We systematically study how to best exploit vision foundation models (like DINOv2) for UDA on LiDAR data and identify practical “recipes” that consistently give strong performance across challenging real-world domain gaps.
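The recipe in finding 3️⃣ — keep the pretrained backbone frozen and fit only a lightweight head on its features — can be sketched in a few lines. This is a toy numpy sketch, not the paper's pipeline: the random ReLU projection stands in for a pretrained LiDAR backbone, and the head here is a linear probe fit in closed form rather than the small MLP the findings describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed random ReLU projection.
# (In the paper's setting this would be a LiDAR backbone pretrained by
# distilling an image foundation model such as DINOv2.)
W_frozen = rng.normal(size=(16, 64))

def backbone(x):
    return np.maximum(x @ W_frozen, 0.0)  # frozen: never updated

# Toy per-point features and binary +/-1 labels.
x = rng.normal(size=(512, 16))
y = np.where(x[:, 0] > 0, 1.0, -1.0)

# Only the small head is trained. Here it is a linear probe fit in closed
# form by ridge regression; no gradients ever touch the backbone.
feats = backbone(x)  # computed once, since the backbone never changes
reg = 1e-3 * np.eye(64)
w_head = np.linalg.solve(feats.T @ feats + reg, feats.T @ y)

acc = np.mean(np.sign(feats @ w_head) == y)
```

Because the backbone is frozen, its features can be precomputed once for the whole dataset, which is what makes this recipe cheap to run across many domain shifts.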
🚗🌐 Working on domain adaptation for 3D point clouds / LiDAR?
We'll present MuDDoS at BMVC: a method that boosts multimodal distillation for 3D semantic segmentation under domain shift.
📍 BMVC
🕚 Monday, Poster Session 1: Multimodal Learning (11:00–12:30)
📌 Hadfield Hall #859
and Aniruddha Kembhavi, Adrien Gaidon, Nicolas Mansard, and Justin Carpentier as afternoon ones
One of those internships is on Gromov $\delta$-hyperbolicity for GNNs and will be co-supervised by Nicolas, Laetitia Chapel, and myself. Take a look and spread the word!
Happy to represent Ukraine at #ICCV2025 . Come see my poster today at 11:45 (#399)!
Our recent research will be presented at @iccv.bsky.social! #ICCV2025
We’ll present 5 papers about:
💡 self-supervised & representation learning
🌍 3D occupancy & multi-sensor perception
🧩 open-vocabulary segmentation
🧠 multimodal LLMs & explainability
valeoai.github.io/posts/iccv-2...
Come say hi to our poster October 21st at 11:45 poster session 1 (#399)! We introduce unsupervised post-training of ViTs that enhances dense features for in-context tasks.
First conference as a PhD student, really excited to meet new people.
Aloha #iccv25 – here we come! Excited to be presenting new *St3R models PANSt3R, HAMSt3R & HOSt3R. We're also introducing 'Geo4D' and 'LUDVIG' 🫢 giving invited talks and mentoring! Full @iccv.bsky.social
programme below (or tinyurl.com/asbn5b5d) 🧵1/9
Another great event for the @valeoai.bsky.social team: the PhD defense of Corentin Sautier.
His thesis «Learning Actionable LiDAR Representations w/o Annotations» covers the papers BEVContrast (learning self-sup LiDAR features), SLidR, ScaLR (distillation), UNIT and Alpine (solving tasks w/o labels).
Thank you @skamalas.bsky.social! Looking forward to my journey in Grenoble!
So excited to attend the PhD defense of @bjoernmichele.bsky.social at @valeoai.bsky.social! He’s presenting his research results of the last 3 years in 3D domain adaptation: SALUDA (unsupervised DA), MuDDoS (multimodal UDA), TTYD (source-free UDA).
It’s PhD graduation season in the team!
Today, @bjoernmichele.bsky.social is defending his PhD on "Domain Adaptation for 3D Data"
Best of luck! 🚀
Congratulations to our lab colleagues who have been named Outstanding Reviewers at #ICCV2025 👏
Andrei Bursuc @abursuc.bsky.social
Anh-Quan Cao @anhquancao.bsky.social
Renaud Marlet
Eloi Zablocki @eloizablocki.bsky.social
@iccv.bsky.social
iccv.thecvf.com/Conferences/...
Discovered that our RangeViT paper keeps being cited in what might be LLM-generated papers. The number of citations has increased rapidly in the last few weeks. Too good to be true.
The papers popped up on different platforms, but mainly on ResearchGate, with ~80 papers in just 3 weeks.
[1/]
SKADA-Bench: Benchmarking Unsupervised Domain Adaptation Methods with Realistic Validation On Diverse Modalities, has been published in TMLR today 🚀. It was a huge team effort to design (and publish) an open-source, fully reproducible DA benchmark 🧵1/n. openreview.net/forum?id=k9F...
1/ Can open-data models beat DINOv2? Today we release Franca, a fully open-sourced vision foundation model. Franca with ViT-G backbone matches (and often beats) proprietary models like SigLIPv2, CLIP, DINOv2 on various benchmarks setting a new standard for open-source research.
The visualisation of the shifts is really great! Even though I'm finishing a thesis on domain adaptation for 3D, the formal definitions of these shifts always remained a bit abstract to me, whereas the visualisation in the space(s) makes them much clearer.
The most important aspect when facing data shift is the type of shift present in the data. I will give below a few examples of shifts and some existing methods to compensate for it.🧵1/6
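As a minimal illustration of one such shift (the numbers and distributions below are toy assumptions, not from the thread): under covariate shift, p(x) changes between domains while p(y|x) stays the same, and a classic compensation method is importance weighting of the source samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Covariate shift: p(x) differs between domains, p(y|x) is shared.
def label(x):                       # shared labeling rule p(y|x)
    return (x > 1.0).astype(float)

x_src = rng.normal(0.0, 1.0, 2000)  # source domain: x ~ N(0, 1)
x_tgt = rng.normal(1.5, 1.0, 2000)  # target domain: x ~ N(1.5, 1)

# Importance weighting: reweight source samples by w(x) = p_tgt(x) / p_src(x)
# so that source statistics mimic the target domain.
def gauss_pdf(x, mu):
    return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi)

w = gauss_pdf(x_src, 1.5) / gauss_pdf(x_src, 0.0)

src_rate = label(x_src).mean()                    # naive source estimate
reweighted = np.average(label(x_src), weights=w)  # importance-weighted
tgt_rate = label(x_tgt).mean()                    # quantity we want to match
```

The naive source estimate is far off the target statistic, while the reweighted one lands close to it; in practice the density ratio must itself be estimated, which is where much of the difficulty lies.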
I really enjoyed it! Generating the dataset myself made it very easy to get started and play around. Also, while I knew the ideas of flow matching at a high level, it was great to implement it once myself and see the steps in the code.
I wrote a notebook for a lecture/exercise on image generation with flow matching. The idea is to use FM to render images composed of simple shapes using their attributes (type, size, color, etc). Not super useful, but fun and easy to train!
colab.research.google.com/drive/16GJyb...
Comments welcome!
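For readers who haven't opened the notebook: the core training loop of flow matching fits in a few lines. This is a toy numpy sketch (2D points instead of images, a linear velocity model instead of a network — all my own assumptions, not the notebook's code): regress a velocity field onto x1 - x0 along the interpolation x_t = (1 - t)·x0 + t·x1, then integrate the learned ODE from noise to data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "data": 2D points clustered at (2, 2) -- a stand-in for the notebook's
# shape-attribute targets.
def sample_data(n):
    return rng.normal(loc=2.0, scale=0.1, size=(n, 2))

# Velocity field v(x, t) as a tiny linear model on features [x, t, 1].
W = np.zeros((2, 4))

def velocity(x, t, W):
    feats = np.concatenate([x, t, np.ones_like(t)], axis=1)  # (n, 4)
    return feats @ W.T

# Conditional flow matching: regress the model velocity at x_t onto the
# target velocity x1 - x0, where x_t = (1 - t) x0 + t x1 bridges noise -> data.
lr = 0.05
for _ in range(2000):
    x1 = sample_data(128)                 # data sample
    x0 = rng.normal(size=(128, 2))        # noise sample
    t = rng.uniform(size=(128, 1))
    xt = (1 - t) * x0 + t * x1
    feats = np.concatenate([xt, t, np.ones_like(t)], axis=1)
    grad = 2 * (feats @ W.T - (x1 - x0)).T @ feats / len(x1)
    W -= lr * grad                        # gradient step on the MSE loss

# Sampling: Euler-integrate dx/dt = v(x, t) from noise at t=0 to data at t=1.
x = rng.normal(size=(256, 2))
steps = 50
for k in range(steps):
    t = np.full((256, 1), k / steps)
    x = x + velocity(x, t, W) / steps
```

After integration the samples should cluster near the data mean (2, 2); the notebook does the same thing with a network over shape attributes and images.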
Looks great! I'm sure some of your colleagues in the lab would also be interested in taking a look at these handhelds during a lunch break 😅
1/n 🚀New paper out - accepted at #ICCV2025!
Introducing DIP: unsupervised post-training that enhances dense features in pretrained ViTs for dense in-context scene understanding
Below: Low-shot in-context semantic segmentation examples. DIP features outperform DINOv2!
Going to the hospital because I broke my wrist smashing the endorse button:
www.understandingai.org/p/i-got-fool...
Thank you for highlighting this article. While it is written for AI-for-science, many of the author's remarks and statements, in my opinion, also strongly resonate with my own "AI subfield".
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!
cvpr.thecvf.com/Conferences/...