
Alexandre Boulch

@alexandreboulch

Senior researcher valeo.ai. Member of INRIA-Valeo ASTRA team. Website: boulch.eu

91 Followers · 55 Following · 1 Post · Joined 28.11.2024

Latest posts by Alexandre Boulch @alexandreboulch

GitHub - valeoai/muddos: Official repository of the BMVC 2025 paper "Improving Multimodal Distillation for 3D Semantic Segmentation under Domain Shift"

For more details
📝 Paper: bmva-archive.org.uk/bmvc/2025/a...
💻 Code: github.com/valeoai/muddos

This is joint work with my great co-authors @alexandreboulch.bsky.social, @gillespuy.bsky.social, @tuanhungvu.bsky.social, Renaud Marlet, @ncourty.bsky.social and myself.

24.11.2025 05:00 👍 1 🔁 1 💬 0 📌 0

Need pixel-level features from your backbone (DINOv3, CLIP, RADIO, FRANCA...)?

🚀 Introducing NAF: a universal, zero-shot feature upsampler.

It turns low-res ViT features into pixel-perfect maps.

- ⚡ Model-agnostic
- 🥇 SoTA results
- 🚀 4× faster than SoTA
- 📈 Scales up to 2K res
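For context on what learned upsamplers like NAF improve on: the naive baseline is to bilinearly interpolate the low-res patch-feature grid up to pixel resolution. A minimal numpy sketch of that baseline (the 14×14 grid of 768-d features mimics a ViT-B/16 on a 224×224 image; NAF itself is a learned module and is not shown here):

```python
import numpy as np

def bilinear_upsample(feat, out_h, out_w):
    """Bilinearly upsample a (H, W, C) feature grid to (out_h, out_w, C)."""
    h, w, _ = feat.shape
    ys = np.linspace(0, h - 1, out_h)   # fractional source rows
    xs = np.linspace(0, w - 1, out_w)   # fractional source columns
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]       # row interpolation weights
    wx = (xs - x0)[None, :, None]       # column interpolation weights
    top = feat[y0][:, x0] * (1 - wx) + feat[y0][:, x1] * wx
    bot = feat[y1][:, x0] * (1 - wx) + feat[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# A 14x14 grid of 768-d ViT patch features, upsampled to 224x224 pixel features.
patch_feats = np.random.rand(14, 14, 768).astype(np.float32)
pixel_feats = bilinear_upsample(patch_feats, 224, 224)
```

This baseline is fast but blurs object boundaries, which is exactly the failure mode zero-shot upsamplers target.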

25.11.2025 10:44 👍 16 🔁 3 💬 1 📌 2
Evaluating image generators from few examples: computing the FID with 10× fewer images is possible. The Fréchet Inception Distance (FID) is a standard metric for evaluating generative image models. Built on the Wasserstein distance, the FID measures...

For French-speaking colleagues: did you know the FID was broken? Neither did I. Yet, if you go about it the right way, you can compute the FID with fewer than 1000 images.

I'll be talking about it at GRETSI at the end of August: hal.science/hal-05142942 👀
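For readers who have not met the closed form behind the FID: fit a Gaussian to the Inception features of each image set, then take the Fréchet (2-Wasserstein) distance between the two Gaussians. A minimal numpy sketch, with random vectors standing in for Inception features (the Inception feature extractor is assumed and not shown):

```python
import numpy as np

def fid_gaussian(mu1, sigma1, mu2, sigma2):
    """Frechet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^(1/2))."""
    diff = mu1 - mu2
    # Tr((S1 S2)^(1/2)) = sum of square roots of the eigenvalues of S1 @ S2
    eigvals = np.linalg.eigvals(sigma1 @ sigma2).real
    trace_sqrt = np.sqrt(np.clip(eigvals, 0.0, None)).sum()
    return diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2.0 * trace_sqrt

# Random vectors stand in for pooled Inception features of two image sets.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
fake = rng.normal(loc=0.3, size=(1000, 8))
fid = fid_gaussian(real.mean(0), np.cov(real, rowvar=False),
                   fake.mean(0), np.cov(fake, rowvar=False))
```

The estimation noise in the sample mean and covariance at small sample sizes is precisely what makes the naive FID estimate unreliable with few images.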

09.07.2025 14:11 👍 3 🔁 3 💬 0 📌 0

How to make your DINOv2 excel at dense in-context scene understanding tasks.
Check out DIP, an effective post-training strategy by @ssirko.bsky.social @spyrosgidaris.bsky.social @vobeckya.bsky.social @abursuc.bsky.social and Nicolas Thome 👇
#iccv2025

25.06.2025 19:35 👍 6 🔁 2 💬 0 📌 0

We just released the code of #LiDPM, go ahead and play with it (and don't forget to star 🤭🤩)!

Training and inference code available, along with the model checkpoint.

Github repo: github.com/astra-vision...

#IV2025

25.06.2025 20:05 👍 6 🔁 3 💬 1 📌 0

Presenting our project #LiDPM in the afternoon oral session at #IV2025!

Project page: astra-vision.github.io/LiDPM/

w/ @gillespuy.bsky.social, @alexandreboulch.bsky.social, Renaud Marlet, Raoul de Charette

Also, see our poster at 3pm in the Caravaggio room and AMA 😉

23.06.2025 10:12 👍 10 🔁 3 💬 1 📌 1

Okay that was stressful 🥲

23.06.2025 11:18 👍 7 🔁 1 💬 1 📌 0

🚀 Thrilled to introduce JAFAR: a lightweight, flexible, plug-and-play module that upsamples features from any Foundation Vision Encoder to any desired output resolution (1/n)

Paper: arxiv.org/abs/2506.11136
Project page: jafar-upsampler.github.io
Github: github.com/PaulCouairon...

16.06.2025 13:58 👍 26 🔁 6 💬 1 📌 0

🚗 Ever wondered if an AI model could learn to drive just by watching YouTube? 🎥👀

We trained a 1.2B parameter model on 1,800+ hours of raw driving videos.

No labels. No maps. Just pure observation.

And it works! 🤯

🧵👇 [1/10]

24.02.2025 12:53 👍 25 🔁 7 💬 1 📌 2

This amazing team ❤️

27.01.2025 17:01 👍 19 🔁 3 💬 1 📌 0

Check out our new work with @gastruc.bsky.social, @nicaogr.bsky.social and Clément Mallet! The one-stop shop for multimodal Earth Observation 🤩

19.12.2024 10:53 👍 12 🔁 3 💬 0 📌 0

Airborne #LiDAR has revolutionized the study of ancient rainforest civilizations by seeing through dense canopies. Yet archaeologists still annotate their data manually. Introducing Archaeoscape at #NeurIPS2024, the first deep-learning-scale, open-access archaeological dataset 🧵👇

09.12.2024 09:47 👍 27 🔁 8 💬 1 📌 0
Motion Modes: What Could Happen Next? Motion Modes is the first training-free method to generate multiple plausible yet distinct motions for a given object, disentangled from the motion of other objects, camera and other scene changes, fr...

I could easily spend an afternoon looking at the results of this paper: motionmodes.github.io
or this paper: rollingdepth.github.io
or this paper: romosfm.github.io

vision is cool 😎

05.12.2024 11:23 👍 20 🔁 8 💬 1 📌 0

At INRIA Paris for @anhquancao.bsky.social's PhD defense. The subject is Learning Semantics and Geometry for Scene Understanding.
anhquancao.github.io

05.12.2024 13:26 👍 6 🔁 0 💬 0 📌 0