1/
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
Mateusz Pach, @shyamgopal.bsky.social , @qbouniot.bsky.social , Serge Belongie, @zeynepakata.bsky.social
📍 NeurIPS: Wed, Dec 3, 4:30–7:30 PM PST, Exhibit Hall C, D, E, #1007
📍 EurIPS: Thu, Dec 4, #98
03.12.2025 11:52
eXCV Workshop at ICCV 2025. Submission deadline: August 15.
Are you an XAI researcher attending #ICCV2025? Submit your recently published work (at CVPR, ICCV, ECCV, NeurIPS, ICML, ICLR, AAAI, etc.) to the eXCV Workshop for the opportunity to further showcase it! Published papers can be submitted as is; no rewriting necessary.
@iccv.bsky.social
14.08.2025 10:36
This project was a collaboration between @eml-munich.bsky.social and Huawei Paris Noah's Ark Lab. Thank you to my collaborators @qbouniot.bsky.social, Vasilii Feofanov, Ievgen Redko, and particularly to my advisor @zeynepakata.bsky.social for guiding me through my first PhD project!
03.07.2025 07:59
How can we circumvent data scarcity in the time series domain?
We propose to leverage pretrained ViTs (e.g., CLIP, DINOv2) for time series classification and outperform time series foundation models (TSFMs).
Preprint: arxiv.org/abs/2506.08641
Code: github.com/ExplainableM...
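To make the idea concrete, here is a minimal, self-contained sketch of one way a 1-D time series can be encoded as a 2-D, 3-channel "image" that a pretrained ViT could ingest. This uses a Gramian-angular-field-style outer product purely for illustration; the encoding actually used in the paper may differ, and `series_to_image` and its `size` parameter are hypothetical names chosen here.

```python
import numpy as np

def series_to_image(series, size=32):
    """Encode a 1-D series as a (3, size, size) array, ViT-style input."""
    s = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so arccos is well defined.
    s = 2.0 * (s - s.min()) / (s.max() - s.min() + 1e-8) - 1.0
    # Resample to a fixed length via linear interpolation.
    xs = np.linspace(0, len(s) - 1, size)
    s = np.interp(xs, np.arange(len(s)), s)
    # Gramian angular summation field: cos(phi_i + phi_j).
    phi = np.arccos(np.clip(s, -1.0, 1.0))
    gaf = np.cos(phi[:, None] + phi[None, :])
    # Replicate to 3 channels to match typical ViT preprocessing.
    return np.stack([gaf] * 3, axis=0)

img = series_to_image(np.sin(np.linspace(0.0, 6.28, 100)))
print(img.shape)  # (3, 32, 32)
```

The point of such an encoding is that it turns temporal structure into spatial structure, so a frozen vision backbone (e.g., CLIP or DINOv2) can be reused as a feature extractor without time-series-specific pretraining.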
03.07.2025 07:59
The submission deadline for the proceedings track is today!
#ICCV2025 @iccv.bsky.social
26.06.2025 11:19
The submission deadline is extended by 6 days.
#ICCV2025 @iccv.bsky.social
23.06.2025 12:14
This is a joint work with great collaborators Ievgen Redko, Anton Mallasto, Charlotte Laclau, Karol Arndt, Oliver Struckmeier, Markus Heinonen, Ville Kyrki, Samuel Kaski.
tl;dr: We propose a way to compare deep neural architectures, covering everything from AlexNet to ViTs and beyond.
15.06.2025 05:14
I will be presenting our paper on measuring the non-linearity of deep neural networks at @cvprconference.bsky.social!
Project page: qbouniot.github.io/affscore_web...
Come join me on Sunday, 15th June, from 10:30 to 12:30, ExHall D, Poster #402. #CVPR2025
15.06.2025 05:14
Call for papers at the eXCV workshop at ICCV 2025.
Join us in taking stock of the state of the field of explainability in computer vision, at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!
@iccv.bsky.social
14.06.2025 15:47
PhD Spotlight: Thomas Hummel
A spotlight on our one and only @hummelth.bsky.social, who will defend his PhD on 23rd June!
Thomas started his PhD in 2020 at @unituebingen.bsky.social as part of the IMPRS-IS program, under the supervision of @zeynepakata.bsky.social. His full story:
19.05.2025 12:36
PhD Spotlight: Jae Myung Kim
We're thrilled to celebrate Jae Myung Kim, who will defend his PhD on 25th June!
Jae Myung began his PhD at @unituebingen.bsky.social as part of the ELLIS & IMPRS-IS programs, advised by @zeynepakata.bsky.social and collaborating closely with Cordelia Schmid.
12.05.2025 11:16
#CVPR2025 is heading to the 'Music City', Nashville! Join us from June 11–15. We're thrilled to announce that we'll be presenting four papers at @cvprconference.bsky.social! Check out the thread below for highlights, and feel free to stop by and chat with our authors!
28.04.2025 11:51
Happy to share that we have 4 papers to be presented at the upcoming #ICLR2025 in the beautiful city of #Singapore. Check out our website for more details: eml-munich.de/publications. We will introduce the talented authors and their papers very soon, stay tuned!
19.03.2025 11:54
Thrilled to announce that four papers from our group have been accepted to #CVPR2025 in Nashville! Congrats to all authors & collaborators.
Our work spans multimodal pre-training, model merging, and more.
Papers & code: eml-munich.de/publications
See threads for highlights on each paper.
#CVPR
02.04.2025 11:36
Can we enhance the performance of T2I models without any fine-tuning?
We show that with our ReNO (Reward-based Noise Optimization), one-step models consistently surpass all current open-source text-to-image models within a computational budget of 20–50 seconds!
#NeurIPS2024
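The core idea, optimizing the initial noise against a reward rather than fine-tuning the model, can be illustrated with a toy sketch. This is NOT the authors' implementation (ReNO optimizes the latent noise of a one-step diffusion model against learned reward models); here a fixed linear map stands in for the generator, a negative squared distance stands in for the reward, and all names (`generate`, `reward`, `W`, `target`) are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))      # stand-in for a frozen one-step generator
target = rng.standard_normal(4)      # stand-in for what the reward model prefers

def generate(z):
    # One-step "generation": a fixed map from noise to output.
    return W @ z

def reward(x):
    # Toy differentiable reward: closer to `target` is better.
    return -np.sum((x - target) ** 2)

def reward_grad_wrt_z(z):
    # Chain rule: dr/dz = W^T dr/dx, with dr/dx = -2 (x - target).
    return W.T @ (-2.0 * (generate(z) - target))

z = rng.standard_normal(4)           # initial noise
r0 = reward(generate(z))
for _ in range(500):                 # gradient ascent on the noise only
    z += 0.005 * reward_grad_wrt_z(z)

print(r0, reward(generate(z)))       # the reward improves as the noise is tuned
```

The key design point this mirrors is that the generator's weights never change: only the input noise is updated, which is why the whole procedure fits in a small inference-time budget.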
11.12.2024 23:05
Continually pretraining large multimodal models to keep them up to date all the time is tough, covering everything from adapters, merging, and meta-scheduling to data design and more!
So I'm really happy to present our large-scale study at #NeurIPS2024!
Come drop by to talk about all that and more!
10.12.2024 16:42