Join us for the UniReps Workshop: Unifying Representations in Neural Models at @neuripsconf.bsky.social 2025!
Ballroom 20D, San Diego Convention Center
Dec 6
Don't forget to fill out the participation form. Joining in person or remotely? We welcome your questions for the panel.
unireps.org
01.12.2025 18:40
@andre-longon.bsky.social led and executed this project beautifully. He's applying to PhD programs this fall and would be an incredible addition to any lab!
08.10.2025 20:54
Also thanks to @david-klindt.bsky.social for an incredible collaboration.
08.10.2025 20:54
The takeaway: superposition isn't just an interpretability issue; it warps alignment metrics too. Disentangling reveals the true representational overlap between models, and between models and brains.
08.10.2025 20:54
Across toy models, ImageNet DNNs (ResNet, ViT), and even brain data (NSD), alignment scores jump once we replace base neurons with their disentangled SAE latents, showing that superposition can mask shared structure.
08.10.2025 20:54
We develop a theory showing how superposition arrangements deflate predictive-mapping metrics. Then we test it: disentangling with sparse autoencoders (SAEs) reveals hidden correspondences.
08.10.2025 20:54
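For readers unfamiliar with the setup, here is a minimal sketch of a standard ReLU sparse autoencoder of the kind used for this sort of disentangling. The sizes, random initialization, and L1 coefficient are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_latents = 64, 256  # overcomplete: more latents than neurons

# Illustrative untrained weights; in practice these are learned.
W_enc = rng.normal(0, 0.1, (n_neurons, n_latents))
b_enc = np.zeros(n_latents)
W_dec = rng.normal(0, 0.1, (n_latents, n_neurons))

def sae_forward(x):
    # ReLU encoding yields nonnegative, sparse latents that can
    # stand in for the entangled base neurons.
    z = np.maximum(x @ W_enc + b_enc, 0.0)
    return z @ W_dec, z

x = rng.normal(size=(8, n_neurons))   # a batch of neuron activations
x_hat, z = sae_forward(x)

# Typical training objective: reconstruction error plus an L1
# sparsity penalty on the latents (coefficient is an assumption).
loss = ((x_hat - x) ** 2).mean() + 1e-3 * np.abs(z).mean()
```

Alignment is then measured on the latents `z` rather than on the raw neurons `x`.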
Superposition disentanglement of neural representations reveals hidden alignment
The superposition hypothesis states that a single neuron within a population may participate in the representation of multiple features in order for the population to represent more features than the ...
Superposition has reshaped interpretability research. In our @unireps.bsky.social paper led by @andre-longon.bsky.social we show it also matters for measuring alignment! Two systems can represent the same features yet appear misaligned if those features are mixed differently across neurons.
08.10.2025 20:54
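The core claim above can be illustrated with a toy sketch (my own, not the paper's code): two systems carry the same two sparse features but superpose them onto neurons with different mixing matrices. A naive neuron-to-neuron correlation looks low, while comparing the recovered features shows the systems are identical. For simplicity, the "disentangling" step here inverts the known mixing; an SAE plays that role in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two sparse ground-truth features shared by both systems.
feats = rng.random((1000, 2)) * (rng.random((1000, 2)) < 0.3)

# Each system superposes the SAME features onto its neurons
# with a different (hypothetical) mixing matrix.
W_a = np.array([[1.0, 0.7], [0.2, 1.0]])
W_b = np.array([[0.3, 1.0], [1.0, -0.5]])
neurons_a, neurons_b = feats @ W_a, feats @ W_b

def mean_matched_corr(x, y):
    # Naive neuron-level alignment: correlate unit i with unit i.
    return float(np.mean([abs(np.corrcoef(x[:, i], y[:, i])[0, 1])
                          for i in range(x.shape[1])]))

naive = mean_matched_corr(neurons_a, neurons_b)

# Stand-in for disentangling: recover each system's features by
# inverting its known mixing (an SAE would do this in practice).
rec_a = neurons_a @ np.linalg.inv(W_a)
rec_b = neurons_b @ np.linalg.inv(W_b)
disentangled = mean_matched_corr(rec_a, rec_b)

print(naive, disentangled)  # alignment jumps after disentangling
```

Both systems represent the features perfectly, yet only the disentangled comparison reveals it.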