Thanks to @kempnerinstitute.bsky.social for the thoughtful feature on our recent @natneuro.nature.com paper!
14/14
Takeaway: what type of neural representation is "best" depends on the structure of the task and stage of learning.
In our setup, higher-dimensional representations are preferred later in learning, at the cost of a lower neural-latent correlation.
13/14
Finally, in rat CA1 and PFC during navigation learning, geometry trends align with this picture: after an initial rise across metrics, task-related dimensionality and SSF increase while correlation falls as performance plateaus.
12/14
Normative prediction: the "best" code depends on sample regime. Early learning (few samples) favors higher correlation / lower dimension.
With enough data, optimal codes expand variance into more latent directions: the signal becomes higher-dimensional, and single-neuron/latent correlations can drop.
11/14
In macaque ventral stream, our expression predicts Hebbian readout performance and improvements from pixels → V4 → IT.
A standout signature: SNF increases sharply in IT, consistent with latent-unrelated variability becoming more orthogonal to coding directions.
10/14
We then push to a task with naturalistic latents: a DeepLabCut-style pose network (24-D latents = x/y positions of 12 markers).
Across layers: dimension ↑, correlation ↓ (tradeoff), SSF/SNF ↑, and multitask error drops; our prediction explains almost all variance in error (R² ≈ 0.988).
9/14
We test the theory in trained vs random MLPs. The formula tracks empirical multitask error across layers, and training systematically sculpts geometry (correlation/dimension/factorization shift with depth).
8/14
Two factorization measures (sketched below):
SSF (f): how orthogonal the coding directions of distinct latent variables are. Higher SSF → more disentangled.
SNF (s): how orthogonal coding directions are from noise directions. Higher SNF → signal and noise live in orthogonal subspaces.
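A minimal numerical sketch of what such factorization scores could look like, assuming coding and noise directions have already been estimated; the overlap measure (mean |cosine|) and all names here are illustrative placeholders, not the paper's exact estimators:

```python
import numpy as np

def overlap(u, v):
    # |cos(angle)| between two population directions; 0 = orthogonal
    return abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def ssf(coding_dirs):
    # coding_dirs: (n_latents x n_neurons), one coding axis per latent variable
    n = len(coding_dirs)
    pairs = [overlap(coding_dirs[i], coding_dirs[j])
             for i in range(n) for j in range(i + 1, n)]
    return 1.0 - np.mean(pairs)  # 1 = distinct latents use orthogonal axes

def snf(coding_dirs, noise_dirs):
    # noise_dirs: top principal axes of latent-unrelated variability
    pairs = [overlap(c, d) for c in coding_dirs for d in noise_dirs]
    return 1.0 - np.mean(pairs)  # 1 = noise orthogonal to all coding axes
```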
7/14
Neural-latent correlation (c): how strongly single neurons co-vary with latent factors.
Effective dimension (PR): dimensionality of neural responses as measured by the participation ratio.
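For the effective dimension, the participation ratio has a standard closed form, PR = (Σᵢ λᵢ)² / Σᵢ λᵢ², where λᵢ are the eigenvalues of the neural covariance. A minimal sketch (illustrative names, trials-by-neurons response matrix assumed):

```python
import numpy as np

def participation_ratio(X):
    # X: (trials x neurons) response matrix
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))  # covariance eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()  # PR = (sum lam)^2 / sum lam^2
```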
6/14
Main result: the multitask generalization error is controlled by four geometric statistics of population activity:
(1) neural-latent correlation
(2) signal-signal factorization
(3) signal-noise factorization
(4) effective dimension (participation ratio)
5/14
We analyze a simple Hebbian-style supervised readout.
We ask how well it generalizes across many possible tasks built from the same latents.
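As a rough sketch of the kind of readout we mean (assuming binary labels y ∈ {−1, +1} and a response matrix X; names are placeholders, not the paper's notation), a Hebbian rule sets the weights to the label-weighted average response:

```python
import numpy as np

def hebbian_readout(X, y):
    # Hebbian rule: weights proportional to the label-weighted mean response
    return (y[:, None] * X).mean(axis=0)

def multitask_error(X_tr, X_te, tasks_tr, tasks_te):
    # Average generalization error of the Hebbian readout over many tasks
    errs = [np.mean(np.sign(X_te @ hebbian_readout(X_tr, y_tr)) != y_te)
            for y_tr, y_te in zip(tasks_tr, tasks_te)]
    return np.mean(errs)
```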
4/14
Example: In a visual classification task, each stimulus may correspond to a certain choice of z = (shape, size, orientation, x position, y position).
Two tasks may involve classifying hearts vs. circles or big vs. small shapes using neural population responses.
3/14
Setup: stimuli come from a common latent space, i.e. each stimulus corresponds to a latent vector z.
Tasks come from linearly shattering this latent space, and a downstream neuron learns a linear readout from neural population activity vectors, x.
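A minimal sketch of how one such task could be drawn, under illustrative assumptions (Gaussian latents and a random task direction u; each task is one linear dichotomy of the latent space):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.standard_normal((1000, 5))  # latent vectors z, e.g. (shape, size, ...)
u = rng.standard_normal(5)          # random hyperplane normal in latent space
y = np.sign(Z @ u)                  # binary task labels from linearly
                                    # shattering the latent space
```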
2/14
Core question: when many different tasks depend on the same underlying variables (shared latent structure), what properties of a neural population determine how well it can generalize across tasks?
Our paper is out in @natneuro.nature.com!
www.nature.com/articles/s41...
We develop a geometric theory of how neural populations support generalization across many tasks.
@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social
1/14
New preprint from our group (collaboration with @sueyeonchung.bsky.social) showing that discriminating odor components within a complex mixture is constrained by neural sensitivity rather than background interference, likely due to sparse representations at the front end.
Thank you for the support @neurovenki.bsky.social! Looking forward to doing and sharing exciting science in the years ahead!
In @thetransmitter.bsky.social's Rising Stars of Neuroscience 2025, we recognize 25 early-career researchers who have made outstanding scientific contributions and demonstrated a commitment to mentoring and community-building in neuroscience.
#neuroskyence #StateOfNeuroscience
bit.ly/4rnFnyQ
Thank you @thetransmitter.bsky.social for recognizing our group's work!
www.thetransmitter.org/early-career...
Wow that's really asymmetric! I wonder what causes it 🤔
#Cosyne2026 deadline is just around the corner:
16 October 2025!
See below for more on key dates and abstract submission:
www.cosyne.org/abstracts-su...
Also, those interested in comp neuro and deep learning/AI are encouraged to apply!
Ever wondered what gives rise to efficient neural population geometry? Our lab's new work, led by Sonica Saraf (w/ Tony Movshon), shows how diversity in single-neuron tuning shapes population-level representation geometries to improve perceptual efficiency. Congrats @sonicasaraf.bsky.social!
Congratulations!
*Interpretable theory of neural-AI alignment:
- Neural prediction: proceedings.neurips.cc/paper_files/...
- Representation similarity:
arxiv.org/pdf/2502.19648
3/3
*Neuro-inspired & efficient AI:
- MMCR (Maximum Manifold Capacity Representations): proceedings.neurips.cc/paper_files/...
- Contrast-equivariant SSL improves neural-model alignment
proceedings.neurips.cc/paper_files/...
2/3
Enjoyed speaking at the Frontiers in NeuroAI Symposium at @harvard.edu's @kempnerinstitute.bsky.social
Key papers from the talk:
*Manifold capacity theory:
- original: doi.org/10.1103/Phys...
- correlated: doi.org/10.1103/Phys...
- data-driven (latest theory): pmc.ncbi.nlm.nih.gov/articles/PMC...
1/3
Folks at Princeton have put together an aggregator to highlight examples of how federally funded research has directly impacted people's lives. They are looking for more studies to highlight, so send along suggestions and help make the benefits of science more visible!
publicusaresearchbenefits.com
Congratulations Mark!
You may not have thought about geometry since middle school. But the mathematician Yang-Hui He argues that it's intrinsic to being human. Tune in to "The Joy of Why" from @prx.org and @quantamagazine.bsky.social: www.quantamagazine.org/how-did-geom...