
SueYeon Chung

@sueyeonchung

comp neuro, neural manifolds, neuroAI, physics of learning | assistant professor @ harvard (physics, center for brain science, kempner institute) + @ Flatiron Institute https://www.sychung.org

4,563 Followers · 449 Following · 51 Posts · Joined 26.07.2023

Latest posts by SueYeon Chung @sueyeonchung

Thanks to @kempnerinstitute.bsky.social for the thoughtful feature on our recent @natneuro.nature.com paper!

25.02.2026 16:56 👍 7 🔁 1 💬 0 📌 0

14/14

Takeaway: what type of neural representation is "best" depends on the structure of the task and the stage of learning.

In our setup, higher-dimensional representations are preferred later in learning, at the cost of a lower neural-latent correlation.

10.02.2026 15:56 👍 9 🔁 0 💬 1 📌 0
Post image

13/14

Finally, in rat CA1 and PFC during navigation learning, geometry trends align with this picture: after an initial rise across all metrics, task-related dimensionality and SSF increase while correlation falls as performance plateaus.

10.02.2026 15:56 👍 1 🔁 0 💬 1 📌 0
Post image

12/14

Normative prediction: the "best" code depends on the sample regime. Early learning (few samples) favors higher correlation / lower dimension.

With enough data, optimal codes expand variance into more latent directions: a higher-dimensional signal, while single-neuron/latent correlations can drop.

10.02.2026 15:56 👍 5 🔁 0 💬 1 📌 0
Post image

11/14

In macaque ventral stream, our expression predicts Hebbian readout performance and improvements from pixels → V4 → IT.

A standout signature: SNF increases sharply in IT, consistent with latent-unrelated variability becoming more orthogonal to coding directions.

10.02.2026 15:56 👍 2 🔁 0 💬 1 📌 0
Post image

10/14

We then push to a naturalistic-latents task: a DeepLabCut-style pose network (24-D latents = x/y coordinates of 12 markers).

Across layers: dimension ↑, correlation ↓ (tradeoff), SSF/SNF ↑, and multitask error drops; our prediction explains almost all variance in the error (R² ≈ 0.988).

10.02.2026 15:56 👍 2 🔁 0 💬 1 📌 0
Post image
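Aside: the R² here can be read as the standard coefficient of determination between theory-predicted and empirically measured multitask errors across layers. A minimal numpy sketch with made-up per-layer numbers (illustrative only, not the paper's values):

```python
import numpy as np

# hypothetical per-layer values, for illustration only
predicted = np.array([0.41, 0.33, 0.27, 0.22, 0.19])  # theory prediction
empirical = np.array([0.40, 0.34, 0.26, 0.23, 0.18])  # measured multitask error

# coefficient of determination: R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((empirical - predicted) ** 2)
ss_tot = np.sum((empirical - empirical.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```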

9/14

We test the theory in trained vs random MLPs. The formula tracks empirical multitask error across layers, and training systematically sculpts geometry (correlation/dimension/factorization shift with depth).

10.02.2026 15:56 👍 2 🔁 0 💬 1 📌 0
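A minimal sketch of this kind of layer-wise geometric analysis, using a random (untrained) numpy MLP and the participation ratio (defined in post 7/14 below) as the tracked statistic; the paper's networks, tasks, and estimators are of course richer than this toy:

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_ratio(X):
    """PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues)
    of the response covariance; an effective dimension."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

# stimuli from shared latents, pushed through a random MLP
Z = rng.standard_normal((300, 5))               # latent vectors
h = np.tanh(Z @ rng.standard_normal((5, 64)))   # first layer activity
for layer in range(1, 5):
    h = np.tanh(h @ (rng.standard_normal((64, 64)) / np.sqrt(64)))
    print(f"layer {layer}: PR = {participation_ratio(h):.2f}")
# rerun with weights trained on a task to see how learning sculpts geometry
```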

8/14

Two factorization measures:

SSF (f): how orthogonal the coding directions of distinct latent variables are. Higher SSF → more disentangled.

SNF (s): how orthogonal coding directions are to noise directions. Higher SNF → signal and noise live in orthogonal subspaces.

10.02.2026 15:56 👍 3 🔁 0 💬 1 📌 0
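One plausible way to operationalize these two measures in numpy; a sketch under assumed definitions (coding directions from a least-squares fit of activity on latents, noise directions from the residuals), not the paper's exact estimators:

```python
import numpy as np

def coding_directions(X, Z):
    """Least-squares map from latents Z (samples x k) to activity X
    (samples x n); row i is the coding direction of latent i."""
    B, *_ = np.linalg.lstsq(Z - Z.mean(0), X - X.mean(0), rcond=None)
    return B  # shape (k, n)

def ssf(B):
    """Signal-signal factorization: 1 - mean |cosine| between coding
    directions of distinct latents (1 = fully disentangled)."""
    U = B / np.linalg.norm(B, axis=1, keepdims=True)
    C = np.abs(U @ U.T)
    return 1.0 - C[~np.eye(len(B), dtype=bool)].mean()

def snf(B, X, Z):
    """Signal-noise factorization: 1 - mean |cosine| between coding
    directions and the top principal axes of the residual noise."""
    resid = (X - X.mean(0)) - (Z - Z.mean(0)) @ B
    _, _, Vt = np.linalg.svd(resid, full_matrices=False)
    U = B / np.linalg.norm(B, axis=1, keepdims=True)
    return 1.0 - np.abs(U @ Vt[:len(B)].T).mean()

# usage: B = coding_directions(X, Z); print(ssf(B), snf(B, X, Z))
```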

7/14

Neural-latent correlation (c): how strongly single neurons co-vary with latent factors.

Effective dimension (PR): dimensionality of neural responses as measured by the participation ratio.

10.02.2026 15:56 👍 1 🔁 0 💬 1 📌 0
Post image
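A minimal numpy sketch of both statistics. The participation ratio is the standard PR = (Σλ)² / Σλ² over covariance eigenvalues; for c we use the mean |Pearson r| between neurons and latents, which is one plausible reading rather than the paper's exact estimator:

```python
import numpy as np

def participation_ratio(X):
    """Effective dimension of responses X (samples x neurons):
    PR = (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(np.cov(X, rowvar=False)), 0, None)
    return lam.sum() ** 2 / (lam ** 2).sum()

def neural_latent_correlation(X, Z):
    """Mean |Pearson r| between each neuron (column of X)
    and each latent factor (column of Z)."""
    Xc = (X - X.mean(0)) / X.std(0)
    Zc = (Z - Z.mean(0)) / Z.std(0)
    return np.abs(Xc.T @ Zc / len(X)).mean()

# toy check: 200 samples, 50 neurons driven by 5 latents plus noise
rng = np.random.default_rng(0)
Z = rng.standard_normal((200, 5))
X = Z @ rng.standard_normal((5, 50)) + 0.5 * rng.standard_normal((200, 50))
print(participation_ratio(X), neural_latent_correlation(X, Z))
```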

6/14

Main result: the multitask generalization error is controlled by four geometric statistics of population activity:

(1) neural-latent correlation
(2) signal-signal factorization
(3) signal-noise factorization
(4) effective dimension (participation ratio)

10.02.2026 15:56 👍 4 🔁 0 💬 1 📌 0

5/14

We analyze a simple Hebbian-style supervised readout.

We ask how well it generalizes across many possible tasks built from the same latents.

10.02.2026 15:56 👍 1 🔁 0 💬 1 📌 0
Post image
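A toy version of such a readout; a sketch assuming the classic Hebbian rule (weights = mean of label-weighted activity), whereas the paper's treatment is analytic and more general:

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_error(X_tr, y_tr, X_te, y_te):
    """Hebbian-style readout: w = mean label-weighted activity;
    prediction = sign(x . w)."""
    w = (y_tr[:, None] * X_tr).mean(axis=0)
    return np.mean(np.sign(X_te @ w) != y_te)

# shared latents -> noisy population activity via a fixed random encoder
Z = rng.standard_normal((400, 5))
X = np.tanh(Z @ rng.standard_normal((5, 100)))
X += 0.3 * rng.standard_normal(X.shape)

# average generalization error over many tasks built from the same latents
errs = [
    hebbian_error(X[:200], np.sign(Z[:200] @ t),
                  X[200:], np.sign(Z[200:] @ t))
    for t in rng.standard_normal((100, 5))
]
print(f"mean multitask generalization error: {np.mean(errs):.3f}")
```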

4/14

Example: In a visual classification task, each stimulus may correspond to a certain choice of z = (shape, size, orientation, x position, y position).

Two tasks may involve classifying hearts vs. circles or big vs. small shapes using neural population responses.

10.02.2026 15:56 👍 2 🔁 0 💬 1 📌 0
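A concrete toy encoding of this example (illustrative conventions only, e.g. coding shape as ±1):

```python
import numpy as np

rng = np.random.default_rng(0)

# one latent vector per stimulus: z = (shape, size, orientation, x, y)
n = 200
Z = np.column_stack([
    rng.choice([-1.0, 1.0], n),                 # shape: +1 heart, -1 circle
    rng.uniform(0.5, 2.0, n),                   # size
    rng.uniform(0.0, np.pi, n),                 # orientation
    rng.uniform(-1.0, 1.0, n),                  # x position
    rng.uniform(-1.0, 1.0, n),                  # y position
])

# two different tasks built from the same latents
y_shape = Z[:, 0]                               # hearts vs circles
y_size = np.sign(Z[:, 1] - np.median(Z[:, 1]))  # big vs small
```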

3/14

Setup: stimuli come from a common latent space, i.e., each stimulus corresponds to a latent vector z.

Tasks come from linearly shattering this latent space, and a downstream neuron learns a linear readout from neural population activity vectors, x.

10.02.2026 15:56 👍 1 🔁 0 💬 1 📌 0
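A minimal sketch of this setup, with a hypothetical random encoder standing in for the brain's mapping from latents to population activity:

```python
import numpy as np

rng = np.random.default_rng(0)

# each stimulus is a latent vector z; a fixed map sends z to activity x
Z = rng.standard_normal((500, 5))                # 500 stimuli, 5 latents
X = np.tanh(Z @ rng.standard_normal((5, 100)))   # population activity

# one task = one linear shattering of the latent space (hyperplane labels)
t = rng.standard_normal(5)
y = np.sign(Z @ t)                               # binary labels in {-1, +1}

# a downstream unit learns a linear readout from x (least squares here)
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("train accuracy:", np.mean(np.sign(X @ w) == y))
```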

2/14

Core question: when many different tasks depend on the same underlying variables (shared latent structure), what properties of a neural population determine how well it can generalize across tasks?

10.02.2026 15:56 👍 1 🔁 0 💬 1 📌 0
Post image

Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14

10.02.2026 15:56 👍 273 🔁 100 💬 7 📌 1

New preprint from our group (collaboration with @sueyeonchung.bsky.social) showing that discriminating odor components within a complex mixture is constrained by neural sensitivity rather than background interference - likely due to sparse representations at the front end.

29.01.2026 14:10 👍 37 🔁 13 💬 1 📌 0

Thank you for the support @neurovenki.bsky.social! Looking forward to doing and sharing exciting science in the years ahead 📝💻🧪

25.11.2025 00:16 👍 1 🔁 0 💬 0 📌 0
Video thumbnail

In @thetransmitter.bsky.social's Rising Stars of Neuroscience 2025, we recognize 25 early-career researchers who have made outstanding scientific contributions and demonstrated a commitment to mentoring and community-building in neuroscience.

#neuroskyence #StateOfNeuroscience

bit.ly/4rnFnyQ

24.11.2025 14:50 👍 59 🔁 22 💬 1 📌 4

Thank you @thetransmitter.bsky.social for recognizing our group's work!

www.thetransmitter.org/early-career...

19.11.2025 17:45 👍 10 🔁 1 💬 1 📌 0

Wow that's really asymmetric! I wonder what causes it 🤔

05.11.2025 21:35 👍 1 🔁 0 💬 1 📌 0
Abstract Submission - COSYNE
Submit your COSYNE 2026 abstract; double-blind, 2-page PDF. Opens Sept 5, 2025. Deadline and guidelines inside.

#Cosyne2026 deadline is just around the corner:

🧠📜 16 October 2025! 📜🧠

See below for more on key dates and abstract submission:
www.cosyne.org/abstracts-su...

07.10.2025 20:19 👍 10 🔁 4 💬 1 📌 0

Also those interested in comp neuro and deep learning/AI are encouraged to apply 👇🏻👇🏻

22.09.2025 21:02 👍 6 🔁 1 💬 0 📌 0

Ever wondered what gives rise to efficient neural population geometry? Our lab's new work, led by Sonica Saraf (w/ Tony Movshon), shows how diversity in single-neuron tuning shapes population-level representation geometries to improve perceptual efficiency. Congrats @sonicasaraf.bsky.social!

02.07.2025 14:32 👍 28 🔁 6 💬 0 📌 0

Congratulations!

13.06.2025 19:57 👍 1 🔁 0 💬 0 📌 0

*Interpretable theory of neural-AI alignment:
- Neural prediction: proceedings.neurips.cc/paper_files/...
- Representation similarity:
arxiv.org/pdf/2502.19648

3/3

10.06.2025 00:15 👍 5 🔁 0 💬 0 📌 0

*Neuro-inspired & efficient AI:
- MMCR (Maximum Manifold Capacity Representations): proceedings.neurips.cc/paper_files/...
- Contrast-equivariant SSL improves neural-model alignment
proceedings.neurips.cc/paper_files/...

2/3

10.06.2025 00:15 👍 6 🔁 0 💬 1 📌 0

Enjoyed speaking at the Frontiers in NeuroAI Symposium at @harvard.edu's @kempnerinstitute.bsky.social

Key papers from the talk:

*Manifold capacity theory:
- original: doi.org/10.1103/Phys...
- correlated: doi.org/10.1103/Phys...
- data-driven (latest theory): pmc.ncbi.nlm.nih.gov/articles/PMC...

1/3

10.06.2025 00:15 👍 26 🔁 5 💬 1 📌 0
Searchable database of tangible benefits that federally-funded research gave us. A crowd-sourced site. Health and Well-being. National Security. Prosperity.

Folks at Princeton have put together an aggregator to highlight examples of how federally funded research has directly impacted people's lives. They are looking for more studies to highlight, so send along suggestions and help make the benefits of science more visible!

publicusaresearchbenefits.com

22.05.2025 17:09 👍 238 🔁 124 💬 5 📌 10

Congratulations Mark!

23.05.2025 12:25 👍 2 🔁 0 💬 0 📌 0
How Did Geometry Create Modern Physics? | Quanta Magazine
Geometry may have its origins thousands of years ago in ancient land surveying, but it has also had a surprising impact on modern physics. In the latest episode of The Joy of Why, Yang-Hui He explores...

You may not have thought about geometry since middle school. But the mathematician Yang-Hui He argues that it's intrinsic to being human. Tune in to "The Joy of Why" from @prx.org and @quantamagazine.bsky.social: www.quantamagazine.org/how-did-geom...

16.05.2025 02:49 👍 63 🔁 7 💬 2 📌 0