Accepted to ICLR! see you in 🇧🇷
The original Dynamical Similarity Analysis (DSA), developed by @neurostrow.bsky.social and Ila Fiete, is a powerful method for comparing trajectories of (nonlinear) neural dynamics across datasets and models: arxiv.org/abs/2306.10168
Wanna compare dynamics across neural data, RNNs, or dynamical systems? We've got a fast and furious method 🏎️
The 1st preprint of my PhD 🥳 fast dynamical similarity analysis (fastDSA):
📄: arxiv.org/abs/2511.22828
💻: github.com/CMC-lab/fast...
I'll be @cosynemeeting.bsky.social - happy to chat 👋
Causal to what? We know from biophysics how spikes causally trigger neurotransmitter release, how neurotransmitters cause PSPs, and how PSPs in turn trigger spiking in postsynaptic neurons, etc…
Woah huge!! Congrats
At #NeurIPS2025!
🎉 Excited to present Conditionally Linear Dynamical Systems (CLDS). We leverage the dependence of neural dynamics on task covariates to yield an interpretable, flexible model of dynamics.
Come meet us and check it out!
📍: Poster #2209, Hall C,D,E on Thu Dec 4, 11 am–2 pm PST.
🧵/6
13/ Feel free to reach out to discuss this work, or its application to your field of study. Or come swing by our poster at #NeurIPS2025. We'd love to chat!
📄 Paper: openreview.net/forum?id=I82...
💾 Code: github.com/adamjeisen/J...
📍 Poster: Thu 4 Dec, 11 am–2 pm PST (#2111)
Really proud of this project with @adamjeisen.bsky.social
- Jacobian estimation is a challenging and generic problem in dynamics, and I'm excited for all the future use cases of our method! See you at NeurIPS 🧠💻
How do brain areas control each other? 🧠🎛️
✨ In our NeurIPS 2025 Spotlight paper, we introduce a data-driven framework to answer this question using deep learning, nonlinear control, and differential geometry. 🧵⬇️
Also, from a dynamics perspective, directions with very little variance (in a statistical sense) can still have an outsized effect on activity in directions with much larger variance!
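A toy illustration of the point above (the matrices and numbers here are my own, not from any paper): in a non-normal linear system, a coordinate that itself carries almost no variance can, through an off-diagonal coupling, generate most of the variance elsewhere. Ablating that coupling collapses the "large" direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-normal linear system: coordinate 1 has little variance,
# but feeds strongly into coordinate 0 via the off-diagonal term A[0, 1].
A = np.array([[0.9, 2.0],
              [0.0, 0.5]])

def simulate(A, T=50_000, noise=0.1):
    """Roll out x_{t+1} = A x_t + noise, return the (T, 2) trajectory."""
    x = np.zeros(2)
    xs = np.empty((T, 2))
    for t in range(T):
        x = A @ x + noise * rng.standard_normal(2)
        xs[t] = x
    return xs

var = simulate(A).var(axis=0)
print(var)  # coordinate 0 carries far more variance than coordinate 1

# Remove the low-variance coordinate's influence: variance in coordinate 0 collapses.
A_cut = A.copy()
A_cut[0, 1] = 0.0
var_cut = simulate(A_cut).var(axis=0)
print(var_cut)
```

The stationary covariance (via the discrete Lyapunov equation) says coordinate 0 should carry roughly 60x the variance of coordinate 1 here, almost all of it inherited from the "small" direction.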
Controversial take: our ICLR reviews actually helped make our paper better
Thanks again to all my amazing collaborators, especially my co-first author @annhuang42.bsky.social !
Public code is here github.com/mitchellostr... , and it is soon to be merged into the DSA package (pip install dsa-metric)
Second, we develop a new similarity metric grounded in control theory and shape metrics, which is extremely fast and robust (no figure here)! The metric is based on controllability, which measures how easily inputs can move the state of a dynamical system to arbitrary points.
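To make the controllability idea concrete, here is a minimal sketch (my own toy construction, not the paper's actual metric): the controllability Gramian summarizes how easily inputs move the state, and its spectrum is invariant to orthogonal changes of state basis, so comparing sorted Gramian spectra gives a crude basis-free similarity score.

```python
import numpy as np

def ctrb_gramian(A, B, horizon=100):
    """Finite-horizon controllability Gramian: W = sum_k A^k B B^T (A^T)^k."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = A @ Ak
    return W

def gramian_distance(A1, B1, A2, B2):
    """Toy score: distance between sorted Gramian spectra, which are
    invariant to orthogonal re-parameterizations of the state.
    (Illustrative only; not the metric from the paper.)"""
    s1 = np.sort(np.linalg.eigvalsh(ctrb_gramian(A1, B1)))
    s2 = np.sort(np.linalg.eigvalsh(ctrb_gramian(A2, B2)))
    return float(np.linalg.norm(s1 - s2))

# The same system viewed in a rotated basis scores (near) zero:
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[1.0], [0.5]])
th = 0.7
Q = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
print(gramian_distance(A, B, Q @ A @ Q.T, Q @ B))  # ~ 0
```

Rotating the state (A -> QAQ^T, B -> QB) maps the Gramian to QWQ^T, leaving its eigenvalues unchanged, which is why the score above vanishes.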
First, we apply subspace identification (subspace ID) methods from classical control theory to learn input-controlled linear dynamical systems (key in partially observed settings). This is new for the Dynamic Mode Decomposition (DMD) literature, and the method is robust to extreme partial observation (12/)
Now for the 🤓: InputDSA leverages 2 new technical developments (11/)
We think that inputDSA could be especially useful when experimentalists can perturb a system (e.g., with optogenetics) for system identification. (10/)
As with DSA, inputDSA complements other comparison metrics (@itsneuronal.bsky.social , @mschrimpf.bsky.social ). One important result we found is that even for input-driven dynamics, the original DSA still gives good comparisons, but inputDSA can sharpen them! (9/)
On two datasets, we apply random perturbations (noise, functions) to the true input, or utilize other task variables, when performing inputDSA. We measure the correlation between the surrogate and true scores, finding that in general, inputDSA is quite robust! (8/) (shoutout @oliviercodol.bsky.social)
One more analysis with greater implications: In most neuroscience settings, we don't know the true inputs to a brain region. When we build models, we apply proxy inputs that we think are related to the true input. With InputDSA, we can evaluate this! (7/) (as with, e.g., line attractors in the hypothalamus)
Second: @thomas-zhihao-luo.bsky.social recently showed that rat cortical dynamics transition from primarily input-driven to autonomous during a 2-alternative forced choice task. InputDSA corroborates this, showing that cortex becomes less input-controllable across time! (6/)
On @satpreetsingh.bsky.social's Deep RL fly navigation task (from @bingbrunton.bsky.social's lab), we show that successful models become more similar to each other across training, while unsuccessful ones diverge in inputDSA score: an Anna Karenina/universality result! (5/)
Let's look at some cool applications first! We made a lot of technical developments, but I'll save those till the end 🤓:
The basic idea of DSA: approximate your dynamics so that comparison is tractable. This is backed by Koopman Operator Theory and relates to work by @wtredman.bsky.social and Igor Mezic. InputDSA naturally extends DSA: we can compare intrinsic dynamics, the effect of input, or both jointly! (3/)
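The "approximate, then compare" move can be sketched in a few lines (my own minimal illustration; DSA itself compares the fitted operators with a Procrustes-style metric rather than raw spectra): fit a linear operator to (optionally delay-embedded) data by least squares, then compare systems through properties of that operator that don't depend on the coordinate system.

```python
import numpy as np

def dmd_operator(X, n_delays=1):
    """Plain DMD with optional delay embedding: least-squares A such that
    H_{t+1} ~ A H_t, where H stacks n_delays lagged copies of X (n x T)."""
    T = X.shape[1]
    H = np.vstack([X[:, i:T - n_delays + i + 1] for i in range(n_delays)])
    return H[:, 1:] @ np.linalg.pinv(H[:, :-1])

# Two observations of the same rotation, in different coordinates,
# share DMD eigenvalues:
th = 0.2
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
X = np.empty((2, 200))
X[:, 0] = [1.0, 0.0]
for t in range(199):
    X[:, t + 1] = R @ X[:, t]

M = np.array([[2.0, 1.0], [0.5, 1.0]])  # arbitrary invertible change of basis
eig1 = np.sort_complex(np.linalg.eigvals(dmd_operator(X)))
eig2 = np.sort_complex(np.linalg.eigvals(dmd_operator(M @ X)))
print(np.allclose(eig1, eig2))  # True
```

The eigenvalues sit on the unit circle (pure rotation), and agree across the two coordinate systems even though the raw trajectories look completely different.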
We introduce InputDSA, a method that builds on our prior work, Dynamical Similarity Analysis (DSA), to quantitatively compare input-driven dynamical systems! Especially relevant for neuroscience, but it can be applied to any type of time series data! 🧠 💻 (2/)
Our next paper on comparing dynamical systems (with special interest in artificial and biological neural networks) is out!! Joint work with @annhuang42.bsky.social , as well as @satpreetsingh.bsky.social , @leokoz8.bsky.social , Ila Fiete, and @kanakarajanphd.bsky.social : arxiv.org/pdf/2510.25943
Very excited to share a new preprint that's been brewing for a long time! This work was led by the exceptional @traceym.bsky.social, and made possible by a developmental + comparative + computational dream team.
osf.io/preprints/ps...
This doesn't say anything about how the attractor is instantiated, i.e., the equation itself (let alone its mapping to the biology, which is another criterion needed for a mechanism, according to Craver). I'm fine with this claim if it's what the post means!
Perhaps what is meant by 'attractors aren't mechanisms' is that you can write down many different equations that realize the same attractor (e.g., transform the system dx/dt = -x by any diffeomorphism phi; this preserves its asymptotic behavior, and is known as a conjugacy).
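A standard worked example of this (my choice of map, not from the post): push dx/dt = -x through phi(x) = x^3, a smooth map with continuous inverse (a homeomorphism, and a smooth conjugacy away from the origin):

```latex
\dot{x} = -x, \qquad y = \varphi(x) = x^{3}
\quad\Longrightarrow\quad
\dot{y} = 3x^{2}\,\dot{x} = -3x^{3} = -3y .
```

Both systems have the same attractor (the origin) and the same qualitative flow, yet the equations differ, which is exactly the sense in which the attractor underdetermines the mechanism.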