📌 Overall, MRINE can advance #BCI by:
🔸 Nonlinearly fusing neural modalities
🔸 Enabling real-time decoding
🔸 Handling varied timescales & distributions
📍 Poster Today: Hall C,D,E #2100 | Thurs 4 Dec | 4:30-7:30 PM @neuripsconf.bsky.social
📄 openreview.net/forum?id=jOH...
🖥️ github.com/ShanechiLab/...
04.12.2025 16:34
On two public spike-LFP motor cortex datasets during reaching, MRINE:
✅ Improves real-time behavior decoding via multimodal fusion
✅ Outperforms recent multimodal neural models
It also:
✅ Generalizes to a high-dim dataset with Neuropixels spikes and calcium imaging
04.12.2025 16:34
MRINE tackles these challenges with a multiscale encoder that:
🔸 Models each modality at its own timescale & forward-predicts its dynamics to infer fast latent factors
🔸 Nonlinearly fuses modality-specific factors
🔸 Enables real-time inference via its linear state-space model (SSM) backbone
04.12.2025 16:34
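The encoder above can be caricatured in a few lines of numpy. This is a hedged toy, not the actual MRINE implementation: each modality gets its own recursive linear SSM filter updated at its native sampling rate, and the modality-specific factors are then fused nonlinearly. The `ssm_filter` and `fuse` helpers, the fixed innovation gain of 0.5, and the random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ssm_filter(y, A, C, dt_steps):
    """Recursive linear-SSM filter sketch: one filter per modality,
    updated only every `dt_steps` samples to respect its native timescale."""
    x = np.zeros(A.shape[0])
    latents = []
    for t, obs in enumerate(y):
        x = A @ x                                 # forward-predict latent dynamics
        if t % dt_steps == 0:                     # modality observed at its own rate
            x = x + 0.5 * C.T @ (obs - C @ x)     # fixed-gain innovation update
        latents.append(x.copy())
    return np.array(latents)

def fuse(z_fast, z_slow, W1, W2):
    """Nonlinear fusion of modality-specific factors (one-hidden-layer tanh MLP)."""
    z = np.concatenate([z_fast, z_slow], axis=-1)
    return np.tanh(z @ W1) @ W2

T = 200
A = np.array([[0.95, 0.1], [-0.1, 0.95]])        # stable 2-D latent dynamics
C = np.eye(2)
spikes = rng.normal(size=(T, 2))                 # fast modality (every step)
lfp = rng.normal(size=(T, 2))                    # slow modality (every 5 steps)

z_spk = ssm_filter(spikes, A, C, dt_steps=1)
z_lfp = ssm_filter(lfp, A, C, dt_steps=5)
W1 = rng.normal(scale=0.1, size=(4, 8))
W2 = rng.normal(scale=0.1, size=(8, 2))
decoded = fuse(z_spk, z_lfp, W1, W2)             # each step uses only past data
print(decoded.shape)  # (200, 2)
```

Because each filter is recursive and causal, the fused estimate at time t depends only on data up to t, which is what makes real-time decoding possible.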
Brain activity is recorded through modalities like spikes and LFPs. Fusing them can unlock richer neural representations & improve BCI performance and robustness.
But fusion is hard: modalities differ in timescales & distributions, and BCIs require real-time inference.
04.12.2025 16:34
Can we achieve nonlinear multimodal neural fusion while also enabling real-time recursive decoding for #BCI?
In our third paper at #NeurIPS2025, we present MRINE, which does exactly that, improving decoding even for modalities with distinct timescales & distributions.
🎤 Eray Erturk
🧵 Paper, Code ⬇️
04.12.2025 16:34
📌 BaRISTA shows how to flexibly encode spatial information toward future foundation models of intracranial brain activity.
🎤 L Oganesian, S Hashemi
📍 Poster: Wed Dec 3 4:30-7:30pm, Exhibit Hall C,D,E 2011 @neuripsconf.bsky.social
Paper: openreview.net/forum?id=LDj...
Code: github.com/ShanechiLab/...
02.12.2025 20:20
We also show that BaRISTA:
✅ Scales with increased pretraining data
✅ Generalizes to held-out subjects
✅ Can use spatial scales larger than channel level without sacrificing channel reconstruction performance
02.12.2025 20:20
On a public iEEG dataset (Brain Treebank):
✅ BaRISTA shows that spatial encoding at scales larger than individual channels improves downstream decoding of auditory or visual features
✅ BaRISTA outperforms state-of-the-art iEEG models given its flexible spatial encoding
02.12.2025 20:20
To address how to encode spatial information, BaRISTA:
🔹 Flexibly incorporates multiple spatial scales in a transformer model trained with masked latent reconstruction
🔹 Separates the choice of spatial encoding scale from the masking scale to isolate their effects
02.12.2025 20:20
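The separation of the two scales can be sketched in numpy (this toy is not BaRISTA's transformer; the 16-channel layout, mean-pooling into region tokens, and the `region_tokens`/`masked_reconstruction_targets` helpers are illustrative assumptions): channels are pooled into tokens at one spatial scale, while reconstruction masks are drawn at an independently chosen scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def region_tokens(x, channels_per_token):
    """Spatial ENCODING scale: average groups of channels into one token each."""
    C, T = x.shape
    return x.reshape(C // channels_per_token, channels_per_token, T).mean(axis=1)

def masked_reconstruction_targets(x, mask_scale, mask_frac=0.25):
    """MASKING scale, chosen independently of the encoding scale: zero out
    contiguous channel blocks of size `mask_scale`; return (masked input, mask)."""
    C, T = x.shape
    n_blocks = C // mask_scale
    masked = x.copy()
    mask = np.zeros(C, dtype=bool)
    picks = rng.choice(n_blocks, size=max(1, int(mask_frac * n_blocks)), replace=False)
    for b in picks:
        sl = slice(b * mask_scale, (b + 1) * mask_scale)
        masked[sl] = 0.0
        mask[sl] = True
    return masked, mask

x = rng.normal(size=(16, 50))                         # 16 iEEG channels, 50 time bins
tokens = region_tokens(x, channels_per_token=4)       # encode at region scale
masked_x, mask = masked_reconstruction_targets(x, mask_scale=2)  # mask at pair scale
print(tokens.shape, int(mask.sum()))
```

A model would then be trained to reconstruct the masked channels (or their latents) from `tokens`; varying `channels_per_token` and `mask_scale` independently is what lets the two effects be isolated.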
At what scale should spatial information be encoded toward future foundation models of intracranial brain activity?
In our second paper at #NeurIPS2025, we present BaRISTA, a self-supervised multi-subject model that enables flexible spatial encoding & boosts downstream decoding.
🧵 Paper, Code ⬇️
02.12.2025 20:20
📌 Overall, our cross-modal distillation can enable high-accuracy and stable LFP models for #BCIs.
🎤 Eray Erturk, Saba Hashemi
See Eray at #NeurIPS2025
📍 Poster Session 1, Hall C,D,E #2115 | Wed 3 Dec | 11AM - 2PM @neuripsconf.bsky.social
📄 openreview.net/forum?id=hT7...
🖥️ github.com/ShanechiLab/...
25.11.2025 20:15
Our framework achieves:
✅ Substantial decoding performance improvement over several LFP-only baselines
✅ Consistent improvements in unsupervised, supervised & multi-session distillation setups
✅ Generalization to unseen sessions without additional distillation
✅ Spike-aligned LFP latent structure
25.11.2025 20:15
Our framework:
1️⃣ Pretrains a multi-session spike model
2️⃣ Fine-tunes the multi-session spike model on new spike signals
3️⃣ Trains the distilled LFP model via cross-modal representation alignment
🔥 This produces spike-informed LFP models with significantly improved decoding.
25.11.2025 20:15
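Step 3️⃣ can be sketched with a deliberately tiny stand-in: a linear LFP "student" trained by gradient descent to match a frozen spike "teacher's" latents under an MSE alignment loss. The `encode`/`distill` helpers, the linear models, and the teacher whose latents are exactly linear in the LFP features are toy assumptions; the paper's models are transformers, and only the alignment idea carries over.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Linear encoder standing in for the spike/LFP models."""
    return x @ W

def distill(lfp, teacher_latents, W_student, lr=0.05, steps=200):
    """Cross-modal representation alignment sketch: gradient steps on an MSE
    loss pulling the LFP student's latents toward the frozen teacher latents."""
    for _ in range(steps):
        z = encode(lfp, W_student)
        grad = 2 * lfp.T @ (z - teacher_latents) / len(lfp)  # d(MSE)/dW
        W_student = W_student - lr * grad
    return W_student

T, d_lfp, d_z = 500, 8, 3
lfp = rng.normal(size=(T, d_lfp))
W_true = rng.normal(size=(d_lfp, d_z))
teacher_latents = lfp @ W_true                # pretend teacher latents are reachable
W_student = rng.normal(size=(d_lfp, d_z))

before = np.mean((encode(lfp, W_student) - teacher_latents) ** 2)
W_student = distill(lfp, teacher_latents, W_student)
after = np.mean((encode(lfp, W_student) - teacher_latents) ** 2)
print(after < before)  # alignment loss shrinks during distillation
```

The same recipe extends to the supervised and multi-session setups by changing what the teacher encodes, while the student's loss stays a latent-alignment term.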
Why this matters:
🧠 LFPs are used in many #BCIs and have high stability, but often underperform for behavior decoding compared with spikes.
❓ We ask: Can we transfer representational knowledge from spike models → LFP models?
✅ Answer: Yes, and the gains are substantial!
25.11.2025 20:15
📢 New in #NeurIPS2025, we present "Cross-Modal Representational Knowledge Distillation for Enhanced Spike-Informed LFP Modeling".
We show that high-fidelity spike transformer models can teach LFP models to substantially enhance LFP decoding. #BCI
🎤 Eray Erturk
🧵 Paper, Code ⬇️
25.11.2025 20:15
Excited for SBIND to support neural image modalities, thus expanding our prior neural-behavioral models:
PSID & DPAD (Nat Neuro 2021 & 2024), IPSID (PNAS 2024), PGLDM (NeurIPS 2024), BRAID (ICLR 2025)
📄 Paper: openreview.net/pdf?id=k4KVh...
💻 Code: github.com/shanechiLab/...
14.07.2025 17:46
Also on public data (👏 to Churchland, Andersen, and Shapiro labs):
✅ Self-attention improves neural-behavior predictions by learning long-range patterns while convolutions learn local ones
✅ Two-stage learning improves behavior prediction by disentangling behaviorally relevant dynamics
14.07.2025 17:46
On public widefield calcium (Churchland lab) and functional ultrasound (Andersen and Shapiro labs) neural imaging data, SBIND outperforms other neural-behavioral models in decoding continuous and categorical behaviors in visual decision-making and memory-guided saccade tasks.
14.07.2025 17:46
SBIND:
✅ Operates directly on raw images & avoids preprocessing.
✅ Combines self-attention and convolutional layers to model both global and local patterns.
✅ Uses two-stage learning of convolutional RNNs (ConvRNNs) to disentangle behaviorally relevant and other neural dynamics.
14.07.2025 17:46
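The local/global split can be illustrated with a toy frame (none of this is SBIND's architecture; the 3×3 kernel, the 4×4 patching, and the untrained Q=K=V attention are stand-ins): convolution summarizes each pixel's neighborhood, while self-attention relates every patch to every other patch in one step.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_local(img, kernel):
    """3x3 'valid' convolution: captures local spatial patterns."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def self_attention(tokens):
    """Single-head self-attention over flattened patches: long-range patterns.
    Untrained sketch with Q = K = V = tokens."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)       # row-wise softmax
    return w @ tokens

frame = rng.normal(size=(16, 16))           # one raw imaging frame
local = conv2d_local(frame, rng.normal(size=(3, 3)))
# split the frame into 16 non-overlapping 4x4 patches, flattened to 16-dim tokens
patches = frame.reshape(4, 4, 4, 4).transpose(0, 2, 1, 3).reshape(16, 16)
global_ = self_attention(patches)
features = np.concatenate([local.ravel(), global_.ravel()])
print(local.shape, global_.shape, features.shape)
```

In the full model these two feature streams feed recurrent layers over time; here they are just concatenated to show the complementary receptive fields.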
📢 New in #ICML2025, we develop SBIND for modeling neural imaging modalities and disentangling their behaviorally relevant dynamics.
SBIND learns local and global spatiotemporal patterns in raw widefield calcium and functional ultrasound neural images.
🎤 M Hoseini
🧵 Paper, Code ⬇️
14.07.2025 17:46
Excited for BRAID to expand our neural-behavioral models: PSID & DPAD (Nat Neurosci 2020 & 2024), PGLDM (NeurIPS 2024), IPSID (PNAS 2024)!
See Parsa Vahidi at #ICLR2025!
📍 Poster Session 5, Hall 3+Hall 2B #57 | Sat 4/26 | 10AM - 12:30PM
📄 openreview.net/forum?id=3us...
💻 github.com/ShanechiLab/...
21.04.2025 19:43
On public motor cortex data during reaching from the Sabes lab, BRAID outperformed several baselines in neural-behavioral predictions by capturing nonlinearity, modeling sensory task instructions as input, and disentangling intrinsic behaviorally relevant neural dynamics.
21.04.2025 19:40
In nonlinear simulations, BRAID accurately disentangled intrinsic neural-behavioral dynamics from input dynamics. In terms of learning the intrinsic dynamics and decoding behavior, BRAID outperformed prior neural-behavioral models, which either don't include input or are linear.
21.04.2025 19:40
BRAID:
✅ Disentangles intrinsic behaviorally relevant neural dynamics from input, neural-specific & behavior-specific dynamics
✅ Captures nonlinearity
It is a multi-stage RNN: each stage learns a subtype of dynamics & combines a predictor network with a generative network to learn intrinsic dynamics.
21.04.2025 19:40
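The staged ordering can be loosely sketched in numpy. BRAID's stages are RNNs with predictor and generative networks; this rank-constrained regression toy, with an invented `stage` helper and simulated data, only shows the sequencing: extract behaviorally relevant factors first (with measured input included as a predictor), then model the remaining neural-specific dynamics from the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(Y_target, X_pred, rank):
    """One learning-stage sketch: rank-constrained regression extracting
    low-dimensional latent factors of X_pred that best predict Y_target."""
    W, _, _, _ = np.linalg.lstsq(X_pred, Y_target, rcond=None)
    U, s, _ = np.linalg.svd(X_pred @ W, full_matrices=False)
    return U[:, :rank] * s[:rank]            # latent factors, shape (T, rank)

T = 300
u = rng.normal(size=(T, 2))                  # measured input (e.g. task instruction)
neural = rng.normal(size=(T, 10))
behavior = neural[:, :3] @ rng.normal(size=(3, 2)) + 0.1 * rng.normal(size=(T, 2))

# Stage 1: prioritize dynamics shared with behavior, input included as predictor
z1 = stage(behavior, np.hstack([neural, u]), rank=2)
# Stage 2: model remaining neural-specific dynamics from the neural residual
resid = neural - z1 @ np.linalg.lstsq(z1, neural, rcond=None)[0]
z2 = stage(resid, resid, rank=2)
print(z1.shape, z2.shape)
```

The point of the ordering is that `z1` is forced to carry the behaviorally relevant part before `z2` mops up the rest, which is what disentangles the two subtypes of dynamics.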
📢 New in #ICLR2025, we present BRAID for input-driven nonlinear dynamical modeling of neural-behavioral data.
BRAID disentangles the intrinsic dynamics shared between modalities from input dynamics and modality-specific dynamics.
🎤 Parsa Vahidi & Omid Sani
@iclr-conf.bsky.social
🧵 Paper & Code ⬇️
21.04.2025 19:40
You can see our poster at #ICLR2025!
📍 Poster Session 1, Hall 3 + Hall 2B, #68 | Thu, Apr 24 | 10 AM - 12:30 PM
Poster: iclr.cc/virtual/2025...
📄💻 Paper and code: openreview.net/pdf?id=mkDam...
17.04.2025 18:54
On public neural data from the mouse head-direction circuit from the Buzsáki lab, PGPCA outperforms baselines across all state dimensions. Also, interestingly, the geometric coordinate outperforms the Euclidean one, showing that the noise around the manifold also follows the same geometry.
17.04.2025 18:54
In simulations, PGPCA recovers the true data distribution and distinguishes between different coordinates (geometric vs. Euclidean) regardless of the manifold state distribution p(z). Also, PGPCA outperforms Probabilistic PCA (PPCA) in modeling data around a nonlinear manifold.
17.04.2025 18:54
PGPCA decomposes the data distribution p(y) into a state distribution on a nonlinear manifold p(z) plus a deviation from the manifold captured by the distribution coordinate K(z). K(z) can be Euclidean or geometric, as we derive. A new algorithm learns the model parameters.
17.04.2025 18:54
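The decomposition can be illustrated with a unit circle as the manifold, f(z) = (cos z, sin z). This hedged numpy toy (the `generate` helper and noise scales are assumptions, and it only generates data; it is not PGPCA's learning algorithm) contrasts the Euclidean coordinate K(z) = I with a geometric K(z) whose axes rotate with the manifold's normal and tangent; the noise is anisotropic so the two coordinates yield genuinely different distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate(z, coord, scales=(0.05, 0.2)):
    """Sample y = f(z) + K(z) w around the unit-circle manifold f(z) = (cos z, sin z),
    with anisotropic deviations w expressed in the chosen distribution coordinate."""
    f = np.stack([np.cos(z), np.sin(z)], axis=1)
    w = rng.normal(size=(len(z), 2)) * np.asarray(scales)
    if coord == "euclidean":
        return f + w                                    # K(z) = I: fixed global axes
    normal = f.copy()                                   # radial direction at f(z)
    tangent = np.stack([-np.sin(z), np.cos(z)], axis=1)
    # geometric K(z): noise axes rotate with the manifold's normal and tangent
    return f + w[:, :1] * normal + w[:, 1:] * tangent

z = rng.uniform(0, 2 * np.pi, size=500)                 # state on the manifold, p(z)
y_geo = generate(z, "geometric")    # small normal noise: data hugs the circle
y_euc = generate(z, "euclidean")    # same noise in fixed axes: larger radial spread
print(y_geo.shape, y_euc.shape)
```

Fitting both coordinate choices and comparing likelihoods, as PGPCA does, is what reveals which geometry the off-manifold noise actually follows.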
New in our #ICLR2025 spotlight ✨, we introduce PGPCA, a Probabilistic Geometric method that enables modeling and dimensionality reduction for data distributed around nonlinear manifolds. We also show PGPCA's application to brain data.
🎤 Han-Lin Hsieh
@iclr-conf.bsky.social
Paper, Code, 🧵 ⬇️
17.04.2025 18:54