
Maryam Shanechi

@maryamshanechi

Sawchuk Chair & Prof at USC Viterbi School of Engineering | Founding Director of USC Center for Neurotech | Developing AI/ML methods & neurotech to decode the brain & treat its conditions πŸ§ πŸ€–πŸ’» https://nseip.usc.edu/

1,213 Followers | 86 Following | 44 Posts | Joined 14.11.2024

Latest posts by Maryam Shanechi @maryamshanechi

πŸš€ Overall, MRINE can advance #BCI by:
πŸ”ΈNonlinearly fusing neural modalities
πŸ”ΈEnabling real-time decoding
πŸ”ΈHandling varied timescales & distributions

πŸ“ Poster Today: Hall C,D,E #2100 | Thurs 4 Dec | 4:30-7:30 PM @neuripsconf.bsky.social

πŸ“ƒ openreview.net/forum?id=jOH...
πŸ–₯️ github.com/ShanechiLab/...

04.12.2025 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

On two public spike-LFP motor cortex datasets during reaching, MRINE:

βœ… Improves real-time behavior decoding via multimodal fusion
βœ… Outperforms recent multimodal neural models

It also:
βœ… Generalizes to a high-dim dataset with Neuropixels spikes and calcium imaging

04.12.2025 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

MRINE tackles these challenges with a multiscale encoder that:

πŸ”Έ Models each modality at its own timescale & forward-predicts its dynamics to infer fast latent factors
πŸ”Έ Nonlinearly fuses modality-specific factors
πŸ”Έ Enables real-time inference via its linear state-space model (SSM) backbone
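
For readers who want the shape of this in code, here is a minimal PyTorch-style sketch of a multiscale encoder with a linear SSM backbone. All module names, dimensions, and the upsampling step are my own illustrative assumptions, not the released MRINE implementation (see the linked GitHub repo for that):

```python
# Illustrative sketch only -- NOT the MRINE implementation.
import torch
import torch.nn as nn

class MultiscaleMultimodalEncoder(nn.Module):
    def __init__(self, spike_dim, lfp_dim, factor_dim, latent_dim):
        super().__init__()
        # Modality-specific encoders, each running at its own timescale.
        self.spike_enc = nn.GRU(spike_dim, factor_dim, batch_first=True)
        self.lfp_enc = nn.GRU(lfp_dim, factor_dim, batch_first=True)
        # Nonlinear fusion of the modality-specific factors.
        self.fuse = nn.Sequential(
            nn.Linear(2 * factor_dim, latent_dim), nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Linear state-space backbone: x_{t+1} = A x_t + B u_t.
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(latent_dim, latent_dim, bias=False)

    def forward(self, spikes, lfp):
        # spikes: (batch, T, spike_dim) sampled fast;
        # lfp:    (batch, T_slow, lfp_dim) sampled slower.
        f_spk, _ = self.spike_enc(spikes)
        f_lfp, _ = self.lfp_enc(lfp)
        # Bring the slow factors up to the fast timescale before fusing.
        f_lfp = nn.functional.interpolate(
            f_lfp.transpose(1, 2), size=spikes.shape[1]).transpose(1, 2)
        u = self.fuse(torch.cat([f_spk, f_lfp], dim=-1))
        # Recursive linear update: fixed cost per new sample.
        x = torch.zeros(spikes.shape[0], u.shape[-1])
        states = []
        for t in range(u.shape[1]):
            x = self.A(x) + self.B(u[:, t])
            states.append(x)
        return torch.stack(states, dim=1)  # fast latent factors over time
```

Because the state update is linear and recursive, each incoming sample updates the latent state at fixed cost, which is what makes real-time decoding feasible.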

04.12.2025 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Brain activity is recorded through modalities like spikes and LFPs. Fusing them can unlock richer neural representations & improve BCI performance and robustness.

But fusion is hard: modalities differ in timescales & distributions, and BCIs require real-time inference.

04.12.2025 16:34 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Can we achieve nonlinear multimodal neural fusion while also enabling real-time recursive decoding for #BCI?

In our third paper at #NeurIPS2025, we present MRINE, which does exactly that β€” improving decoding even for modalities w/ distinct timescales & distributions.

πŸ‘ Eray Erturk
🧡 Paper, Code ⬇️

04.12.2025 16:34 πŸ‘ 12 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

πŸš€ BaRISTA shows how to flexibly encode spatial information toward future foundation models of intracranial brain activity.

πŸ‘ L. Oganesian, S. Hashemi

πŸ“ Poster: Wed Dec 3, 4:30-7:30 PM, Exhibit Hall C,D,E #2011 @neuripsconf.bsky.social

Paper: openreview.net/forum?id=LDj...
Code: github.com/ShanechiLab/...

02.12.2025 20:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

We also show that BaRISTA:

βœ… Scales with increased pretraining data
βœ… Generalizes to held-out subjects
βœ… Can use spatial scales larger than channel level without sacrificing channel reconstruction performance

02.12.2025 20:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

On a public iEEG dataset (Brain Treebank):

βœ… BaRISTA shows that spatial encoding at scales larger than individual channels improves downstream decoding of auditory or visual features

βœ… BaRISTA outperforms state-of-the-art iEEG models given its flexible spatial encoding

02.12.2025 20:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

To address how to encode spatial information, BaRISTA:

πŸ”Ή Flexibly incorporates multiple spatial scales in a transformer model trained with masked latent reconstruction

πŸ”Ή Separates the choice of spatial encoding scale from the masking scale to isolate their effects
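
A toy sketch of that decoupling, with made-up shapes and a signal-level reconstruction loss standing in for the paper's masked latent reconstruction; this illustrates the idea, not the BaRISTA code:

```python
# Illustrative toy, not the BaRISTA code: spatial-encoding scale (regions)
# is chosen independently of the masking scale (channels).
import torch
import torch.nn as nn

class RegionEncoderChannelMask(nn.Module):
    def __init__(self, n_channels, channels_per_region, d_model=64):
        super().__init__()
        self.cpr = channels_per_region
        # Spatial encoding at a scale larger than single channels:
        # one token per (time step, region).
        self.embed = nn.Linear(channels_per_region, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.decode = nn.Linear(d_model, channels_per_region)

    def forward(self, x, channel_mask):
        # x: (batch, T, n_channels); channel_mask: (n_channels,) bool.
        # Masking scale: individual channels, regardless of token scale.
        x = x * (~channel_mask).float()
        b, t, c = x.shape
        regions = x.view(b, t, c // self.cpr, self.cpr)
        tokens = self.embed(regions).flatten(1, 2)  # (b, t*n_regions, d)
        h = self.encoder(tokens)       # position embeddings omitted for brevity
        return self.decode(h).view(b, t, c)  # per-channel reconstruction

# Train by reconstructing the masked channels from unmasked context.
model = RegionEncoderChannelMask(n_channels=8, channels_per_region=4)
x = torch.randn(2, 10, 8)
mask = torch.zeros(8, dtype=torch.bool)
mask[3] = True
loss = ((model(x, mask) - x)[..., mask] ** 2).mean()
```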

02.12.2025 20:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

At what scale should spatial information be encoded toward future foundation models of intracranial brain activity?

In our second paper at #NeurIPS2025, we present BaRISTA β˜• β€” a self-supervised multi-subject model that enables flexible spatial encoding & boosts downstream decoding.

🧡 Paper, Code ⬇️

02.12.2025 20:20 πŸ‘ 8 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

πŸš€ Overall, our cross-modal distillation can enable high-accuracy and stable LFP models for #BCIs.

πŸ‘ Eray Erturk, Saba Hashemi

See Eray at #NeurIPS2025

πŸ“ Poster Session 1, Hall C,D,E #2115 | Wed 3 Dec | 11AM - 2PM @neuripsconf.bsky.social

πŸ“ƒ openreview.net/forum?id=hT7...
πŸ–₯️ github.com/ShanechiLab/...

25.11.2025 20:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Our framework achieves:

βœ… Substantial decoding performance improvement over several LFP-only baselines
βœ… Consistent improvements in unsupervised, supervised & multi-session distillation setups
βœ… Generalization to unseen sessions without additional distillation
βœ… Spike-aligned LFP latent structure

25.11.2025 20:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our framework:

1️⃣ Pretrains a multi-session spike model
2️⃣ Fine-tunes the multi-session spike model on new spike signals
3️⃣ Trains the Distilled LFP model via cross-modal representation alignment

πŸ”₯ This produces spike-informed LFP models with significantly improved decoding.
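
A minimal sketch of what step 3 could look like, with a frozen spike teacher and an MSE alignment loss; the encoder choices and loss here are assumptions on my part, not the paper's exact recipe:

```python
# Hypothetical sketch of step 3, not the paper's exact recipe: a frozen,
# pretrained spike encoder (teacher) guides an LFP encoder (student).
import torch
import torch.nn as nn

spike_dim, lfp_dim, latent_dim = 96, 32, 16

# Steps 1-2: assume the spike model is already pretrained across sessions
# and fine-tuned on the new session's spikes; freeze it as the teacher.
spike_model = nn.GRU(spike_dim, latent_dim, batch_first=True)
for p in spike_model.parameters():
    p.requires_grad = False

# Step 3: train the LFP student by aligning its representations
# to the teacher's, using paired spike-LFP recordings.
lfp_model = nn.GRU(lfp_dim, latent_dim, batch_first=True)
opt = torch.optim.Adam(lfp_model.parameters(), lr=1e-3)

spikes = torch.randn(8, 100, spike_dim)  # paired data from one session
lfp = torch.randn(8, 100, lfp_dim)

z_spike, _ = spike_model(spikes)  # teacher representations
z_lfp, _ = lfp_model(lfp)         # student representations
loss = nn.functional.mse_loss(z_lfp, z_spike)  # alignment loss
loss.backward()
opt.step()
# At test time only the distilled LFP model is needed, so decoding keeps
# LFP's stability while inheriting spike-informed representations.
```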

25.11.2025 20:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Why this matters:

🧠 LFPs are used in many #BCIs and have high stability, but often underperform for behavior decoding compared with spikes.

❓ We ask: Can we transfer representational knowledge from spike models β†’ LFP models?

βœ… Answer: Yes β€” and the gains are substantial!

25.11.2025 20:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸŽ‰ New in #NeurIPS2025, we present β€œCross-Modal Representational Knowledge Distillation for Enhanced Spike-Informed LFP Modeling”.

We show that high-fidelity spike transformer models can teach LFP models to substantially enhance LFP decoding. #BCI

πŸ‘ Eray Erturk
🧡 Paper, Code ⬇️

25.11.2025 20:15 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Excited for SBIND to support neural imaging modalities, expanding our prior neural-behavioral models:
PSID & DPAD (Nat Neurosci 2021 & 2024), IPSID (PNAS 2024), PGLDM (NeurIPS 2024), BRAID (ICLR 2025)

πŸ“œ Paper: openreview.net/pdf?id=k4KVh...
πŸ’» Code: github.com/shanechiLab/...

14.07.2025 17:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Also on public data (πŸ™ to the Churchland, Andersen, and Shapiro labs):
βœ… Self-attention improves neural-behavior predictions by learning long-range patterns while convolutions learn local ones
βœ… Two-stage learning improves behavior prediction by disentangling behaviorally relevant dynamics

14.07.2025 17:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

On public widefield calcium (Churchland lab) and functional ultrasound (Andersen and Shapiro labs) neural imaging data, SBIND outperforms other neural-behavioral models in decoding continuous and categorical behaviors in visual decision-making and memory-guided saccade tasks.

14.07.2025 17:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

SBIND:
βœ… Operates directly on raw images & avoids preprocessing.
βœ… Combines self-attention and convolutional layers to model both global and local patterns.
βœ… Uses two-stage learning of convolutional RNNs (ConvRNNs) to disentangle behaviorally relevant and other neural dynamics.
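
Here is a hypothetical sketch of how convolution, self-attention, and recurrent dynamics can fit together on raw frames; the layer choices and sizes are my assumptions, not the SBIND architecture:

```python
# Hypothetical sketch, not the SBIND code: convolutions for local spatial
# patterns, self-attention for long-range ones, an RNN for dynamics.
import torch
import torch.nn as nn

class ConvAttnDynamics(nn.Module):
    def __init__(self, d_model=32, n_latent=8, n_behavior=2):
        super().__init__()
        # Convolutions: local spatial structure in raw frames.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, d_model),
        )
        # Self-attention: long-range temporal structure.
        self.attn = nn.MultiheadAttention(d_model, num_heads=4,
                                          batch_first=True)
        # Recurrence: latent neural dynamics; linear behavior readout.
        self.rnn = nn.GRU(d_model, n_latent, batch_first=True)
        self.readout = nn.Linear(n_latent, n_behavior)

    def forward(self, frames):
        # frames: (batch, T, H, W) raw images, no preprocessing.
        b, t, h, w = frames.shape
        feat = self.conv(frames.reshape(b * t, 1, h, w)).view(b, t, -1)
        feat, _ = self.attn(feat, feat, feat)
        z, _ = self.rnn(feat)
        return self.readout(z)  # per-step behavior prediction
```

In a two-stage scheme, a first network like this would be trained on behavior prediction to capture the behaviorally relevant dynamics, and a second stage would model the residual neural dynamics.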

14.07.2025 17:46 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸŽ‰ New in #ICML2025, we develop SBIND for modeling neural imaging modalities and disentangling their behaviorally relevant dynamics.

SBIND learns local and global spatiotemporal patterns in raw widefield calcium and functional ultrasound neural images.

πŸ‘M Hoseini
🧡Paper, Code⬇️

14.07.2025 17:46 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Excited for BRAID to expand our neural-behavioral models: PSID & DPAD (Nat Neurosci 2021 & 2024), PGLDM (NeurIPS 2024), IPSID (PNAS 2024)!

See Parsa Vahidi at #ICLR2025!

πŸ“Poster Session 5, Hall 3+Hall 2B #57 | Sat 4/26 | 10AM - 12:30PM

πŸ“œ openreview.net/forum?id=3us...
πŸ’» github.com/ShanechiLab/...

21.04.2025 19:43 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

On public motor cortex data during reaching from the Sabes lab, BRAID outperformed several baselines in neural-behavioral predictions by capturing nonlinearity, modeling sensory task instructions as input, and disentangling intrinsic behaviorally relevant neural dynamics.

21.04.2025 19:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

In nonlinear simulations, BRAID accurately disentangled intrinsic neural-behavioral dynamics from input dynamics. In terms of learning the intrinsic dynamics and decoding behavior, BRAID outperformed prior neural-behavioral models, which either don’t include input or are linear.

21.04.2025 19:40 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

BRAID

βœ… Disentangles intrinsic behaviorally relevant neural dynamics from input, neural-specific & behavior-specific dynamics
βœ… Captures nonlinearity

It is a multi-stage RNN: each stage learns a subtype of dynamics & combines a predictor network w/ a generative network to learn intrinsic dynamics.
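
A hypothetical one-stage sketch of that predictor/generative pairing with an external input u_t; the notation and layer choices are mine, not the BRAID code:

```python
# Hypothetical one-stage sketch, not the BRAID code: a predictor network
# filters observations into latents; a generative network models the
# intrinsic dynamics driven by the measured input u_t.
import torch
import torch.nn as nn

class BRAIDLikeStage(nn.Module):
    def __init__(self, ny, nu, nx, nb):
        super().__init__()
        # Predictor: infer latent states from neural data + input.
        self.predictor = nn.GRU(ny + nu, nx, batch_first=True)
        # Generative: intrinsic dynamics x_{t+1} = g(x_t, u_t).
        self.generative = nn.Sequential(
            nn.Linear(nx + nu, nx), nn.Tanh(), nn.Linear(nx, nx))
        self.readout = nn.Linear(nx, nb)  # behavior readout

    def forward(self, y, u):
        x, _ = self.predictor(torch.cat([y, u], dim=-1))
        # One-step-ahead latents via the intrinsic (generative) dynamics.
        x_pred = self.generative(torch.cat([x[:, :-1], u[:, :-1]], dim=-1))
        return self.readout(x), x_pred, x[:, 1:]

# Stage 1 would train on behavior prediction (behaviorally relevant,
# input-driven dynamics); later stages fit residual neural-specific and
# behavior-specific dynamics, giving the disentanglement described above.
behavior, x_pred, x_true = BRAIDLikeStage(30, 3, 8, 2)(
    torch.randn(4, 50, 30), torch.randn(4, 50, 3))
loss = nn.functional.mse_loss(x_pred, x_true.detach())
```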

21.04.2025 19:40 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

πŸŽ‰ New in #ICLR2025, we present BRAID for input-driven nonlinear dynamical modeling of neural-behavioral data.

BRAID disentangles the intrinsic dynamics shared between modalities from input dynamics and modality-specific dynamics.

πŸ‘ Parsa Vahidi & Omid Sani
@iclr-conf.bsky.social

🧡, Paper & Code ⬇️

21.04.2025 19:40 πŸ‘ 10 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

You can see our poster at #ICLR2025!

πŸ“ Poster Session 1, Hall 3 + Hall 2B, #68 | Thu, Apr 24 | 10 AM - 12:30 PM

Poster: iclr.cc/virtual/2025...
πŸ“œ πŸ’» Paper and code: openreview.net/pdf?id=mkDam...

17.04.2025 18:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

On public neural data from the mouse head-direction circuit (BuzsΓ‘ki lab), PGPCA outperforms baselines across all state dimensions. Interestingly, the geometric coordinate outperforms the Euclidean one, showing that the noise around the manifold follows the manifold's geometry.

17.04.2025 18:54 πŸ‘ 3 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

In simulations, PGPCA recovers the true data distribution and distinguishes between different coordinates (geometric vs. Euclidean) regardless of the manifold state distribution p(z). Also, PGPCA outperforms Probabilistic PCA (PPCA) in modeling data around a nonlinear manifold.

17.04.2025 18:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

PGPCA decomposes the data distribution p(y) into a state distribution on a nonlinear manifold p(z) plus a deviation from the manifold captured by the distribution coordinate K(z). K(z) can be Euclidean or geometric, as we derive. A new algorithm learns the model parameters.
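
In symbols, one plausible reading of this decomposition (my notation; see the paper for the exact formulation):

```latex
% One plausible reading of the decomposition (my notation; the paper's
% exact equations may differ):
\[
  p(y) = \int p(y \mid z)\, p(z)\, dz,
  \qquad
  y = f(z) + K(z)\, v, \quad v \sim \mathcal{N}(0, \Sigma),
\]
% f(z): the point on the nonlinear manifold for state z;
% p(z): the state distribution on the manifold;
% K(z): the distribution coordinate for deviations off the manifold --
% constant for the Euclidean choice, aligned with the manifold's local
% tangent/normal directions for the geometric choice.
```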

17.04.2025 18:54 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

New in our #ICLR2025 spotlight ✨, we introduce PGPCA, a Probabilistic Geometric method that enables modeling and dimensionality reduction for data distributed around nonlinear manifolds. We also show PGPCA’s application to brain data.

πŸ‘ Han-Lin Hsieh
@iclr-conf.bsky.social
Paper, Code, πŸ§΅β¬‡οΈ

17.04.2025 18:54 πŸ‘ 20 πŸ” 6 πŸ’¬ 1 πŸ“Œ 0