I don't think there's enough time for the poster presentation. Such good posters and so little time!
#cosyne2026
Happy to see you at #cosyne2026 to show you the unrolled version!
[3-221] | March 14 | 1:45pm - 4:15pm |
Excited to give a talk at #Cosyne2026 about my PhD work!
We show that RNNs trained on visual search converge on brain-like solutions, producing primate-like behavior and neural representations. Happy to chat if you're at Cosyne!
March 15, 2026
Lisbon, Portugal
www.biorxiv.org/content/10.1...
New PhD and post-doc job openings!
Join me and Prof. Nina Kazanina @ Uni Geneva, Switzerland, to take part in an exciting project on relations and binding in language and vision, explored with cutting-edge neurophysiology (#iEEG and MEG).
Full details in the job offer below.
New preprint from the lab! Ábel Ságodi developed a theory of approximating dynamical systems that goes beyond finite time. #theoreticalNeuroscience
Follow @neurabel.bsky.social
Universal Approximation Theorems for Dynamical Systems with Infinite-Time Horizon Guarantees. arxiv.org/abs/2602.08640
Excited to share that our paper has just been published in Chaos.
Here, we introduce Network Reciprocity Control (NRC) #algorithms that make it possible to systematically tune #reciprocity and directional #asymmetry in #networks while preserving density or total weight.
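For readers unfamiliar with the quantity being controlled: reciprocity of a directed network is the fraction of directed edges whose reverse edge also exists. A minimal sketch of that measure (my own illustration, not the paper's NRC code):

```python
# Reciprocity of a directed graph: fraction of directed edges (u, v)
# for which the reverse edge (v, u) is also present.

def reciprocity(edges):
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    reciprocated = sum(1 for (u, v) in edge_set if (v, u) in edge_set)
    return reciprocated / len(edge_set)

# One mutual pair plus one one-way edge: 2 of 3 edges are reciprocated.
g = [(0, 1), (1, 0), (1, 2)]
print(reciprocity(g))  # 2/3
```

An NRC-style algorithm would then rewire or reweight edges to push this quantity toward a target value while holding density or total weight fixed.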
Labeled cortical neurons projecting to the preBötC, the central pattern generator for breathing
A projection atlas of excitatory and inhibitory inputs to the preBötzinger Complex: substrates for multimodal breathing control
www.biorxiv.org/content/bior...
Calling all scientists: Support your Iranian colleagues www.nature.com/articles/d41...
7 days of digital black-out in Iran. One week of darkness...
Approximately 20,000 innocent, empty-handed protesters were killed!
Over 12,000 people peacefully protesting against the current Iranian regime have been killed in just one week. These are among the darkest days of 2026. Shame on every country that recognizes the Iranian regime as a legitimate government that murders its own people.
I left Iran on Jan 9th. I was there visiting my family when the protests and strikes began.
I left the country on one of the last flights before the digital blackout in Iran.
I hugged my family and said goodbye at the airport and haven't heard from them since. There is zero connection.
I returned from Iran yesterday on one of the last flights. The situation in Iran is unimaginable.
There are unofficial reports that over 2000 protestors have been killed by the regime in the span of two nights, while the internet is cut off. The world must see this and must do something! #iran
Can we study brain dynamics with fMRI?
We employed Switching Linear Dynamical Systems to investigate the dynamics of resting-state networks
They are dynamic, not static!
Work with Xiaoyu Zhao with lots of new methods.
Thread.
#neuroskyence
www.biorxiv.org/content/10.6...
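For context on the model class: a switching linear dynamical system alternates between a small set of linear regimes under a Markov switching process. A toy sketch with made-up matrices and switching probability (not the preprint's actual model or fits):

```python
# Toy switching linear dynamical system (SLDS): the 2D state evolves
# under one of two linear maps, and the active regime switches
# stochastically (Markov) between steps. Noise is omitted for clarity.
import random

random.seed(0)

A = {
    0: [[0.9, 0.0], [0.0, 0.9]],    # regime 0: uniform decay
    1: [[0.0, -1.0], [1.0, 0.0]],   # regime 1: 90-degree rotation
}
P_STAY = 0.95  # probability of remaining in the current regime

def step(A_z, x):
    return [A_z[0][0] * x[0] + A_z[0][1] * x[1],
            A_z[1][0] * x[0] + A_z[1][1] * x[1]]

def simulate(T, x0=(1.0, 0.0)):
    z, x, traj = 0, list(x0), []
    for _ in range(T):
        x = step(A[z], x)
        traj.append((z, tuple(x)))
        if random.random() > P_STAY:  # Markov switch between regimes
            z = 1 - z
    return traj

traj = simulate(50)
print(traj[0])
```

Fitting such a model to resting-state data means inferring both the regime sequence and the per-regime dynamics, which is what lets one ask whether networks are static or switching.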
Happy that our article about mu & alpha rhythm waveform shape in development is now finally out in the open: doi.org/10.1162/jocn...
Oscillation frequency changes across development (one of the most robust findings in the oscillation world). In this work, we also look at waveform shape changes.
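One common waveform-shape measure in this literature is rise-decay asymmetry: the fraction of a cycle spent rising from trough to peak. A generic sketch of that idea (my own illustration, not the paper's pipeline):

```python
# Rise-decay asymmetry of a single oscillation cycle, sampled from
# trough to trough: fraction of the cycle spent rising to the peak.
# A value of 0.5 indicates a symmetric cycle.

def rise_decay_asymmetry(cycle):
    peak_idx = max(range(len(cycle)), key=lambda i: cycle[i])
    rise = peak_idx           # samples from trough to peak
    total = len(cycle) - 1    # samples spanning the whole cycle
    return rise / total

# A sawtooth-like cycle: fast rise, slow decay -> asymmetry well below 0.5.
cycle = [0.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0]
print(rise_decay_asymmetry(cycle))
```

Tracking how such shape measures change across development is a complementary question to the well-known frequency changes.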
Very excited about this new work from the omnipotent Owen, with me and Ashok Litwin-Kumar! Can we reconcile low- and high-dimensional activity in neural circuits by recognizing that these circuits ~multitask~?
(Plausibly, yes!)
Thanks Andrew!
I have successfully defended my PhD. You may address all correspondence to me as "Dr."!
Happy to see more functionality arising from the balanced state. It also plays a critical role in ongoing rhythmic activity.
pmc.ncbi.nlm.nih.gov/articles/PMC...
Can we achieve nonlinear multimodal neural fusion while also enabling real-time recursive decoding for #BCI?
In our third paper at #NeurIPS2025, we present MRINE, which does exactly that, improving decoding even for modalities with distinct timescales & distributions.
Led by Eray Erturk.
Paper & code in the thread below.
This paper deserved its spot in my special @zotero.org collection:
Dynamical principles in neuroscience
Oldie but goodie.
doi.org/10.1103/RevM...
#neuroskyence
Postdoc & PhD positions
How intrinsic motivation underlies intelligent, open-ended and embodied behavior in natural and artificial agents
Apply if interested
sites.google.com/view/morenob...
Cool PI, great lab, vibrant city for science! What else does one need?
Our group is at NeurIPS and EurIPS this year with four papers and one workshop poster. If you are curious about SBI with autoML, with foundation models, or on function spaces, or about differentiable simulators with Jaxley, have a look below. 1/11
We recognise that there are many reasons authors might not be able to cover a publication fee. To ensure that it isn't a barrier to publication, we offer a simple way for authors to apply for a fee waiver.
Find out more in our author guide: buff.ly/PAVE39b.
I still see much focus on single-factor coding in studies of medial temporal cortex, although our results in primate hippocampus indicate a more mixed code.
www.biorxiv.org/content/10.1...
This is not a satisfactory answer. I can also say that we can get low-d dynamics from high-d systems, given low-d inputs and task complexity!
The burden of proof is on the manifold hypothesis!
A growing number of studies point to the high dimensionality of the observed dynamics under more naturalistic conditions.
www.nature.com/articles/nat...
www.biorxiv.org/content/10.1...
www.nature.com/articles/s41...
www.cell.com/neuron/fullt...
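A standard way to quantify "dimensionality" in this debate is the participation ratio of the activity covariance spectrum, PR = (Σλ)² / Σλ². A hedged sketch with made-up spectra, not numbers from the linked papers:

```python
# Participation ratio: an effective dimensionality of a covariance
# spectrum. Equal variance across N modes gives PR = N; variance
# concentrated in one mode gives PR close to 1.

def participation_ratio(eigenvalues):
    s = sum(eigenvalues)
    s2 = sum(v * v for v in eigenvalues)
    return (s * s) / s2

# Variance spread equally over 4 modes -> PR = 4 (high-dimensional).
print(participation_ratio([1.0, 1.0, 1.0, 1.0]))  # 4.0
# Variance dominated by 1 mode -> PR near 1 (low-dimensional).
print(participation_ratio([1.0, 0.01, 0.01, 0.01]))
```

Estimates like this are why naturalistic datasets, with richer inputs, tend to report higher effective dimensionality than constrained tasks.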
A figure on spike variability across different input characteristics, from Gerstner's book.
This is slightly different; nevertheless, it is a very good paper.
But I always come back to this study (Mainen and Sejnowski, 1995) for its emphasis on spike timing.
Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!
arxiv.org/pdf/2410.03972
It started from a question I kept running into:
When do RNNs trained on the same task converge/diverge in their solutions?
Thread below.