Are you at #cosyne2026 and looking for a postdoc position? CATNIP = Neural Dynamics Lab is hiring! catniplab.github.io/postdoc-hiri...
I tracked every keyword in 22 years of Cosyne abstracts to map how computational neuroscience evolved, from Bayesian brains to neural manifolds to LLMs, and where it's heading next.
A paid summer internship is available at Microsoft Research NYC for a *PhD student later in their degree*, with experience designing human questionnaires/experiments, collecting data on Prolific, running statistical analyses, and writing papers. If you know exceptional candidates, please ping me.
New paper hot off the (pre-)press! We dig into the evolutionary origins of neural computations for behavioral control across mice, monkeys, and humans: www.biorxiv.org/content/10.6....
As our lab's first foray into comparative analysis of neural dynamics, I'm super excited about this work! 1/18
Join us at TENSS 2026 to open black boxes, explain how things/brains work, and debate the impact (or lack of it) of various new technologies on our understanding of the brain and on society. tenss.ro Apply by March 15th!
Excited to announce our new study, in which we convert mouse neural responses into natural-language (English) descriptions of odorants. Comments and suggestions are highly appreciated, as always: www.biorxiv.org/content/10.6...
ah, the perfect motivation for my fruit fly modeling grants!
#neuroAI #neurojobs #compneuro #bioAI #gradschool @cshlaboratory.bsky.social @tonyzador.bsky.social
The Cowley Group at CSHL has an opening for a bioAI PhD student to start Fall 2026 to work on closed-loop AI models for visual processing (see below). You *must* have a Master's degree in a quant/eng/cs field.
www.cshl.edu/phd-program/...
Please reach out to me if interested!
I'll be at Cosyne.
DNN models of the brain are getting bigger. Are we replacing one complicated system in vivo with another in silico?
In new work, we seek the *smallest* DNN models of visual cortex, balancing prediction with parsimony.
It turns out these compact models are surprisingly small!
rdcu.be/e5H8G
New paper out!!!
We showed that, like humans and rats, the bird visual system is tuned to specific pixel correlations from hatching: royalsocietypublishing.org/rspb/article...
RIP redundancy reduction?
Beautiful work by Liu & colleagues showing that neural redundancy increases with learning, as predicted by a Bayesian model:
www.science.org/doi/10.1126/...
Proud to be a collaborator on this paper on compact deep neural network models of V4, with Ben Cowley (@benjocowley.bsky.social), Pati Stan, & Matt Smith, now finally out in print (by which I mean online).
Psyched to announce our COSYNE workshop on social behaviors (Mar 17th, Cascais). We have a stellar lineup of speakers on topics from animal cooperation and aggression to the social dynamics of LLM agents.
Co-organized with Libby Zhang (Allen Institute + UW).
cosyne-social-behavior.github.io
Can electrical microstimulation be used to steer cortical population activity on- and off-manifold? Our new preprint says yes, using data-driven control in macaque PFC. Joint work with @gbarzon.bsky.social, Anandita De, Isaac Moran, Conner Carnahan, and Luca Mazzucato.
The deadline to apply for the Brain Prize Cajal summer course in Computational Neuroscience has been extended to March 9! We're excited for you to join us in sunny Lisbon! Please do not hesitate to send in an application and learn about computational neuroscience! @gjorjulijana.bsky.social
@cshlnews.bsky.social @princetonneuro.bsky.social
@cmu-neuroscience.bsky.social
#neuroAI #compneuro #neuroscience #visualcortex #closedloop #activelearning #modelcompression #distillation #pruning
www.cshl.edu/ai-monkey-br...
Thanks to Jon Hamilton of NPR's All Things Considered for the interview!
www.npr.org/2026/02/25/n...
@npr.org #AllThingsConsidered #JonHamilton
Thanks to CV Starr, Pershing Square Innovation Fund, Simons Foundation, NIH, and NIH BRAIN Initiative for funding.
Data and code:
github.com/cowleygroup/...
doi.org/10.1184/R1/3...
We hope to add our V4 data to BrainScore soon!
Thanks to my wonderful collaborators:
Pati Stan (CMU)
Jonathan Pillow (Princeton) @jpillowtime.bsky.social
Matthew Smith (CMU)
This work has inspired me and my research group at CSHL to continue hunting for the brain's step-by-step computations, both with closed-loop experiments and with model compression.
"Compact deep neural network models of the visual cortex." B. Cowley, P. Stan, J. Pillow*, M. Smith*. Nature, 2026.
How do you build a V4 dot detector?
We dissected the compact model, finding a simple computation for dot size selectivity:
Search for corners of a dot while inhibiting large edges. If the activity overlaps *and* inhibition is low, there must be a small dot.
Future work: Map out these circuits!
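To make the corner-plus-inhibition idea concrete, here's a toy NumPy sketch. All filters and thresholds are made up for illustration (not the paper's actual weights): four corner detectors whose joint activation signals a small dot, gated off when a long-edge detector fires.

```python
import numpy as np

def conv2d_valid(img, k):
    """Plain 'valid' 2-D cross-correlation."""
    H, W = img.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + h, j:j + w] * k).sum()
    return out

# Made-up filters: four corner detectors for a small bright dot,
# plus a long-bar detector whose output is used for inhibition.
tl = np.array([[-1, -1], [-1, 1]])   # bright pixel below-right of dark surround
tr = np.array([[-1, -1], [1, -1]])
bl = np.array([[-1, 1], [-1, -1]])
br = np.array([[1, -1], [-1, -1]])
long_edge = np.ones((7, 1))          # responds strongly to long vertical bars

def dot_response(img):
    relu = lambda z: np.maximum(z, 0.0)
    # "activity overlaps": all four corner types must be present (min = AND);
    # precise spatial alignment is skipped here for brevity
    corners = min(relu(conv2d_valid(img, k)).max() for k in (tl, tr, bl, br))
    inhibition = relu(conv2d_valid(img, long_edge)).max()
    return corners * (inhibition < 4.0)  # respond only if edge inhibition is low
```

A small 3x3 dot activates all four corner filters while barely driving the long-edge detector, so it passes the gate; a full-length bar or a large blob drives the long-edge detector hard and is suppressed.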
What can the compact models tell us about feature processing in V4?
One class of V4 neurons that stood out was "dot detectors." Perhaps they're there to build up "eye" detectors in IT?
We focused on a single V4 neuron dot detector whose response-maximizing images were...dots.
(a side note --- interestingly, DNN units from ResNet50-robust were *NOT* compressible. Perhaps these units have to do too much with too little.)
Each V4 neuron had unique feature selectivity. Can we compress a model predicting all 200 V4 neurons at once?
Yes, yes we could.
Our ensemble model was compressible.
ResNet50-robust was compressible.
V1, V4, IT populations were compressible.
Perhaps a V4 neuron is simpler than once thought.
My favorite experiment was using a compact model to optimize slight perturbations of an image's pixels that either excite or suppress the neuron's response.
These experiments gave us confidence that the inner workings of the compact models likely matched those of real V4 neurons.
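In spirit, that experiment is gradient-based: nudge pixels up or down the model's response gradient under a small budget. A toy NumPy sketch with a hypothetical one-ReLU-unit stand-in model (the real experiments used the compact DNN):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the compact model: a single ReLU unit over pixels.
w = rng.normal(size=(16, 16))

def respond(img):
    return max(float((w * img).sum()), 0.0)

def perturb(img, sign=+1, eps=0.05, steps=50, lr=0.01):
    """Ascend (sign=+1) or descend (sign=-1) the response gradient,
    keeping the perturbation within an L-infinity budget eps."""
    x = img.copy()
    for _ in range(steps):
        grad = w if (w * x).sum() > 0 else np.zeros_like(w)  # ReLU gradient
        x = x + sign * lr * grad
        x = img + np.clip(x - img, -eps, eps)                # project to budget
    return x
```

Starting from an image the unit already responds to, `sign=+1` excites and `sign=-1` suppresses the model's response while the perturbed image stays visually close to the original.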
We went to work interrogating these compact models. And things got weird.
For example, we found a "palm tree" detecting V4 neuron whose response-maximizing natural and synthesized images were palm trees?!
To make sure this was real, we performed validation experiments.
A compact model is small enough to display *all* of its convolutional weights in one diagram!
To apply Occam's Razor, we used two types of model compression:
knowledge distillation: train a student model via a teacher model
pruning: remove any spurious filters
The result: compact models 5,000x smaller than our ensemble model but with similar predictive power.
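A minimal NumPy sketch of the two compression steps, using a toy random-feature model in place of the DNNs (all names and sizes are hypothetical): distill a small student by fitting it to the teacher's outputs, then prune features with negligible readout weight.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# Hypothetical stand-ins: a wide random-feature "teacher" plays the role of
# the large ensemble model; the "student" has far fewer features.
D, H_teacher, H_student = 20, 256, 8
W_t = rng.normal(size=(H_teacher, D))
v_t = rng.normal(size=H_teacher) / H_teacher
teacher = lambda X: relu(X @ W_t.T) @ v_t           # frozen teacher responses

# Knowledge distillation: fit the student's readout to the *teacher's*
# outputs (not the neural data) on a batch of random "images".
W_s = rng.normal(size=(H_student, D))               # student features (fixed here)
X = rng.normal(size=(2000, D))
Phi = relu(X @ W_s.T)
v_s, *_ = np.linalg.lstsq(Phi, teacher(X), rcond=None)

# Pruning: drop student features whose readout weight is negligible.
keep = np.abs(v_s) >= 0.1 * np.abs(v_s).max()
v_pruned = np.where(keep, v_s, 0.0)
student = lambda X: relu(X @ W_s.T) @ v_pruned      # the compact model
```

The design mirrors the post: the student never sees the original training targets, only the teacher's responses, and pruning then strips whatever capacity the distilled fit didn't actually use.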