Lindsay Lab - Postdoc Position
Artificial neural networks applied to psychology, neuroscience, and climate change
Spread the word: I'm looking to hire a postdoc to explore the concept of attention (as studied in psych/neuro, not the transformer mechanism) in large Vision-Language Models. More details here: lindsay-lab.github.io/2025/12/08/p...
#MLSky #neurojobs #compneuro
08.12.2025 23:53
[moments after creating my AI digital clone]
ME: alright clone, do my chores
CLONE-ME: no
US: my god, it worked
30.10.2025 13:02
Here at the #Indian academy of #neuroscience meeting in the beautiful city of Kovalam in Kerala share.google/EzLqUeTwCeDw... ♥️
@sbaulac.bsky.social is giving the opening talk on brain mosaicism in epilepsy and cortical malformations #epilepsy.
29.10.2025 08:41
Excited to have 2 papers accepted at @neuripsconf.bsky.social 2025 - Reliable ML Workshop.
Paper #1: openreview.net/forum?id=9Ue...
Paper #2: openreview.net/forum?id=0MW...
I'll briefly summarize the work below.
(Links above if you want to get right to the papers)
THREAD 🧵
Please RT.
(1/N)
24.10.2025 17:08
Thanks Krishna!
26.10.2025 13:01
ALT: a little girl is sliding down an orange slide with the words ok bye written on it.
I'll stop here.
Both these papers were led by @simran-ketha.bsky.social, an extraordinary Ph.D. student advised by me.
Watch out for many new (and yes exciting!) results that we hope to share soon.
(N/N)
\END
24.10.2025 17:25
We asked if we could similarly leverage the subspace geometry to obtain robustness to adversarial attacks. We built variants of MASC that ended up being up to ~3x more robust than the model itself, even though both use the same initial substrate.
More here: openreview.net/pdf?id=0MWW5...
(10/N)
24.10.2025 17:25
ALT: a cartoon of a monkey sitting on a rock meditating
Defending against adversarial attacks has become important. Effective defenses usually involve adversarial training, which is expensive, or other approaches that modify standard training paradigms.
But what if the key to defending against adversarial attacks lies within?
(9/N)
24.10.2025 17:20
So, what's an adversarial attack?
It turns out that with typical Deep Networks, a malicious adversary can change inputs (e.g., images) so that the Deep Net classifies them as something else. See the example image below from blog.mi.hdm-stuttgart.de/index.php/20...
(8/N)
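A minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method (FGSM), on a simple logistic-regression classifier. The model, weights, and numbers here are illustrative assumptions, not from the papers in this thread:

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method on a logistic-regression classifier.

    Perturbs x by eps in the direction that increases the loss:
        x_adv = x + eps * sign(d loss / d x)
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted probability of class 1
    grad_x = (p - y) * w                    # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

# A point correctly classified as class 1 (score x @ w + b > 0) ...
w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([0.4, 0.4]), 1.0
x_adv = fgsm(x, w, b, y, eps=0.5)
# ... can be pushed across the decision boundary by a small perturbation.
```

Even this tiny example shows the core mechanism: the attack follows the sign of the loss gradient, so a bounded per-coordinate change can flip the prediction.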
24.10.2025 17:12
Paper #2 w/ @simran-ketha.bsky.social, @mummani-nuthan.bsky.social & @niranjanrajesh.bsky.social.
Here we considered the setting of adversarial attacks.
(7/N)
24.10.2025 17:12
We don't know why this works so well.
Indeed, it is reminiscent of some neuroscience experiments, where animals sometimes show significantly poorer behavioral performance than what one can linearly decode from a handful of their neurons.
Paper: openreview.net/pdf?id=9Uen9...
(6/N)
24.10.2025 17:12
Surprisingly, we find that this works extraordinarily well.
For every model tested, MASC beats the model's test accuracy on at least one layer, often by a significant margin (see table below), especially when the degree of label corruption is high.
(5/N)
24.10.2025 17:12
More technically, we fitted subspaces, one per class, to the *corrupted* layerwise outputs. For each incoming point, we asked which subspace it is closest to, in the angle sense, and predicted its label to be that of the corresponding class. We call this the MASC classifier.
(4/N)
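A minimal sketch of the idea as described above, assuming each class subspace passes through the origin and is fit by SVD of that class's layer activations; the function names and these details are illustrative and may differ from the papers' actual method:

```python
import numpy as np

def fit_subspaces(feats, labels, dim):
    """Fit a dim-dimensional linear subspace (via SVD) to each class's
    layerwise features, with each subspace through the origin."""
    bases = {}
    for c in np.unique(labels):
        X = feats[labels == c]
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        bases[c] = Vt[:dim].T  # columns form an orthonormal subspace basis
    return bases

def masc_predict(x, bases):
    """Predict the class whose subspace x is closest to in angle, i.e.
    whose projection retains the largest fraction of x's norm."""
    scores = {c: np.linalg.norm(B.T @ x) / np.linalg.norm(x)
              for c, B in bases.items()}
    return max(scores, key=scores.get)
```

The score here is the cosine of the angle between the point and its projection onto the class subspace, so the argmax is exactly "nearest subspace in the angle sense."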
24.10.2025 17:08
We looked into the internals of Deep Networks to see if we could extract much better generalization (i.e., accuracy on unseen data).
Specifically, we looked at the geometry of class-wise internal representations and asked whether it was organized in a manner that allows for better generalization.
(3/N)
24.10.2025 17:08
ALT: a group of children are sitting at their desks in a classroom covering their faces with their hands.
Paper #1 w/ @simran-ketha.bsky.social.
We consider the setting where the training data has label noise. That is, with some probability, each training point has its label shuffled.
Here, Deep Nets *memorize*, i.e., they can perfectly rote-learn the training data but do badly on unseen data.
(2/N)
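The label-noise setup can be sketched as follows. This uses uniformly random relabeling, one common way to instantiate "label shuffling"; the function name and noise rate `p` are illustrative, not from the papers:

```python
import numpy as np

def corrupt_labels(y, p, num_classes, seed=0):
    """With probability p, replace each training label with a uniformly
    random class; the rest of the labels are left untouched."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    flip = rng.random(len(y)) < p  # which points get corrupted
    y[flip] = rng.integers(0, num_classes, size=int(flip.sum()))
    return y

# e.g., corrupt 40% of CIFAR-style labels:
# y_noisy = corrupt_labels(y_train, p=0.4, num_classes=10)
```

Training on `y_noisy` while evaluating on clean held-out labels is the standard way to expose the memorization behavior described above.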
24.10.2025 17:08
Mice navigate scent trails using predictive policies
Animals actively sense their environment to extract features of interest to guide behaviors. For mammals, odors are prominent environmental features which are sampled by active modulation of sniffing ...
Thrilled to share this work, long time in the making! Carried through creatively by @siddjakes.bsky.social after initial design & piloting by @trackingskills.bsky.social, with help from @trackingactions.bsky.social. Modeling in collaboration with Massimo Vergassola & Nicola Rigolli.
01.09.2025 19:06
www.thewayofcode.com by @rickrubin.bsky.social
01.09.2025 11:31
"Early to bed, early to rise, work like hell, and advertise."
-Ted Turner
23.08.2025 17:33
Ian, how about a bit of intellectual honesty in posting the full official statement by India, so your readers can make their own judgement about the rationale for India's position?
05.08.2025 04:42
Will start my posts here with a preprint!
First preprint from my postdoctoral work, where we redefine and remap the isocortical efferent projectome through two foundational neurogenic mechanisms.
(1/5)
25.07.2025 10:46
Our new paper out now in Science explores how neural activity in the lateral entorhinal cortex (LEC) *drifts* over time - and *jumps* at key boundaries - to help organize events in memory.
π www.science.org/doi/10.1126/...
Here's a quick summary of what we found 🧵👇
26.06.2025 18:15
(1/7) New preprint from Rajan lab! 🧠🤖
@ryanpaulbadman1.bsky.social & Riley Simmons-Edler show, through cog sci, neuro & ethology, how an AI agent with fewer "neurons" than an insect can forage, find safety & dodge predators in a virtual world. Here's what we built
Preprint: arxiv.org/pdf/2506.06981
02.07.2025 18:33
Foundations of Computer Vision
Our computer vision textbook is now available for free online here:
visionbook.mit.edu
We are working on adding some interactive components like search and (beta) integration with LLMs.
Hope this is useful and feel free to submit Github issues to help us improve the text!
15.06.2025 15:45
Almost all Nobel Prizes are awarded for work that is exploratory, or absolutely basic science with no obvious commercial or medical benefit.
You cannot predict where advancements come from, so you have to invest in science and scientists.
Targeted (corporate) science investment will never do this.
08.06.2025 22:26
Subcortical correlates of consciousness with human single neuron recordings
Subcortical brain structures contain neurons encoding subjective reports of stimulus detection in human participants, suggesting a role for conscious perception.
New "version of record" in @elife.bsky.social! ⬇️
We recorded neurons in the thalamus and subthalamic nucleus of humans and found both sensory- and perception-selective neurons with two distinct latencies!
7-year project with @nfaivre.bsky.social + @foscobernasconi.bsky.social.
A short thread 🧵
01.06.2025 16:35
#neuroskyence What are your favorite papers on the idea of redundancy in the brain? E.g. how a different neural circuit can step in when the "original" is damaged? Or just how the same function can be executed in different ways within the same brain?
30.05.2025 16:05