Can testify! In my experience, this is the perfect gateway into the head-direction rabbit hole
New paper alert! 🚨
We found that the brain's compass is remarkably stable at two scales:
1️⃣ The system maintains its internal organization for weeks
2️⃣ It "remembers" its orientation for weeks, even after a single visit
This may be key to how the brain aligns its other maps.
Paper: rdcu.be/e3waP
At #NeurIPS in San Diego and interested in #NeuroAI? Today in the 11am-2pm session, I will present our work on brain-inspired continual learning via TMCL (Poster 2206), with Emre Neftci and @dlg-ai.bsky.social.
Say hi or DM me if you want to chat about continual, local or modular learning!
The perfect wrap-up for an awesome time at #SfN25. Thanks to everyone for all the great impressions and discussions. It was wonderful meeting such cool and nice people! @apeyrache.bsky.social @adrian-du.bsky.social @anujvadher.bsky.social & Paul Dudchenko
At #SfN25? Interested in how head-direction cells anchor to visual cues?
🧠 Come visit me Tuesday morning (8am-12pm)
Poster RR15
showing work w/ @adrian-du.bsky.social on parallax in PoSub and what it teaches us about visual cue integration. Come say hi or DM me for a coffee chat! ☕️ #Neurosky
Great to be back at SfN in San Diego, even though the weather gods have decided to go full 'Scottish summer' this year 🌧️
If you're here and you're into spatial navigation, do check out the posters below 👇 [1/n]
Really excited to share this Opinion piece we've been working on with fellow head-direction cell geeks @apeyrache.bsky.social @desdemonafricker.bsky.social and (bsky-less?) Andrea Burgalossi! While head-direction cells pop up in many cortical regions, we think that one of them is quite unique (1/8)
I love open science! Not only is this absolutely brilliant work on how time and events are encoded in LEC, it is also a crazy rich dataset and it's just available to everyone! Thank you so much for sharing the recordings, I can't wait to play around with it!
Very proud to have presented our work on modeling cognitive maps and episodic memory at the GEM conference today! @for2812.bsky.social
Check out our preprint to learn more about how you can bring together semantic and spatial information with our grid-cell-VSA model arxiv.org/abs/2503.08608
Very excited to get the opportunity to give a talk at GEM2025 on how we combine spatial and semantic information to form episodic memories. See you in beautiful Bochum :)
Very proud to announce the first paper of my PhD:
Grid Cell-Inspired Vector Algebra: Bridging the Brain's Navigation System with Symbolic Reasoning. Unifying spatial and symbolic computation!
Happy to be in Heidelberg next week, presenting this at NICE.
arxiv.org/abs/2503.08608
#NICE2025
I was curious about the wonky AI overview results being delivered by Google search, so I looked at this a bit further.
"What is heavier: an elephant or an elephant with an ant on its back?"
Screenshot of the introduction to the paper 'Cognitive maps in rats and men' by Edward Tolman. In this abstract, he mentions rats "misspending their lives" in non-US laboratories and his experiments being executed by graduate students and underpaid research assistants. It reads as rather ironic and whimsical for the introduction to a scientific paper.
Re-reading classic papers from the 1940s, I have to say... introductions were different back then
Some of my thoughts on OpenAI's o3 and the ARC-AGI benchmark
aiguide.substack.com/p/did-openai...
OK, if we are moving to Bluesky, I am rescuing my favourite ever Twitter thread (Jan 2019).
The renamed:
Bluesky-sized history of neuroscience (biased by my interests)
There is actually a really nice blogpost by the Numenta people illustrating how multiple modules create a unique position encoding, even though every module repeats with some scale and orientation
www.numenta.com/blog/2018/05...
The key point is that you don't wrap the whole room into a torus. It's not the length and width of the room that tells you the periodicity on your torus. Rather, you have multiple tori, say one that wraps every 3 meters, another that wraps every 5 meters, and so on.
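The multi-tori idea can be sketched in a few lines. This is my own toy example with made-up integer periods (not from the thread or the Numenta post): each module's phase by itself repeats and is ambiguous, but the combination of phases is unique over the least common multiple of the periods.

```python
# Toy sketch: two "grid modules" with different (made-up) periods.
# Each module only knows position modulo its own period, i.e. it lives
# on its own ring/torus -- but the pair of phases is far less ambiguous.

periods = [3, 5]  # module periods in meters (toy values; lcm = 15)

def phases(x):
    """Phase of position x within each module's repeating tile."""
    return tuple(x % p for p in periods)

# Within the 3 m module alone, positions 1 m and 4 m are indistinguishable...
assert phases(1)[0] == phases(4)[0]

# ...but the combined code tells them apart, and stays unique over the
# full 15 m range (every position gets a distinct phase pair).
codes = [phases(x) for x in range(15)]
assert len(set(codes)) == 15
```

Same flavor as the Chinese remainder theorem: coprime periods buy you a unique combined code over a range much larger than any single module's scale.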
Why would they expect that? The periodicity is not defined by the width of the room, and different grid-cell modules have different periodicities (scales and orientations), so the population code as a whole is not at all periodic at the borders of the environment
I don't really see the prediction part. Isn't it pretty safe to assume that the real papers in the database are already in the training set for all of these LLMs? In that case it is more of a remembering task than a prediction, right?