Sure. Maybe we can have a brief chat at Cosyne.
20/N: Finally, you'll all be glad to hear this is, for once, a short paper ;). I hope people find it interesting.
19/N: Massive thanks to @doellerlab.bsky.social at @mpicbs.bsky.social for the scientific environment in which these ideas could mature, and to @vigano.bsky.social, @stephanie-theves.bsky.social, @vstudenyak.bsky.social, @keckjanis.bsky.social, and my group for discussions.
18/N: The reliance on extant spatial navigation architectures also means we get all the machinery of spatial memory for free. Replay, remapping, the role of theta, etc. Post-learning adjustment … though not modeled here.
17/N - D3: The model works with discontinuous stimulus spaces (OASIS being positive proof that this is needed) but would also be compatible with morphing/continuous navigation in abstract space. In the BB model paper, @NeilBurgess10 and I proposed a model of mental navigation that could be used.
16/N - D2: I see this as in line with Eichenbaum & Cohen's conjecture (also articulated by others) that we can unify the spatial navigation view of the hippocampal formation with the memory view. The present model makes this explicit for abstract cognition.
15/N: A bit of mini-discussion - D1: Here, grid cells are not latent variables in an ML architecture, but pre-existing, re-purposed from spatial navigation with minimal (plausible) assumptions (like velocity scaling).
14/N: What I personally find satisfying here: the same mechanisms that are used to build the map (VNA and PIN) are the ones that subsequently operate on it to implement reasoning (e.g., the analogies above). No additional machinery; different combinations implement different operations, and those could transfer.
13/N: Similarly, we can combine PIN and VNA in various ways to compute the key quantities for perspective taking in abstract space (similar to the 2023 study by @vigano.bsky.social) or for subspace construction. Like a composable toolkit that could transfer. More types of reasoning are outlined in the paper.
12/N: Now the fun part. Once we have anchored stimuli to the map, we can do abstract reasoning. Say we have grid cell PVs for points A and B and the displacement vector d between them. Then we pick a random point C. Compute PIN(C, d) = D. Now we know C is to (a stimulus near) D as A is to B. We've got an analogy.
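In plain coordinates (eliding the grid code for brevity; in the model, VNA and PIN operate on grid PVs, not raw points), the analogy computation looks roughly like this sketch:

```python
import numpy as np

A, B = np.array([1.0, 2.0]), np.array([3.0, 2.5])   # anchored points A and B
C = np.array([0.5, 0.5])                            # a random point C

d = B - A            # VNA(A, B): displacement from A to B
D = C + d            # PIN(C, d): the analogical completion
print(D)             # [2.5, 1.0]; snap to the nearest anchored stimulus:
                     # "C is to (a stimulus near) D as A is to B"
```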
11/N: To check that mapping is successful, we recover grid cells from memorized PVs and bin them. Recovered grids are a proxy for what one would record. This works if, and only if, the mapping respects stimulus relations. Scrambling similarities breaks the recovered grids, as it should.
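A toy version of this check, with cosine tuning to one module's phase standing in for a recorded cell (the tuning shape and scale are my assumptions for illustration, not the paper's method):

```python
import numpy as np

SCALE = 0.3                                   # toy module scale
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 2.0, 5000)              # sampled stimulus coordinates
phase = (xs / SCALE) % 1.0                    # memorized phase for one module
rate = 1.0 + np.cos(2.0 * np.pi * phase)      # proxy firing rate of one cell

edges = np.linspace(0.0, 2.0, 41)
idx = np.digitize(xs, edges) - 1
ratemap = np.array([rate[idx == b].mean() for b in range(40)])
print(ratemap.round(2))                       # periodic bumps at SCALE
# Scramble the mapping (rng.shuffle(phase) before computing rate) and the
# recovered periodicity disappears: the grids survive binning only if
# stimulus relations are respected.
```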
10/N: If the inferred locations can't be brought into agreement, refuse to anchor the stimulus. This implies a very straightforward (and reasonable) behavioral prediction: hard tasks take longer in the model, and too difficult a task can never be accomplished in a limited time.
9/N: Using multiple anchors and overlap is powerful and can be extended. This gives us fault tolerance! If stimuli are hard to map, do more triangulations and take a majority vote or average at the end. Then anchor the next stimulus.
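One possible reading of "average at the end" in code: a per-module circular mean over triangulated estimates (the exact vote scheme here is my sketch, not the paper's):

```python
import numpy as np

def circ_mean(phases, axis=0):
    """Mean of phases in [0, 1) that respects wraparound."""
    ang = 2.0 * np.pi * np.asarray(phases)
    m = np.arctan2(np.sin(ang).mean(axis), np.cos(ang).mean(axis))
    return (m / (2.0 * np.pi)) % 1.0

# Three noisy triangulated estimates (rows) of one 3-module PV (columns);
# the first phase of the last estimate has wrapped past 1.0.
estimates = np.array([[0.10, 0.70, 0.40],
                      [0.12, 0.68, 0.42],
                      [0.98, 0.71, 0.39]])
print(circ_mean(estimates))                # consensus PV, ~[0.07, 0.70, 0.40]
```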
8/N: The PIN takes in S1 and d to infer the grid cell PV at which to anchor S2. Then we proceed with the rest of the stimuli. We can also infer S3 from two anchors, e.g. from S1 and from S2, and then check for overlap in the inferred GC PVs.
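A minimal anchoring sketch under the same toy phase code (1-D coordinates for brevity; the scales and the acceptance threshold are made up, not the model's implementation):

```python
import numpy as np

SCALES = np.array([0.30, 0.42, 0.59])          # toy module scales
grid_pv = lambda x: (x / SCALES) % 1.0         # position -> grid PV
pin = lambda pv, d: (pv + d / SCALES) % 1.0    # shift a PV by displacement d

stim = {"S1": 0.00, "S2": 0.55, "S3": 0.90}    # rescaled stimulus coordinates
anchors = {"S1": grid_pv(0.2)}                 # anchor S1 at a starting PV

# PIN takes S1's PV and d to give the PV at which S2 is anchored.
anchors["S2"] = pin(anchors["S1"], stim["S2"] - stim["S1"])

# Infer S3 twice, once from each existing anchor, and check the PVs overlap.
via_s1 = pin(anchors["S1"], stim["S3"] - stim["S1"])
via_s2 = pin(anchors["S2"], stim["S3"] - stim["S2"])
if np.allclose(via_s1, via_s2, atol=0.05):     # overlap -> accept the anchor
    anchors["S3"] = via_s1
```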
7/N: Then we map. Pick a random next stimulus S2. Calculate the similarity in stimulus space (e.g. valence and arousal for OASIS, leg and neck length for birds), rescale it to within the range of the grid cell network, which gives us the displacement vector.
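A sketch of this rescaling step; the rating ranges and the unit grid extent are assumptions for illustration:

```python
import numpy as np

GRID_EXTENT = 1.0                         # assumed representable range per dim
STIM_RANGE = np.array([6.0, 6.0])         # e.g., OASIS ratings spanning 1..7

def displacement(s1, s2):
    """Similarity as literal distance: rescale (S2 - S1) into grid units."""
    return (np.asarray(s2, float) - np.asarray(s1, float)) * GRID_EXTENT / STIM_RANGE

d = displacement([2.0, 5.5], [4.0, 3.0])  # (valence, arousal) of S1 and S2
print(d)                                  # [ 0.333 -0.417]
```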
6/N: How the relevant dimensions could be isolated is also sketched out in the article, but I focus on the mapping. How does that work? We take a starting grid cell PV and anchor a first stimulus S1 (e.g., a specific OASIS image) to it. S1 <-> GC PV.
5/N: I test the model with the public OASIS dataset, also used by @lukaskunz.bsky.social and colleagues to directly record non-spatial grid cells. I also generate some stretchy birds, akin to the seminal study by Alexandra Constantinescu and @behrenstimb.bsky.social.
4/N: Two key hypotheses: First, grid scaling is plastic, as shown by @caswell.bsky.social and used in my prior model of visual grid cells (Bicanski & Burgess 2019). Second, similarity in stimulus space is quite literally distance. That brings me to the stimuli.
3/N: For methods details, see the article. What happens in the model: Point A is a grid cell population vector (PV), B is also signaled by a GC PV. d is the relative displacement vector. The same d connects many point pairs, and the PIN implementation accounts for that.
2/N: VNAs can take two points A and B and spit out the vector connecting them. Beautifully explored by Bush et al. 2015, very useful for spatial navigation, but to build UCMs we need the opposite. Take in A and the vector d, and spit out point B. That would be the PIN.
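To make the VNA/PIN distinction concrete, here is a minimal 1-D toy in Python, assuming a phase-code reading of grid modules; the scales and the brute-force decoder are stand-ins for illustration, not the model's actual implementation:

```python
import numpy as np

SCALES = np.array([0.30, 0.42, 0.59])   # assumed grid module scales (toy)

def grid_pv(x):
    """Grid population vector for position x: one phase in [0, 1) per module."""
    return (x / SCALES) % 1.0

def vna(pv_a, pv_b, d_max=2.0, n=4001):
    """VNA: take points A and B, spit out the displacement d with A + d = B.
    A brute-force search over candidate displacements stands in for the
    attractor readout of Bush et al. 2015."""
    dphi = (pv_b - pv_a) % 1.0
    best, best_err = 0.0, np.inf
    for d in np.linspace(-d_max, d_max, n):
        e = (dphi - d / SCALES) % 1.0
        err = np.minimum(e, 1.0 - e).sum()   # circular phase mismatch
        if err < best_err:
            best, best_err = d, err
    return best

def pin(pv_a, d):
    """PIN: take point A and displacement d, spit out point B. The same d
    shifts any starting PV, which is why one d connects many point pairs."""
    return (pv_a + d / SCALES) % 1.0

a, b = 0.20, 0.75
d = vna(grid_pv(a), grid_pv(b))          # ~0.55
print(d, np.allclose(pin(grid_pv(a), d), grid_pv(b), atol=1e-2))
```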
1/N: Dear cognitive map fans, I'd like to share a model I've been working on for a while (clearing backlog :). I show how a vector navigation architecture (VNA) and a "positional inference network" (PIN) can build Universal Cognitive Maps (UCMs) for abstract spaces.
www.biorxiv.org/content/10.6...
Now out in Hippocampus: Fei Wang's model of Trace Vector Cells and intra-subiculum processing, consistent with known effects in CA1.
onlinelibrary.wiley.com/share/CPZPYM...
Human hippocampal theta-gamma coupling coordinates sequential planning during navigation
Impressive study from Dan Bush's Lab at UCL:
www.pnas.org/doi/10.1073/...
Great to have @lukaskunz.bsky.social with us next week! Looking forward to this.
Can humans & animals really use internal maps to take shortcuts?
Tolman famously said yes - based largely on his Sunburst maze.
Our new review & meta-analysis suggests evidence is far weaker than you might think.
🧵👇 doi.org/10.1111/ejn....
@uofgpsychneuro.bsky.social @ejneuroscience.bsky.social
New preprint! Have you ever wondered, what are these fuzzy simplicial sets, the theoretical framework behind e.g. UMAP? Here we show that you may simply see them as marginal distributions over simplicial sets. This provides a generative model for UMAP. (1/2)
arxiv.org/abs/2512.03899
Our new paper, now published in @natcomms.nature.com , asks a simple question: when two tasks share a common structure, does the brain learn them more efficiently? Surprisingly, this was not the case. Thread below (1/7)
rdcu.be/eSwvU
Dear colleagues, please check out this paper on aging and the HD system + gracefully degrading, biologically plausible HD attractor networks. Well done Matthieu!
I wrote a thing on episodic memory and systems consolidation. I hope you all enjoy it and/or find it interesting.
A neural state space for episodic memories
www.sciencedirect.com/science/arti...
#neuroskyence #psychscisky #cognition 🧪