
Andrej Bicanski

@andrejbicanski

Research group leader at MPI_CBS. Physicist turned neuroscientist (interests: too many, but mainly spatial cognition), sporadically on X/BSky, alter ego writes fiction.

170 Followers · 192 Following · 39 Posts · Joined 15.10.2023

Latest posts by Andrej Bicanski @andrejbicanski

Sure. Maybe we can have a brief chat at Cosyne.

10.03.2026 09:21 👍 1 🔁 0 💬 0 📌 0

20/N: Finally, you’ll all be glad to hear this is – for once – a short paper ;). I hope people find it interesting.

09.03.2026 15:26 👍 3 🔁 0 💬 1 📌 0

19/N: Massive thanks to @doellerlab.bsky.social at @mpicbs.bsky.social for the scientific environment in which these ideas could mature, and to @vigano.bsky.social, @stephanie-theves.bsky.social, @vstudenyak.bsky.social, @keckjanis.bsky.social, and my group for discussions.

09.03.2026 15:26 👍 4 🔁 0 💬 1 📌 0

18/N: The reliance on extant spatial navigation architectures also means we get all the machinery of spatial memory for free: replay, remapping, the role of theta, etc. Post-learning adjustment too, though not modeled here.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

17/N – D3: The model works with discontinuous stimulus spaces (OASIS being positive proof that this is needed) but would also be compatible with morphing/continuous navigation in abstract space. In the BB model paper @NeilBurgess10 and I proposed a model of mental navigation that could be used.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

16/N - D2: I see this as in line with Eichenbaum & Cohen’s (also articulated by others) conjecture that we can unify the spatial navigation view of the hippocampal formation with the memory view. The present model makes this explicit for abstract cognition.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

15/N – D1 (a bit of mini-discussion): Here grid cells are not latent variables in an ML architecture, but rather pre-existing, re-purposed from spatial navigation with minimal (plausible) assumptions (like velocity scaling).

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

14/N: What I personally find satisfying here: the same mechanisms that are used to build the map (VNA and PIN) are the ones that subsequently operate on it to implement reasoning (e.g., the analogies above). No additional machinery – just different combinations for different operations, which could transfer.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

13/N: Similarly, we can combine PIN and VNA in various ways to compute the key quantities for perspective taking in abstract space (similar to the 2023 study by @vigano.bsky.social) or subspace construction. Like a composable set of operations that could transfer. More types of reasoning are outlined in the paper.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

12/N: Now the fun part. Once we have anchored stimuli to the map, we can do abstract reasoning. Say we have grid cell PVs for points A and B and the displacement vector d. Then we pick a random point C and do PIN(C, d) = D. Now we know C is to (a stimulus near) D as A is to B. We’ve got an analogy.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0
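
The analogy step can be sketched in a few lines (my toy illustration with 2D points standing in for grid cell PVs – not the paper's network implementation):

```python
import numpy as np

def pin(point, d):
    """Toy PIN: given a point and a displacement d, infer the target point."""
    return point + d

# A is to B as C is to D: infer D by applying the A->B displacement to C
A, B = np.array([1.0, 1.0]), np.array([4.0, 2.0])
C = np.array([0.0, 5.0])     # arbitrary third anchored point

d = B - A                    # displacement vector (what the VNA would return)
D = pin(C, d)                # now C is to D as A is to B
```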

11/N: To check that mapping is successful, we recover grid cells from memorized PVs and bin them. Recovered grids are a proxy for what one would record. This works if, and only if, the mapping respects stimulus relations. Scrambling similarities breaks these recovered grids, as it should.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0
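
The recovery check amounts to binning each cell's memorized activity over stimulus-space positions to get a "recovered" rate map. A minimal sketch with made-up toy tuning (not the model's actual grid code):

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 1, size=(200, 2))       # stimulus-space coordinates
pvs = np.cos(positions @ rng.normal(size=(2, 5)))  # toy memorized PVs, 5 cells

# Bin cell 0's memorized activity on a 5x5 grid over stimulus space
bins = np.linspace(0, 1, 6)
ix = np.digitize(positions[:, 0], bins) - 1
iy = np.digitize(positions[:, 1], bins) - 1
rate_map = np.zeros((5, 5))
counts = np.zeros((5, 5))
np.add.at(rate_map, (ix, iy), pvs[:, 0])
np.add.at(counts, (ix, iy), 1)
rate_map /= np.maximum(counts, 1)   # mean activity per bin = recovered map
```

If the anchoring scrambles stimulus similarities, the same binning yields no coherent spatial structure.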

10/N: If you can't agree, refuse to anchor the stimulus. This implies a very straightforward (and reasonable) behavioral prediction: hard tasks take longer in the model. Too difficult a task can never be accomplished in a limited time.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

9/N: Using multiple anchors and overlap is powerful and can be extended. This gives us fault tolerance! If stimuli are hard to map, do more triangulations and take a majority vote or average at the end. Then anchor the next stimulus.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0
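
The averaging variant of this fault tolerance is easy to sketch (toy 2D points and noisy displacement estimates of my own making):

```python
import numpy as np

# Three already-anchored stimuli and three noisy displacement estimates
# pointing at the same new stimulus (one per triangulation)
anchors = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 0.0]])
disps   = np.array([[5.0, 1.2], [2.0, -3.1], [-1.0, 0.8]])

estimates = anchors + disps        # one inferred location per triangulation
location = estimates.mean(axis=0)  # average (a majority vote also works)
```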

8/N: The PIN takes in S1 and d to infer the grid cell PV at which to anchor S2. Then proceed onwards with the rest of the stimuli. We can also infer S3 from two locations, e.g. with anchor S1 and with anchor S2, and then check for overlap in the inferred GC PV.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0
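
The two-anchor overlap check can be sketched like this (my illustration; points stand in for GC PVs and the tolerance is an assumed free parameter):

```python
import numpy as np

def pin(anchor, d):
    """Toy PIN: infer the grid location for a stimulus from an anchor and d."""
    return anchor + d

s1, s2 = np.array([0.0, 0.0]), np.array([3.0, 4.0])       # anchored S1, S2
d13, d23 = np.array([5.0, 1.0]), np.array([2.0, -3.0])    # displacements to S3

est_from_s1 = pin(s1, d13)
est_from_s2 = pin(s2, d23)

# Anchor S3 only if the two inferred locations overlap (agree within tolerance)
if np.linalg.norm(est_from_s1 - est_from_s2) < 0.5:
    s3 = (est_from_s1 + est_from_s2) / 2
```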

7/N: Then we map. Pick a random next stimulus S2. Calculate the similarity in stimulus space (e.g. valence and arousal for OASIS, leg and neck length for birds), rescale it to within the range of the grid cell network, which gives us the displacement vector.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0
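
A minimal sketch of that rescaling step (the range values are assumptions for illustration, not the paper's parameters):

```python
import numpy as np

def displacement(s1, s2, stim_range, grid_range):
    """Rescale the stimulus-space difference S2 - S1 to grid-network units."""
    return (np.asarray(s2) - np.asarray(s1)) * (grid_range / stim_range)

s1 = np.array([0.2, 0.8])   # e.g. valence, arousal of anchored stimulus S1
s2 = np.array([0.6, 0.4])   # the next stimulus S2
d = displacement(s1, s2, stim_range=1.0, grid_range=10.0)
# d is the displacement vector handed to the PIN to anchor S2
```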

6/N: How the relevant dimensions could be isolated is also sketched out in the article, but I focus on the mapping. How does that work? We take a starting grid cell PV and anchor a first stimulus S1 (e.g., a specific OASIS image) to it. S1 <-> GC PV.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

5/N: I test the model with the public OASIS dataset, also used by @lukaskunz.bsky.social and colleagues to directly record non-spatial grid cells. I also generate some stretchy birds, akin to the seminal study by Alexandra Constantinescu and @behrenstimb.bsky.social

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

4/N: Two key hypotheses: First, grid scaling is plastic, as shown by @caswell.bsky.social and used in my prior model of visual grid cells (Bicanski & Burgess, 2019). Second, similarity in stimulus space is quite literally distance. That brings me to the stimuli.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

3/N: For methods details, see the article. What happens in the model: Point A is a grid cell population vector (PV), B is also signaled by a GC PV. d is the relative displacement vector. The same d connects many point pairs, and the PIN implementation accounts for that.

09.03.2026 15:26 👍 2 🔁 0 💬 1 📌 0

2/N: VNAs can take two points A and B and spit out the vector connecting them. Beautifully explored by Bush et al. 2015, very useful for spatial navigation, but to build UCMs we need the opposite. Take in A and the vector d, and spit out point B. That would be the PIN.

09.03.2026 15:26 👍 3 🔁 0 💬 1 📌 0
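
In the simplest terms, the two operations are inverses of each other. A toy sketch (my illustration, with 2D coordinates standing in for grid cell population vectors, not the actual network implementation):

```python
import numpy as np

def vna(a, b):
    """Vector navigation architecture: given points A and B, return the
    displacement vector d connecting them."""
    return b - a

def pin(a, d):
    """Positional inference network: given point A and displacement d,
    return point B (the inverse operation)."""
    return a + d

A = np.array([1.0, 2.0])
B = np.array([4.0, 6.0])
d = vna(A, B)                     # A, B -> d
assert np.allclose(pin(A, d), B)  # A, d -> B: the PIN inverts the VNA
```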

1/N: Dear cognitive map fans, I’d like to share a model I’ve been working on for a while (clearing backlog :). I show how a vector navigation architecture (VNA) and a β€œpositional inference network” (PIN) can build Universal Cognitive Maps (UCMs) for abstract spaces.

www.biorxiv.org/content/10.6...

09.03.2026 15:26 👍 40 🔁 15 💬 2 📌 0
Dynamic Updating of Cognitive Maps via Traces of Experience in the Subiculum

Now out in Hippocampus. Fei Wang's model of Trace Vector Cells and intra-subiculum processing, consistent with known effects in CA1.

onlinelibrary.wiley.com/share/CPZPYM...

09.03.2026 05:42 👍 25 🔁 7 💬 0 📌 0

Human hippocampal theta–gamma coupling coordinates sequential planning during navigation

Impressive study from Dan Bush's Lab at UCL:

www.pnas.org/doi/10.1073/...

02.03.2026 16:52 👍 78 🔁 26 💬 2 📌 1
Non‐Canonical Subiculum Circuit Organization and Function The subiculum is highly interconnected with the hippocampus, sub-regions of the thalamus, and the entorhinal and retrosplenial cortices. Together, these regions form a distributed network that plays ...

Great new update on the subiculum circuits!

02.03.2026 14:19 👍 9 🔁 2 💬 0 📌 0

Great to have @lukaskunz.bsky.social with us next week! Looking forward to this.

06.02.2026 08:18 👍 5 🔁 0 💬 0 📌 0
Tolman's Sunburst Maze 80 Years on: A Meta‐Analysis Reveals Poor Replicability and Little Evidence for Shortcutting In 1946, Tolman et al. reported that rats could take a novel shortcut to a goal after training on an indirect route, supporting the Cognitive Map theory. However, a review of subsequent Sunburst maze...

Can humans & animals really use internal maps to take shortcuts?

Tolman famously said yes - based largely on his Sunburst maze.

Our new review & meta-analysis suggests evidence is far weaker than you might think.
🧵👇 doi.org/10.1111/ejn....

@uofgpsychneuro.bsky.social @ejneuroscience.bsky.social

05.01.2026 19:52 👍 135 🔁 57 💬 7 📌 11
Probabilistic Foundations of Fuzzy Simplicial Sets for Nonlinear Dimensionality Reduction Fuzzy simplicial sets have become an object of interest in dimensionality reduction and manifold learning, most prominently through their role in UMAP. However, their definition through tools from alg...

New preprint! Have you ever wondered what fuzzy simplicial sets are – the theoretical framework behind e.g. UMAP? Here we show that you may simply see them as marginal distributions over simplicial sets. This provides a generative model for UMAP. (1/2)

arxiv.org/abs/2512.03899

04.12.2025 12:31 👍 14 🔁 7 💬 1 📌 0
The effects of task similarity during representation learning in brains and neural networks Nature Communications - Here, the authors show learning tasks with similar structures can initially cause interference and slow down learning, but both the brain and artificial networks gradually...

Our new paper, now published in @natcomms.nature.com , asks a simple question: when two tasks share a common structure, does the brain learn them more efficiently? Surprisingly, this was not the case. Thread below (1/7)
rdcu.be/eSwvU

02.12.2025 09:41 👍 87 🔁 35 💬 4 📌 1

Dear colleagues, please check out this paper on aging and the HD system + gracefully degrading, biologically plausible HD attractor networks. Well done Matthieu!

01.12.2025 11:59 👍 2 🔁 0 💬 0 📌 0
A neural state space for episodic memories Episodic memories are highly dynamic and change in nonlinear ways over time. This dynamism is not captured by existing systems consolidation theories …

I wrote a thing on episodic memory and systems consolidation. I hope you all enjoy it and/or find it interesting.

A neural state space for episodic memories

www.sciencedirect.com/science/arti...

#neuroskyence #psychscisky #cognition 🧪

03.11.2025 12:56 👍 162 🔁 67 💬 2 📌 4