Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.
tl;dr: you can now chat with a brain scan 🧠💬
1/n
03.11.2025 15:17
If you are interested in development and development-inspired NeuroAI and are coming to CCN this year, come join our workshop on Monday, Aug 11:
🕒 3:00–6:00 pm
📍 Room A2.11
Register here: sites.google.com/view/child2m...
(You can also come by my poster to chat!)
10.08.2025 17:41
model-vs-human/modelvshuman/plotting/plot.py at master Β· bethgelab/model-vs-human
Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) - bethgelab/model-vs-human
Hi Lukas, very interesting work! Is it possible to get the shape bias computed the Geirhos way? He reports the average shape bias across categories (see his plot code here: github.com/bethgelab/mo...).
It would be even better if we could also know each model's average shape bias across seeds :)!
10.07.2025 08:53
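The shape-bias metric referenced above can be sketched in a few lines. This is an illustrative reconstruction of the Geirhos-style computation on cue-conflict images, not the repository's actual code: only trials where the model's decision matches either the shape or the texture category count, the bias is the fraction of shape decisions among those, computed per category and then averaged across categories. All names below (`decisions`, `shape_labels`, `texture_labels`) are hypothetical.

```python
# Hedged sketch of the Geirhos-style shape-bias metric on cue-conflict
# stimuli: shape bias = shape hits / (shape hits + texture hits),
# per category, then averaged across categories.
from collections import defaultdict

def shape_bias(decisions, shape_labels, texture_labels):
    per_cat = defaultdict(lambda: [0, 0])  # category -> [shape hits, texture hits]
    for d, s, t in zip(decisions, shape_labels, texture_labels):
        if d == s:
            per_cat[s][0] += 1          # decision followed the shape cue
        elif d == t:
            per_cat[s][1] += 1          # decision followed the texture cue
        # decisions matching neither cue are ignored
    biases = [sh / (sh + tx) for sh, tx in per_cat.values() if sh + tx > 0]
    return sum(biases) / len(biases)    # average across categories
```

Averaging per category first (rather than pooling all trials) keeps categories with many trials from dominating the final number.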
🚨 Preprint alert! Excited to share my second PhD project, “Adopting a human developmental visual diet yields robust, shape-based AI vision”: a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168
08.07.2025 13:09
In conclusion, All-TNNs are an exciting new class of networks for modelling primate vision, which address questions that are beyond the scope of CNNs and their topographic derivatives. 12/12
06.06.2025 10:21
Next, we will use All-TNNs to explore which factors allow smooth maps to emerge from model training without the need for a secondary smoothness loss. Possible avenues include wiring-length optimisation, energy constraints, local inhibition, or top-down connectivity patterns. 11/12
06.06.2025 10:21
Can TNNs expand to self-supervised objectives? Yes, to a degree. We show that training All-TNNs with SimCLR yields smooth topography and category-independent spatial biases. However, SimCLR training fails to reproduce the structure of human-like category-specific spatial biases. 10/12
06.06.2025 10:20
We show that these behavioural accuracy maps are structured and exhibit category-specific effects. Importantly, All-TNNs better reproduce these spatial structures of human visual biases than CNNs and other control models. 9/12
06.06.2025 10:19
To study the impact of topography on behaviour, we conducted a human psychophysical experiment to quantify object recognition performance across spatial locations. This provided us with category-specific spatial accuracy maps for humans. 8/12
06.06.2025 10:19
Similarly, All-TNNs allocate energy expenditure to task-relevant input regions, using an order of magnitude less “metabolic” cost than CNNs! And the smoother the topography, the greater the energy efficiency of the network, even though energy efficiency was never explicitly optimised for. 7/12
06.06.2025 10:18
Interestingly, All-TNNs exhibit a form of foveation, and allocate more processing resources to spatial regions rich in task-relevant information. 6/12
06.06.2025 10:17
Upon training, topographical features reminiscent of the ventral stream emerge in All-TNNs, including smooth orientation selectivity maps in the first layer, and category-based selectivity clusters for tools, scenes, and faces in the last layer. 6/12
06.06.2025 10:17
Overall network architecture of All-TNNs
All-TNNs overcome this limitation. In All-TNNs, 1) each unit has its own local receptive field (RF), 2) units in each layer are arranged on a 2D “cortical sheet” without weight sharing, and 3) feature selectivity varies smoothly across space by encouraging similar selectivity in neighbouring units. 5/12
06.06.2025 10:16
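The three ingredients listed in the post above can be sketched compactly. This is an illustrative NumPy toy, not the paper's implementation: per-unit local receptive fields (no weight sharing), units laid out on a 2D sheet, and a smoothness penalty that pulls neighbouring units toward similar weights (a stand-in for the paper's smoothness loss; the exact penalty and shapes are assumptions).

```python
# Toy sketch of an All-TNN-style layer: every sheet unit has its OWN
# k×k kernel (no weight sharing), plus a penalty encouraging
# neighbouring units on the sheet to have similar kernels.
import numpy as np

def locally_connected(x, W):
    """x: (H_in, W_in) input; W: (H, Wd, k, k), one kernel per sheet unit."""
    H, Wd, k, _ = W.shape
    out = np.empty((H, Wd))
    for i in range(H):
        for j in range(Wd):
            patch = x[i:i + k, j:j + k]          # each unit's local RF
            out[i, j] = np.sum(patch * W[i, j])  # applied with its own weights
    return out

def smoothness_penalty(W):
    """Mean squared difference between kernels of neighbouring sheet units."""
    dh = np.mean((W[1:, :] - W[:-1, :]) ** 2)   # vertical neighbours
    dw = np.mean((W[:, 1:] - W[:, :-1]) ** 2)   # horizontal neighbours
    return dh + dw
```

In a shared-weight CNN the penalty would be trivially zero; here it actively shapes how selectivity varies across the sheet, which is what lets smooth topographic maps emerge.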
Yet, their reliance on weight sharing, i.e., detecting identical features across visual space, renders them unable to model central aspects of biological vision, such as the origin of topography and its relation to behaviour. 4/12
06.06.2025 10:14
Background: CNNs are commonly used to model primate vision, and have been successful at predicting neural activity and at accounting for complex visual behaviour. 3/12
06.06.2025 10:14
With Adrien Doerig (@adriendoerig.bsky.social), Victoria Bosch (@initself.bsky.social), Daniel Kaiser (@dkaiserlab.bsky.social), Radoslaw Martin Cichy, and Tim C Kietzmann (@timkietzmann.bsky.social). 2/12
06.06.2025 10:14
In this work, we introduce All-Topographic Neural Networks (All-TNNs): ANNs that drop weight sharing and learn on a smooth “cortical sheet,” capturing both human-like neural topography and visual biases in behaviour. 2/12
06.06.2025 10:13
Now out in Nature Human Behaviour @nathumbehav.nature.com: “End-to-end topographic networks as models of cortical map formation and human visual behaviour”. Read the paper here: www.nature.com/articles/s41...
06.06.2025 10:06