
Zejin Lu

@zejinlu

PhD Student @FU_Berlin co-supervised by Prof. Radoslaw M. Cichy and Prof. Tim Kietzmann, interested in machine learning and cognitive science. Personal webpage: zejinlu.com

92 Followers · 117 Following · 18 Posts · Joined 21.01.2025

Latest posts by Zejin Lu @zejinlu

Predicting upcoming visual features during eye movements yields scene representations aligned with human visual cortex Scenes are complex, yet structured collections of parts, including objects and surfaces, that exhibit spatial and semantic relations to one another. An effective visual system therefore needs unified ...

🚨New Preprint!
How can we model natural scene representations in visual cortex? A solution is in active vision: predict the features of the next glimpse! arxiv.org/abs/2511.12715

+ @adriendoerig.bsky.social, @alexanderkroner.bsky.social, @carmenamme.bsky.social, @timkietzmann.bsky.social
🧡 1/14

18.11.2025 12:34 👍 86 🔁 28 💬 3 📌 5
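For readers curious what "predicting the features of the next glimpse" could look like computationally, here is a deliberately minimal numpy sketch. It is an illustrative toy under my own assumptions, not the preprint's actual model: the features of the current glimpse plus the coordinates of the upcoming fixation are fed to a linear predictor, which is scored against the features actually observed at the next fixation.

```python
import numpy as np

def next_glimpse_loss(current_feat, next_xy, next_feat, W):
    """Toy next-glimpse objective (illustrative only; the preprint's
    architecture and feature spaces differ). Concatenate the current
    glimpse's feature vector with the upcoming fixation coordinates,
    apply a linear predictor W, and score the prediction against the
    next glimpse's true features with mean squared error."""
    inp = np.concatenate([current_feat, next_xy])  # (d + 2,)
    pred = W @ inp                                 # (d,) predicted features
    return float(np.mean((pred - next_feat) ** 2))
```

Minimising a prediction error of this kind over many fixation sequences is the sense in which active vision can shape scene representations.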

Introducing CorText: a framework that fuses brain data directly into a large language model, allowing for interactive neural readout using natural language.

tl;dr: you can now chat with a brain scan πŸ§ πŸ’¬

1/n

03.11.2025 15:17 👍 132 🔁 52 💬 6 📌 8

If you are interested in development and development-inspired NeuroAI and are coming to CCN this year,
come join our workshop on Monday, Aug 11
🕒 3:00 – 6:00 pm
📍 Room A2.11
Register here: sites.google.com/view/child2m...
(You can also come by my poster to chat!)

10.08.2025 17:41 👍 8 🔁 0 💬 0 📌 0
CCN 2025 Satellite Event Background The human visual system is full of optimisations: mechanisms designed to extract the most useful information from a constant stream of incoming data. The field of neuro-AI has made significa...

Not just one, but two fantastic chances to discuss how infant development can inform machine learning and vice versa at CCN 2025 in Amsterdam!!! Satellite workshop sites.google.com/view/child2m...
and Generative Adversarial Collaboration sites.google.com/ccneuro.org/...

25.06.2025 20:45 👍 31 🔁 13 💬 0 📌 3
High-level visual representations in the human brain are aligned with large language models - Nature Machine Intelligence Doerig, Kietzmann and colleagues show that the brain's response to visual scenes can be modelled using language-based AI representations. By linking brain activity to caption-based embeddings from lar...

🚨 Finally out in Nature Machine Intelligence!!
"Visual representations in the human brain are aligned with large language models"
πŸ”— www.nature.com/articles/s42...

07.08.2025 13:06 👍 91 🔁 35 💬 3 📌 1
model-vs-human/modelvshuman/plotting/plot.py at master · bethgelab/model-vs-human Benchmark your model on out-of-distribution datasets with carefully collected human comparison data (NeurIPS 2021 Oral) - bethgelab/model-vs-human

Hi Lukas, very interesting work! Is it possible to get the shape bias the way Geirhos reports it? He reports the average shape bias across categories (see his plot code here: github.com/bethgelab/mo...).
It would be even better if we could also know each model's average shape bias across seeds! :)

10.07.2025 08:53 👍 1 🔁 0 💬 0 📌 0
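For context, here is a minimal sketch of the shape-bias quantity being discussed, assuming the standard cue-conflict definition: among trials where a model picked either the image's shape category or its texture category, the fraction that went to shape, then averaged over categories. The exact averaging in the linked plot code may differ.

```python
def shape_bias(decisions):
    """Shape bias on cue-conflict trials: of the decisions that matched
    either the image's shape category or its texture category, the
    fraction that matched shape. 'other' responses are ignored."""
    shape = decisions.count("shape")
    texture = decisions.count("texture")
    return shape / (shape + texture)

def category_averaged_shape_bias(per_category):
    """Average the per-category shape biases (the category-level
    averaging referred to above). per_category maps a category
    name to its list of decisions."""
    biases = [shape_bias(d) for d in per_category.values()]
    return sum(biases) / len(biases)
```

Averaging per category first, rather than pooling all trials, prevents categories with many trials from dominating the overall bias.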

🚨 Preprint alert! Excited to share my second PhD project: β€œAdopting a human developmental visual diet yields robust, shape-based AI vision” -- a nice case showing that biology, neuroscience, and psychology can still help AI :)! arxiv.org/abs/2507.03168

08.07.2025 13:09 👍 13 🔁 3 💬 0 📌 0
End-to-end topographic networks as models of cortical map formation and human visual behaviour - Nature Human Behaviour Lu et al. introduce all-topographic neural networks as a parsimonious model of the human visual cortex.

If the above link doesn’t work for you, please try this one: www.nature.com/articles/s41...

06.06.2025 12:52 👍 0 🔁 0 💬 0 📌 0

In conclusion, All-TNNs are an exciting new class of networks for modelling primate vision, which address questions that are beyond the scope of CNNs and their topographic derivatives. 12/12

06.06.2025 10:21 👍 2 🔁 0 💬 1 📌 0

Next, we will use All-TNNs to explore which factors allow smooth maps to emerge from model training, without the need for a secondary smoothness loss. Possible avenues include wiring-length optimisation, energy constraints, local inhibition, and top-down connectivity patterns. 11/12

06.06.2025 10:21 👍 1 🔁 0 💬 1 📌 0

Can TNNs expand to self-supervised objectives? Yes, to a degree. We show that training All-TNNs with SimCLR yields smooth topography and category-independent spatial biases. However, SimCLR training fails to reproduce the structure of human-like category-specific spatial biases. 10/12

06.06.2025 10:20 👍 1 🔁 0 💬 1 📌 0

We show that these behavioural accuracy maps are structured and exhibit category-specific effects. Importantly, All-TNNs better reproduce these spatial structures of human visual biases than CNNs and other control models. 9/12

06.06.2025 10:19 👍 1 🔁 0 💬 1 📌 0

To study the impact of topography on behaviour, we conducted a human psychophysical experiment to quantify object recognition performance across spatial locations. This provided us with category-specific spatial accuracy maps for humans. 8/12

06.06.2025 10:19 👍 1 🔁 0 💬 1 📌 0

Similarly, All-TNNs allocate energy expenditure to task-relevant input regions, incurring an order of magnitude less "metabolic" cost than CNNs! And the smoother the topography, the greater the energy efficiency of the network! Energy efficiency was not explicitly optimised for. 7/12

06.06.2025 10:18 👍 1 🔁 0 💬 1 📌 0

Interestingly, All-TNNs exhibit a form of foveation, and allocate more processing resources to spatial regions rich in task-relevant information. 6/12

06.06.2025 10:17 👍 1 🔁 0 💬 1 📌 0
Post image

Upon training, topographical features reminiscent of the ventral stream emerge in All-TNNs, including smooth orientation selectivity maps in the first layer, and category-based selectivity clusters for tools, scenes, and faces in the last layer. 6/12

06.06.2025 10:17 👍 1 🔁 0 💬 1 📌 0
Overall network architecture of All-TNNs

All-TNNs overcome this limitation. In All-TNNs, 1) each unit has its own local receptive field, 2) units in each layer are arranged on a 2D "cortical sheet" without weight sharing, and 3) feature selectivity varies smoothly across space by encouraging similar selectivity in neighboring units. 5/12

06.06.2025 10:16 👍 1 🔁 0 💬 1 📌 0
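The three ingredients above can be sketched in a few lines of numpy. This is a conceptual illustration under my own simplifications, not the authors' implementation; in particular, the paper's actual smoothness objective may use a different similarity measure than squared weight differences.

```python
import numpy as np

def locally_connected(x, w, b):
    """Locally connected layer: like a convolution, but each output
    location (i, j) on the sheet has its OWN k x k filter -- no
    weight sharing. x: (H, W) input; w: (H_out, W_out, k, k)
    per-location filters; b: (H_out, W_out) per-location biases."""
    H_out, W_out, k, _ = w.shape
    out = np.empty((H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w[i, j]) + b[i, j]
    return out

def smoothness_penalty(w):
    """Topographic smoothness: penalise weight differences between
    neighbouring units on the 2D sheet, so selectivity varies
    gradually across the map (squared differences are a stand-in
    for the paper's neighbourhood similarity loss)."""
    return (np.mean((w[1:, :] - w[:-1, :]) ** 2)
            + np.mean((w[:, 1:] - w[:, :-1]) ** 2))
```

Dropping weight sharing means the filter bank grows with the size of the sheet, which is exactly what lets selectivity vary, and cluster, across the map.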

Yet, their reliance on weight sharing, i.e., detecting identical features across visual space, renders them unable to model central aspects of biological vision, such as the origin of topography and its relation to behaviour. 4/12

06.06.2025 10:14 👍 1 🔁 0 💬 1 📌 0

Background: CNNs are commonly used to model primate vision, and have been successful at predicting neural activity and at accounting for complex visual behaviour. 3/12

06.06.2025 10:14 👍 1 🔁 0 💬 1 📌 0

With Adrien Doerig (@adriendoerig.bsky.social), Victoria Bosch (@initself.bsky.social), Daniel Kaiser (@dkaiserlab.bsky.social), Radoslaw Martin Cichy and Tim C Kietzmann (@timkietzmann.bsky.social). 2/12

06.06.2025 10:14 👍 2 🔁 0 💬 1 📌 0

In this work, we introduce All-Topographic Neural Networks (All-TNNs): ANNs that drop weight sharing and learn on a smooth "cortical sheet," capturing both human-like neural topography and visual biases in behaviour. 2/12

06.06.2025 10:13 👍 3 🔁 0 💬 1 📌 0

Now out in Nature Human Behaviour @nathumbehav.nature.com: "End-to-end topographic networks as models of cortical map formation and human visual behaviour". Please check our NHB link: www.nature.com/articles/s41...

06.06.2025 10:06 👍 53 🔁 21 💬 4 📌 6