Giacomo Aldegheri

@gialdegheri

Cognitive scientist, postdoc at Justus Liebig University, Giessen. Natural/artificial vision/cognition.

376
Followers
471
Following
85
Posts
01.09.2023
Joined
Latest posts by Giacomo Aldegheri @gialdegheri

πŸ“’ PhD position in the NeuroAI of Language

Why can LLMs predict brain activity so well? We're hiring a PhD student to find out -- AI interpretability meets neuroimaging
Deadline March 20
Please RT πŸ™
πŸ‘‡
mpi.nl/career-education/vacancies/vacancy/fully-funded-4-year-phd-position-neuroai-language

05.03.2026 13:34 πŸ‘ 48 πŸ” 39 πŸ’¬ 2 πŸ“Œ 1
Recently, van der Stigchel and colleagues posted a provocative commentary suggesting that we should be wary of bots in online behavioral data collection (🧡by @cstrauch.bsky.social here: bsky.app/profile/cstr...). But should we? Here is my response letter osf.io/preprints/ps.... 1/5

04.03.2026 12:51 πŸ‘ 51 πŸ” 31 πŸ’¬ 6 πŸ“Œ 3
Ida Momennejad, The Ontological Reversal of Computation and the Brain - PhilArchive The Brain Abstracted (2025) critiques treating abstractions in neuroscience as complete explanations of the brain, for their oversimplification and control-orientation. Chirimuuta argues that neurosci...

Invited to write a commentary on Mazviita Chirimuuta's The Brain Abstracted, I had the pleasure of writing about equating the brain with computation.

The Ontological Reversal of Computation and the Brain

in Philosophy & the Mind Sciences.
See below for what I agree and disagree with in the book.
🧡 1/n

17.02.2026 16:52 πŸ‘ 43 πŸ” 13 πŸ’¬ 1 πŸ“Œ 3
A 2D Gabor-wavelet baseline model out-performs a 3D surface model in scene-responsive cortex Author summary To gain a more complete picture of human visual processing, it is critical to understand the precise format of representations of naturalistic visual scenes. Recent work has approached ...

Excited that this work with @serences.bsky.social and @timbrady.bsky.social is now out! Our Gabor-wavelet model better predicted voxel responses in scene regions than 3D models. Does this mean that scene areas aren’t β€œfor” processing 3D scene structure? NO, we argue. 1/
dx.plos.org/10.1371/jour...

17.02.2026 16:28 πŸ‘ 26 πŸ” 7 πŸ’¬ 1 πŸ“Œ 0
10 PhD positions at JLU Giessen in the new Research Training Group "PIMON"! We will explore how humans perceive and interact with materials and objects in natural environments.
More information on the project, the PIs, and how to apply here:
www.uni-giessen.de/de/ueber-uns...
Please share!

26.02.2026 09:29 πŸ‘ 32 πŸ” 19 πŸ’¬ 1 πŸ“Œ 1

Even if bots may not have been a problem in online behavioural datasets 2-3 years ago, we should still be careful about generalizing between online studies and lab studies. See our study on dataset bias: www.nature.com/articles/s41...

24.02.2026 09:41 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science πŸ“•πŸ‘‡

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!

12.02.2026 22:22 πŸ‘ 52 πŸ” 22 πŸ’¬ 0 πŸ“Œ 1
Our paper is out in @natneuro.nature.com!

www.nature.com/articles/s41...

We develop a geometric theory of how neural populations support generalization across many tasks.

@zuckermanbrain.bsky.social
@flatironinstitute.org
@kempnerinstitute.bsky.social

1/14

10.02.2026 15:56 πŸ‘ 273 πŸ” 100 πŸ’¬ 7 πŸ“Œ 1
MRD: Using Physically Based Differentiable Rendering to Probe Vision Models for 3D Scene Understanding While deep learning methods have achieved impressive success in many vision benchmarks, it remains difficult to understand and explain the representations and decisions of these models. Though vision ...

Happy to announce that I will be giving a remote talk @vssmtg.bsky.social this year. I will be presenting our recent pre-print MRD: Metamers rendered differentiably (arxiv.org/abs/2512.12307).

Happy to take questions before the presentation in May, or live during the online Q&A.

#VSS2026 #VSS26

10.02.2026 12:34 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Check out our latest preprint: In naturalistic (virtual) scenes, we tested how spatial (proximity to anchor) and cognitive (scene semantics) factors influence allocentric representations of local target objects in a memory-guided placement task: πŸ‘‡
osf.io/preprints/ps...

09.02.2026 23:51 πŸ‘ 8 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
πŸ“’ Workshop announcement.

We are super excited to announce the workshop "Perceptual Inferences: from philosophy to neuroscience", organized by Alexander SchΓΌtz and Daniel Kaiser.

πŸ“ Rauischholzhausen Castle, near Marburg, Germany
πŸ—“οΈ June 8 to 10, 2026.
1/4

10.02.2026 09:00 πŸ‘ 42 πŸ” 17 πŸ’¬ 1 πŸ“Œ 1
Figure 11, original image decomposed into shape, light source, shading, and reflectance.

Preparing for class and remembering how brilliant this essay is: persci.mit.edu/pub_pdfs/sha... - "The perception of shading and reflectance" (Adelson & Pentland, 1996). Early and beautiful description of Bayesian perception theories.

10.02.2026 05:35 πŸ‘ 30 πŸ” 7 πŸ’¬ 0 πŸ“Œ 0

Check out our new preprint! πŸŽ‰ We demonstrate that real-world object search is shaped by both objects’ inherent variability and searchers’ individual priors, using a new approach that combines human drawings with DNN representational similarity analysis. ✍️πŸ–₯οΈπŸ‘€

09.02.2026 10:51 πŸ‘ 12 πŸ” 4 πŸ’¬ 0 πŸ“Œ 0
New preprint with @SamJung @timbrady.bsky.social and @violastoermer.bsky.social: osf.io/preprints/ps.... Here we uncover what might be driving the β€œmeaningfulness benefit” in visual working memory. Studies show that real objects are remembered better in VWM tasks than abstract stimuli. But why? 1/

09.02.2026 21:06 πŸ‘ 41 πŸ” 24 πŸ’¬ 1 πŸ“Œ 0
Visual language models show widespread visual deficits on neuropsychological tests - Nature Machine Intelligence Tangtartharakul and Storrs use standardized neuropsychological tests to compare human visual abilities with those of visual language models (VLMs). They report that while VLMs excel in high-level obje...

Our latest paper, β€œVisual language models show widespread visual deficits on neuropsychological tests”, is now out in Nature Machine Intelligence: www.nature.com/articles/s42...

Non-paywalled version:
arxiv.org/abs/2504.10786

Tweet thread below from first author @genetang.bsky.social...

09.02.2026 02:40 πŸ‘ 70 πŸ” 36 πŸ’¬ 1 πŸ“Œ 2
The visual world is composed of objects, and those objects are composed of features. But do VLMs exploit this compositional structure when processing multi-object scenes? In our πŸ†’πŸ†• #ICLR2026 paper, we find they do – via emergent symbolic mechanisms for visual binding. πŸ§΅πŸ‘‡

05.02.2026 20:54 πŸ‘ 83 πŸ” 25 πŸ’¬ 1 πŸ“Œ 3
Learning Abstractions for Hierarchical Planning in Program-Synthesis Agents Humans learn abstractions and use them to plan efficiently to quickly generalize across tasks -- an ability that remains challenging for state-of-the-art large language model (LLM) agents and deep rei...

A new and improved version of TheoryCoder, which learns to play video games in a human-like way by synthesizing both high-level abstractions and a low-level model of game mechanics:
arxiv.org/abs/2602.00929

03.02.2026 10:26 πŸ‘ 40 πŸ” 11 πŸ’¬ 2 πŸ“Œ 1
Graphic titled β€œThe Cognitive Science Society Weekly Update.” The design includes the Society’s logo and a circular arrow with a checkmark, suggesting updates or reminders. The color palette is teal, white, and bright green.

Wait, it's February?!

We're now about 6 months away from welcoming everyone to Rio for #CogSci2026 πŸ‡§πŸ‡·

Here's a quick weekly update with key dates and reminders you don't want to miss πŸ‘‡

01.02.2026 12:50 πŸ‘ 5 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0
High-dimensional structure underlying individual differences in naturalistic visual experience Han and Bonner reveal that individual visual experience arises from high-dimensional neural geometry distributed across multiple representational scales. By characterizing the full dimensional spectru...

Human visual cortex representations may be much higher-dimensional than earlier work suggested, but are these higher dimensions of cortical activity actually relevant to behavior? Our new paper tackles this by studying how different people experience the same movies. 🧡 www.cell.com/current-biol...

30.01.2026 18:52 πŸ‘ 60 πŸ” 16 πŸ’¬ 2 πŸ“Œ 2

This tool is extremely helpful, and I think all scientists could benefit from it. It's useful at all stages of the research process, especially the very beginning.

It's also a great example of how we *should* be building AI tools for science: not to replace scientific thinking, but to hone it.

28.01.2026 18:16 πŸ‘ 12 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0

Finally out in eLife!!
"Early foveal cortex predicts the features of saccade targets through feedback from higher cortical areas."
elifesciences.org/articles/107...

26.01.2026 14:20 πŸ‘ 30 πŸ” 11 πŸ’¬ 0 πŸ“Œ 0

Congratulations! πŸ‘πŸ‘πŸ‘

27.01.2026 10:32 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

With some trepidation, I'm putting this out into the world:
gershmanlab.com/textbook.html
It's a textbook called Computational Foundations of Cognitive Neuroscience, which I wrote for my class.

My hope is that this will be a living document, continuously improved as I get feedback.

09.01.2026 01:27 πŸ‘ 585 πŸ” 237 πŸ’¬ 16 πŸ“Œ 10

🚨🧡 Happy to share my new preprint β€œSensory expectations and prediction error during feedback control in the human brain” with @gribblelab.org, @andpru.bsky.social, @jonathanamichaels.bsky.social and @diedrichsenjorn.bsky.social.

www.biorxiv.org/content/10.6...

21.01.2026 15:49 πŸ‘ 17 πŸ” 7 πŸ’¬ 2 πŸ“Œ 1
"How much of the brain's learned algorithms depend on the fact it is a brain?" arxiv.org/abs/2601.02063 The brain is a neural network, but also a biological organ (unlike artificial neural networks). How much does this matter to cognition?

25.01.2026 09:09 πŸ‘ 38 πŸ” 8 πŸ’¬ 0 πŸ“Œ 1
BBS: Pay Attention to Eye Movement Behavior β€” Anthony Barnhart: Cognitive Scientist, Magic Scholar, & Speaker Hayward Godwin, Michael Hout, and I have written a response to a provocative piece by Ruth Rosenholtz in Behavioral & Brain Sciences where she argues for an abandonment of the concept of β€œattention” as an explanatory tool.

Now available in its entirety, you can read Ruth Rosenholtz's Behavioral & Brain Sciences critique of attention as an explanatory tool in vision science and our reaction to her piece.

www.anthonybarnhart.com/news/bbs-pay...

24.01.2026 20:58 πŸ‘ 14 πŸ” 5 πŸ’¬ 2 πŸ“Œ 0
How the brain predicts objects in a changing world | Radboud University In everyday life, we often encounter objects that are partially hidden or only seen from the corner of our eye. Our brain is remarkably good at keeping track of objects, and new research reveals how t...

New research from @peelen.bsky.social shows that our brain actively predicts how objects should look based on the 3D structure of the environment. Even when objects are temporarily hidden from view, their expected orientation can be decoded from activity in the visual cortex. πŸ‘‡

www.ru.nl/en/donders-i...

23.01.2026 15:09 πŸ‘ 12 πŸ” 8 πŸ’¬ 0 πŸ“Œ 1
Dynamic context–based updating of object representations in the visual cortex Objects are mentally rotated together with the changing viewpoint on a scene, affecting their representation in the visual cortex.

Please check out the paper (science.org/doi/10.1126/...) for more details! All data can be found here: doi.org/10.34973/30x... and analysis code here: github.com/GAldegheri/d...

23.01.2026 15:15 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

These results suggest that changes in viewpoint can automatically evoke expectations of "mentally rotated" objects. This form of mental rotation could interface more directly with visual information in the real world.

23.01.2026 15:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Finally, in our second experiment, we find that we can decode the expected proximal shape of the object from visual cortex even without any visual input (the object doesn't reappear after the occlusion). This is more direct evidence of scene-driven predictions!

23.01.2026 15:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0