
Ningyu Xu

@ningyuxu

PhD student @FudanUniv | Artificial Intelligence | Cognitive Science | Computational Linguistics. https://ningyuxu.github.io/

11 Followers · 37 Following · 11 Posts · Joined 17.01.2025

Latest posts by Ningyu Xu @ningyuxu

Finally, thanks to the reviewers and the PNAS editors for their constructive and thoughtful feedback, and for helping shape the final version of the paper into what it is today.

11/11

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Revealing emergent human-like conceptual representations from language prediction | PNAS People acquire concepts through rich physical and social experiences and use them to understand and navigate the world. In contrast, large language...

📄 Paper: doi.org/10.1073/pnas...

🔨 Code and data: github.com/ningyuxu/llm...

Work by Ningyu Xu @ningyuxu.bsky.social, Qi Zhang, Chao Du, Qiang Luo, Xipeng Qiu, Xuanjing Huang, and Menghan Zhang.

10/N

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We also find notable divergences from human behavioral and neural patterns: LLM-derived concepts remain limited in capturing visually grounded perceptual features, pointing to future directions for improving human–machine alignment.

9/N

01.11.2025 10:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our work suggests that LLMs offer a tractable window into human conceptual representation, providing resources for future research on the nature of human concepts.
It also opens new pathways for probing the mechanisms underlying LLMs' intelligent behaviors.

8/N

01.11.2025 10:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

These findings demonstrate that critical aspects of human concepts are learnable purely from language prediction. Rather than relying on real-world grounding, LLMs organize concepts through meaningful interrelationships preserved across contexts.

7/N

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The LLM-derived conceptual representations also align closely with neural activity patterns in the human brain even when people view visual (not textual) stimuli, exhibiting biological plausibility.

6/N

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Moreover, these representations effectively capture human behavioral judgments across key psychological phenomena, including similarity, categorization, and graded scales along features. They substantially surpass traditional embeddings derived from individual words.
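Comparing model-derived representations to human similarity judgments is commonly done via representational similarity analysis. A minimal illustrative sketch (toy numbers, not the paper's data or code): build pairwise dissimilarities from concept vectors and rank-correlate them with human judgments.

```python
# Illustrative RSA-style comparison (toy data, not the paper's code):
# correlate model-derived concept dissimilarities with human judgments.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Toy concept vectors (one row per concept); two tight clusters.
model_vectors = np.array([
    [1.0, 0.1, 0.0],
    [0.9, 0.2, 0.1],
    [0.0, 1.0, 0.9],
    [0.1, 0.9, 1.0],
])

# Toy human dissimilarity ratings for the same concept pairs,
# in the same condensed (pdist) pair order.
human_dissim = np.array([0.1, 0.9, 0.95, 0.85, 0.9, 0.05])

# Condensed pairwise cosine distances between model vectors.
model_dissim = pdist(model_vectors, metric="cosine")

# Rank correlation between the two dissimilarity structures.
rho, _ = spearmanr(model_dissim, human_dissim)
print(f"RSA Spearman rho: {rho:.2f}")
```

A high rank correlation indicates the model's relational structure mirrors the human one, which is the kind of alignment the post describes.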

5/N

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We find that LLMs can flexibly derive concepts from linguistic descriptions across varying contexts. The derived representations converge toward a shared, context-independent structure, which predicts model performance across various understanding and reasoning tasks.

4/N

01.11.2025 10:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We propose to derive conceptual representations from LLMs through an in-context concept inference task: the reverse dictionary task. This task simulates the process by which people identify a concept from its definition or description.
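The in-context setup can be sketched as a few-shot prompt that maps definitions to words; the prompt format and example definitions below are illustrative assumptions, not the paper's exact stimuli. The model's hidden state when predicting the final word can then be read out as that concept's representation.

```python
# Hedged sketch of a reverse dictionary prompt (illustrative format;
# the demonstrations and separator are assumptions, not the paper's).

def build_reverse_dictionary_prompt(demos, query_definition):
    """Format (definition -> word) demonstrations, then a query
    whose answer word the model must infer in context."""
    lines = [f"{definition} => {word}" for definition, word in demos]
    lines.append(f"{query_definition} =>")
    return "\n".join(lines)

demos = [
    ("a domesticated animal that barks", "dog"),
    ("a large body of salt water", "sea"),
]
prompt = build_reverse_dictionary_prompt(demos, "a yellow curved fruit")
print(prompt)
```

Feeding such a prompt to an LLM and taking its representation at the answer position yields one concept vector per query definition.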

3/N

01.11.2025 10:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

TL;DR: We show that LLMs develop conceptual representations that capture key aspects of human concepts. These representations are organized through meaningful, stable relationships, which reliably predict the models' understanding and reasoning performance. 🤖🧠

2/N

01.11.2025 10:16 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our paper is now out in PNAS!

Are LLMs developing human-like concepts that are central to human cognition? If so, how are such concepts represented, organized, and related to behavior?

doi.org/10.1073/pnas...

1/N

01.11.2025 10:16 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0