🚨 Hiring in Munich 🇩🇪: 2 open-topic PhD positions in human & machine learning (TVöD E13, 80%).
Start ~June 2026 (flexible). Deadline: March 2, 2026.
Apply/info: hcai-munich.com/PhD_Job_Ad.pdf
Reposts appreciated 🙏
Love the paper and this line of work in general! Congrats!
join us!
Interdisciplinary approaches connecting cognitive science and machine learning to study and evaluate metacognition are especially welcome.
sites.google.com/view/metacog...
We are organizing a workshop on Metacognition in Generative AI at @euripsconf.bsky.social in Copenhagen later this year.
Submission deadline for short papers is on October 17th.
Check out our GAC on benchmarks at CCN, happening later today!
What happens when you train AI on psychological experiments? It behaves a lot like a human mind. Here's my story on Centaur, and the debate about what AI has to offer to cognitive science. Gift link nyti.ms/3ZYqXcg 🧪
Huge thanks to the team and collaborators who made this possible.
You can also explore the model via our @hf.co space: huggingface.co/spaces/marce...
More information on the project landing page: marcelbinz.github.io/centaur
We also present a case study showing how Centaur can support scientific discovery.
An updated version of this approach is available in our new preprint: arxiv.org/abs/2505.17661
Centaur can also be adapted to predict secondary measurements like neural activity and response times -- despite never being trained to do so.
We find that Centaur generalizes to unseen experiments and accurately predicts human behavior under modified cover stories, problem structures, and even in entirely novel domains.
Centaur was trained on Psych-101, a new dataset with trial-by-trial data from 160 psychological experiments, containing over 60,000 participants and 10,000,000 choices.
Excited to see our Centaur project out in @nature.com.
TL;DR: Centaur is a computational model that predicts and simulates human behavior for any experiment described in natural language.
New short-form preprint in which we use Centaur to identify gaps in interpretable cognitive models and revise them accordingly using Qwen3 -- fully automated and without a human-in-the-loop.
arxiv.org/abs/2505.17661
Registration for IICCSSS 2025 in Darmstadt is open! 🥳 Sign up now for a week of exciting talks, hands-on projects, and inspiring discussions! www.iiccsss.org/registration/
As always, IICCSSS is free and open to all students who are excited about computational cognitive science 💡🧠
#AIinScience: Ethical & Practical Challenges
🎤 Interview with Dr. Marcel Binz, #HelmholtzMunich, on how Large Language Models are transforming scientific methods & the need to reshape the scientific mindset:
t1p.de/p82vj
@marcelbinz.bsky.social @ericschulz.bsky.social @zeynepakata.bsky.social
Hi Yifei, sadly we don't have any intern positions available right now (and we are in general constrained to hiring interns who are enrolled at German universities).
We are looking for two PhD students at our institute in Munich.
Both positions are open-topic, so anything between cognitive science and machine learning is possible.
More information: hcai-munich.com/PhDHCAI.pdf
Feel free to share broadly!
Ever wondered why only some memories 🧠 come easily? Our latest work (osf.io/preprints/ps...), led by S. Haridi with @ericschulz.bsky.social, shows that targeted memory retrieval speeds up with precise semantic and temporal retrieval cues. Hence, crafting cues can give you instant access to memories ⚡
In previous work we found that VLMs fall short of human visual cognition. To make them better, we fine-tuned them on visual cognition tasks. We find that while this improves performance on the fine-tuning task, it does not lead to models that generalize to other related tasks:
About a month late posting this, but here's a new project with @ericschulz.bsky.social, @akjagadish.bsky.social, @marvinmathony.bsky.social and Tobias Ludwig
We use LLMs to propose cognitive models of learning and decision-making from behavioral data. We're presenting this work at RLDM!
arxiv.org/abs/2502.00879
The German Cognitive Science Society is organizing a PhD symposium in Tuebingen in April.
If you are a PhD student in the vicinity, you should definitely register (by February 28th) -- it will be fun!
cogsciprag.github.io/context-in-c...
In our latest article, published in @pnas.org and led by @marcelbinz.bsky.social and Stephan Alaniz, we got together four diverse groups of scientists to reflect on how LLMs should affect science. From treating them like co-authors to using other tools instead, many interesting arguments emerged.
We are currently building the largest, cross-domain data set of human behavior as part of an open collaborative project. Contributions of any form are welcome, but especially experiments with meta-data from developmental, cross-cultural, or clinical studies.
More details: github.com/marcelbinz/P...
Have we built machines that learn and think like people?
In our new paper, we find that vision large language models still fall short when it comes to cognitive abilities in the domains of causal reasoning, intuitive physics, and theory of mind.
www.nature.com/articles/s42...
and mark in your calendars the following dates & speakers:
David Danks, Jan 7
Dimitri Coelho Mollo, Jan 14
Raphaël Millière, Jan 21
Ben Bergen, Feb 4
David Garcia, Feb 18
Jay McClelland, Mar 4
Chris Summerfield, Mar 18
Marcel Binz, Apr 1
Tom Griffiths, Apr 29
Thomas Icard, May 13
Preprint alert! We examine three exploration tasks, testing whether they measure a stable construct and how it links to real-world exploration. We find improved robustness of latent factors compared to single-task estimates.
With Mirko Thalmann & @ericschulz.bsky.social
https://osf.io/preprints/psyarxiv/tzuey