Reward function compression facilitates goal-dependent reinforcement learning
Reinforcement learning agents learn from rewards, but humans can uniquely assign value to novel, abstract outcomes in a goal-dependent manner. However, this flexibility is cognitively costly, making l...
New preprint!
How do humans learn from arbitrary, abstract goals? We show that, when goal spaces can be compressed, costly working-memory processes give way to internalized reward functions, enabling efficient goal-dependent reinforcement learning. @annecollins.bsky.social arxiv.org/abs/2509.06810
09.09.2025 01:58
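The post's core idea is that, instead of relying on an external reward signal, an agent can compute its own reward from a (compressed) goal representation and then learn with ordinary reinforcement learning. As a minimal sketch (not the paper's actual task or model), assume a hypothetical contextual-bandit setting where a goal is drawn each trial and the agent's internalized reward function simply checks whether the chosen outcome matches the goal:

```python
import random

def goal_reward(outcome, goal):
    # Internalized reward function: the reward is computed from the
    # goal representation, not delivered by the environment.
    return 1.0 if outcome == goal else 0.0

def q_learning(goals, actions, episodes=2000, alpha=0.1, eps=0.1, seed=0):
    """Goal-conditioned tabular Q-learning on a toy contextual bandit.

    All names and the task itself are illustrative assumptions, not
    the paradigm used in the preprint.
    """
    rng = random.Random(seed)
    # One value per (goal, action) pair.
    Q = {(g, a): 0.0 for g in goals for a in actions}
    for _ in range(episodes):
        g = rng.choice(goals)  # a new goal is specified each trial
        if rng.random() < eps:
            a = rng.choice(actions)  # epsilon-greedy exploration
        else:
            a = max(actions, key=lambda x: Q[(g, x)])
        r = goal_reward(a, g)  # self-generated, goal-dependent reward
        Q[(g, a)] += alpha * (r - Q[(g, a)])  # bandit-style update
    return Q
```

Because the reward function is a cheap internal computation, the agent can learn a separate policy per goal without consulting the goal in working memory on every evaluation; here each goal's greedy action converges to the matching outcome.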
- Works with tractable & intractable models
- Handles continuous & discrete latent spaces
- Applicable to real-world datasets.
29.08.2025 21:10
We tested our method on various cognitive models, including reinforcement learning models, Bayesian models, and GLM-HMM. A collective effort with @drjingjing.bsky.social @wdt.bsky.social @annecollins.bsky.social
29.08.2025 21:10