The proxy reward derived from this setting satisfies our conditions; we will include empirical results showing the improvement from learning with this proxy reward in the camera-ready version.
n/n
Beyond the example given, many natural frameworks satisfy our conditions. For example, the increased temperature of a tempered softmax introduces bias into learned reward functions.
9/n
4. If the expert judges two symptoms as similar, the trainee must also judge them as similar, up to a relaxation constant L; here similarity is measured by a metric between the distributions the policies map symptoms to. Our sample-complexity improvement depends on L.
8/n
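One way to write condition 4 formally, as a hedged sketch (the symbols π_E for the expert/true policy, π_T for the trainee/proxy policy, and d for a metric on output distributions are my notation, not the thread's):

```latex
% Illustrative sketch only; \pi_E, \pi_T, d and L's placement are assumptions.
% "Expert judges S_1, S_2 similar => trainee judges them similar, up to L":
d\big(\pi_T(\cdot \mid S_1),\, \pi_T(\cdot \mid S_2)\big)
  \;\le\; L \cdot d\big(\pi_E(\cdot \mid S_1),\, \pi_E(\cdot \mid S_2)\big).
```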
3. There exists a low-dimensional encoding of the image of the proxy policy satisfying some smoothness conditions; this is a standard assumption in most machine learning tasks.
7/n
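A hedged formalisation of condition 3 (the symbols π_T for the proxy policy, φ for the encoding, and k for the latent dimension are my own illustrative notation, and Lipschitz-smoothness is just one example of a smoothness condition):

```latex
% Illustrative sketch only; \pi_T, \varphi, k are assumed symbols.
% A low-dimensional, smooth encoding of the proxy policy's image:
\exists\, \varphi : \mathrm{Im}(\pi_T) \to \mathbb{R}^k,\quad
k \text{ small},\quad \varphi \text{ (e.g.) Lipschitz-smooth}.
```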
Crucially, it is not necessary that D1 = D2.
2. The proxy's image must contain that of the true policy. This essentially means that every disease D diagnosable by the expert can also be diagnosed by the trainee, though the trainee may map the wrong symptom to a given D.
6/n
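Condition 2 can be stated in one line, as a hedged sketch (π_E for the expert/true policy and π_T for the trainee/proxy policy are my assumed symbols):

```latex
% Illustrative sketch only; \pi_E (true/expert), \pi_T (proxy/trainee)
% are assumed symbols. Image containment:
\mathrm{Im}(\pi_E) \subseteq \mathrm{Im}(\pi_T).
```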
1. The proxy and true policies must share level sets: if trainee doctors act as proxies for expert doctors, then whenever the expert judges two distinct symptoms S1, S2 to indicate the same disease D1, the trainee must also judge S1, S2 to indicate the same disease D2.
5/n
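A hedged formalisation of the shared-level-set condition (π_E for the expert/true policy and π_T for the trainee/proxy policy are my assumed symbols, not the paper's):

```latex
% Illustrative sketch only; \pi_E, \pi_T are assumed symbols.
% Shared level sets: same expert diagnosis implies same trainee diagnosis.
\pi_E(S_1) = \pi_E(S_2) \;\Longrightarrow\; \pi_T(S_1) = \pi_T(S_2).
```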
Our work is the first to provide a theoretical foundation for using cheap but noisy rewards in preference learning for large generative models.
What do our technical conditions essentially say?
4/n
Crucially, we prove that under our conditions the true policy is a low-dimensional adaptation of the proxy policy. This yields a significant sample-complexity improvement, which we prove formally using PAC theory.
3/n
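To illustrate why "true policy = low-dimensional adaptation of the proxy" saves samples, here is a hedged toy sketch (not the paper's algorithm): a frozen proxy maps inputs to a few features, and only a tiny adapter on top of it is fit from labelled data. The functions `proxy_policy` and `fit_adapter` are hypothetical names I introduce for illustration.

```python
# Hedged toy sketch, not the paper's method: if the true policy is a
# low-dimensional adaptation of a frozen proxy, we only need to fit a
# small adapter, so very few labelled samples suffice.

def proxy_policy(s):
    # Frozen, pre-trained proxy: maps input s to a 2-d feature vector.
    return [s, s * s]

def fit_adapter(samples, targets):
    # Fit w minimising sum_i (w . proxy(s_i) - y_i)^2 via the 2x2
    # normal equations, solved with Cramer's rule.
    feats = [proxy_policy(s) for s in samples]
    a = sum(f[0] * f[0] for f in feats)
    b = sum(f[0] * f[1] for f in feats)
    c = sum(f[1] * f[1] for f in feats)
    p = sum(f[0] * y for f, y in zip(feats, targets))
    q = sum(f[1] * y for f, y in zip(feats, targets))
    det = a * c - b * b
    return [(c * p - b * q) / det, (a * q - b * p) / det]

# "True" policy observed only through 3 labelled samples; by construction
# it is the linear adaptation true(s) = 3*s - 1*s^2 of the proxy features.
samples = [1.0, 2.0, 3.0]
targets = [3 * s - s * s for s in samples]
w = fit_adapter(samples, targets)
# w recovers the adaptation weights [3.0, -1.0] from just 3 samples.
```

The point of the sketch: the adapter has 2 parameters regardless of how complex the proxy is, which is the mechanism behind the sample-complexity gain.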
Our work establishes sufficient conditions under which proxy rewards can improve the learning of the underlying true policy in preference-learning algorithms.
2/n
The arXiv version is already online:
arxiv.org/abs/2412.16475
New work! When Can Proxies Improve the Sample Complexity of Preference Learning? Our paper is accepted at
@icmlconf.bsky.social 2025. Fantastic joint work with @spectral.space, Zhengyan Shi, @meng-yue-yang.bsky.social, @neuralnoise.com, Matt Kusner, @alexdamour.bsky.social.
1/n
1. neurips.cc/virtual/2024...
2. neurips.cc/virtual/2024...
Come talk to me about Causal Abstraction and LLM Theory/Alignment! I'm at #NeurIPS2024 presenting
1. Structured Learning of Compositional Sequential Interventions (Thu 11am-2pm, West Ballroom A-D #5002)
2. Unsupervised Causal Abstraction
(Sunday, CRL workshop)