This work was devised at the ECIR collab-a-thon last year, and we hope to continue discussions at this year's collab-a-thon in Lucca! Read more here: arxiv.org/abs/2502.20937 #ECIR2025 #SIGIR2025
We argue that a human annotator represents an upper bound on performance for a subjective task such as determining relevance, since each topic defines only a single intent. We find that systems are either indistinguishable from human annotators or exceed them as oracle rankers.
We then look downstream: what effect does re-annotation have on modern systems? We find that comparisons between modern systems on DL'19 are increasingly unstable, with the pair-wise ordering of systems changing even when their measured nDCG values are far apart.
We look into the causes of disagreement and find that subtle differences in query intent, even when relevance is well defined, can lead to greater disagreement under 4-grade relevance. However, we find that agreeing on what is relevant is challenging even under a fixed narrative.
Re-annotation is commonly performed to validate how variations in relevance judgements affect our ability to discriminate between retrieval systems. We validate prior hypotheses on stability, but in a modern setting: no narratives, 4-grade relevance, and a neural pool.
🚨 New Pre-Print! 🚨 Reviewer 2 has once again asked for DL'19. What can you say in rebuttal? To help, we have re-annotated DL'19. Work done with @maik_froebe.bsky.social, @hscells.bsky.social, @fschlatt1.bsky.social, Guglielmo Faggioli, Saber Zerhoudi, @macavaney.bsky.social, Eugene Yang 🧵