# QRRanker
A lightweight reranking framework that leverages Query-focused Retrieval (QR) heads to produce continuous relevance scores, enabling effective listwise reranking with small-scale models.
Building on prior analyses of retrieval heads in large language models, we propose an alternative reranking framework that trains a model to estimate passage–query relevance from the attention scores of selected heads. This yields a listwise solution that exploits the holistic information in the entire candidate shortlist during ranking. At the same time, it naturally produces continuous relevance scores, enabling training on arbitrary retrieval datasets without requiring Likert-scale supervision.
Our framework is lightweight and effective, requiring only small-scale models (e.g., 4B parameters) to achieve strong performance. Extensive experiments show that our method outperforms state-of-the-art pointwise and listwise rerankers across multiple domains, including Wikipedia and long narrative datasets. It also establishes a new state of the art on LoCoMo, a benchmark that assesses dialogue understanding and long-term memory usage.
Our framework also supports flexible extensions. For example, augmenting candidate passages with contextual information improves ranking accuracy further, while training attention heads from middle layers improves efficiency without sacrificing performance (see the sketch below).
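The middle-layer variant can be approximated with off-the-shelf tooling: in a causal decoder, attention at layer k does not depend on any layer above k, so the blocks above the QR layer can simply be dropped before the forward pass. A minimal sketch, assuming a Llama/Qwen-style Hugging Face checkpoint whose decoder blocks live in `model.model.layers`; the checkpoint name and layer index are placeholders, not the paper's released configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen3-4B"   # placeholder backbone; substitute the model you actually use
QR_LAYER = 18             # hypothetical middle layer hosting the trained QR heads

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(
    MODEL, torch_dtype=torch.bfloat16, attn_implementation="eager"
)

# Drop every block above the QR layer: later layers never influence the attention
# computed at or below it, so this cuts roughly half of the forward-pass compute.
model.model.layers = model.model.layers[: QR_LAYER + 1]

inputs = tok("query and candidate passages ...", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)
attn = out.attentions[QR_LAYER][0]  # (num_heads, seq_len, seq_len) at the QR layer
```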
Building on the discovery of Query-focused Retrieval (QR) heads (Zhang et al., 2025), attention heads in LLMs whose attention patterns naturally concentrate on query-relevant passages, we train these heads with a contrastive ranking objective. Their attention scores can then be used directly as relevance signals for listwise reranking over the entire candidate shortlist.
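The exact training loss is not spelled out in this README; a common instantiation of a contrastive ranking objective is listwise InfoNCE over the per-candidate scores, where the gold passage must outscore its shortlist peers. A minimal sketch under that assumption (the function name, tensor shapes, and temperature value are ours):

```python
import torch
import torch.nn.functional as F

def listwise_contrastive_loss(scores: torch.Tensor, gold: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE-style objective: treat the shortlist as one softmax and push the
    gold candidate's score above the rest. scores: (batch, num_candidates)."""
    return F.cross_entropy(scores / temperature, gold)

# Toy usage: 2 queries with 4 candidates each; gold candidates are 1 and 0.
scores = torch.randn(2, 4, requires_grad=True)
loss = listwise_contrastive_loss(scores, torch.tensor([1, 0]))
loss.backward()  # gradients flow back into the parameters behind the QR-head scores
```

Because the target is a ranking rather than a Likert label, any retrieval dataset with gold-passage annotations suffices for training.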
Our approach consists of the following steps:

1. **Identify QR heads.** Locate the attention heads whose attention patterns most reliably concentrate on query-relevant passages.
2. **Train with a contrastive ranking objective.** Fine-tune the selected heads so that their attention scores separate the gold passage from distractors in the shortlist.
3. **Rerank by attention.** At inference time, aggregate the trained heads' attention scores into one continuous relevance score per candidate and sort the shortlist.
Figure 3. The retrieval score and QR score are computed based on the attention scores of QR attention heads. In this example, Doc2 is the gold document.
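To make the scoring step concrete, below is a minimal sketch of turning one layer's attention map into per-candidate relevance scores, in the spirit of Figure 3. The function name, span conventions, and the aggregation (mean over heads and query tokens) are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def qr_relevance_scores(
    attn: torch.Tensor,                 # (num_heads, seq_len, seq_len) attention at one layer
    qr_heads: list[int],                # indices of the selected QR heads
    query_span: tuple[int, int],        # [start, end) token positions of the query
    doc_spans: list[tuple[int, int]],   # [start, end) token positions of each candidate
) -> torch.Tensor:
    """Score each candidate by the attention mass that query tokens place on its
    tokens, averaged over the selected QR heads."""
    qs, qe = query_span
    q_attn = attn[qr_heads, qs:qe, :]             # (|heads|, |query|, seq_len)
    scores = []
    for ds, de in doc_spans:
        mass = q_attn[:, :, ds:de].sum(dim=-1)    # mass on this candidate, per head/query token
        scores.append(mass.mean())                # average over heads and query tokens
    return torch.stack(scores)                    # one continuous score per candidate

# Toy usage with random attention; in practice, take attentions from the QR layer.
attn = torch.rand(32, 128, 128).softmax(dim=-1)
scores = qr_relevance_scores(attn, qr_heads=[3, 7, 19],
                             query_span=(120, 128),
                             doc_spans=[(0, 40), (40, 80), (80, 120)])
ranking = scores.argsort(descending=True)         # listwise reranking of the shortlist
```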
We evaluate QRRanker on diverse benchmarks including Wikipedia QA (MuSiQue, HotpotQA), story understanding (NarrativeQA, DetectiveQA), and long-term dialogue memory (LoCoMo).
| Method | MuSiQue R@5 | MuSiQue R@10 | HotpotQA R@5 | HotpotQA R@10 | NarrativeQA R@5 | NarrativeQA R@10 | DetectiveQA R@5 | DetectiveQA R@10 |
|---|---|---|---|---|---|---|---|---|
| Qwen3-Embedding-8B | 62.55 | 72.47 | 89.05 | 95.15 | 20.92 | 32.39 | 20.00 | 31.17 |
| Qwen-Reranker-4B | 66.37 | 74.26 | 94.15 | 96.75 | 28.25 | 41.98 | 30.50 | 42.09 |
| GroupRank-32B | 65.08 | 73.07 | 90.60 | 94.50 | 33.76 | 48.83 | 39.21 | 51.38 |
| QRHeads-4B (out-of-the-box) | 71.22 | 78.99 | 94.80 | 96.90 | 33.44 | 48.89 | 32.89 | 45.58 |
| QRRanker-4B (Ours) | 77.37 | 82.13 | 96.90 | 97.70 | 38.89 | 54.93 | 41.32 | 53.76 |
Table 1. Retrieval performance measured by Recall@k. QRRanker achieves the best results across all datasets.
| Method | Tokens | Single-hop | Multi-hop | Temporal | Open-domain | Overall F1 |
|---|---|---|---|---|---|---|
| Qwen3-Embedding-8B | 846 | 47.95 | 35.24 | 41.36 | 24.79 | 42.81 |
| A-Mem | 2,712 | 44.65 | 27.02 | 45.85 | 12.14 | 39.65 |
| Mem0 | 1,764 | 47.65 | 38.72 | 48.93 | 28.64 | 45.09 |
| TiMem | 511 | – | – | – | – | 54.40 |
| Membox | 2,166 | 60.09 | 39.88 | 58.03 | 27.96 | 53.10 |
| CompassMem | 20,000 | 57.36 | 38.84 | 57.96 | 26.61 | 52.18 |
| QRRanker (Ours) | 854 | 62.95 | 43.06 | 61.90 | 29.79 | 57.03 |
Table 2. Results on LoCoMo benchmark for long-term dialogue memory understanding. QRRanker achieves the best performance with a compact token budget.
```bibtex
@misc{li2026queryfocusedmemoryawarererankerlong,
      title={Query-focused and Memory-aware Reranker for Long Context Processing},
      author={Yuqing Li and Jiangnan Li and Mo Yu and Guoxuan Ding and Zheng Lin and Weiping Wang and Jie Zhou},
      year={2026},
      eprint={2602.12192},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.12192},
}
```
Work in Progress: this project is under active development.