QRRanker: Query-focused and Memory-aware Reranker for Long Context Processing

A lightweight reranking framework that leverages Query-focused Retrieval (QR) heads to produce continuous relevance scores, enabling effective listwise reranking with small-scale models.

Yuqing Li1,2,*, Jiangnan Li3,*, Mo Yu3,*, Guoxuan Ding1,2, Zheng Lin1,2,✉, Weiping Wang1, Jie Zhou3

* Equal contribution.    ✉ Corresponding author.

1Institute of Information Engineering, Chinese Academy of Sciences   2School of Cyber Security, UCAS
3Pattern Recognition Center, WeChat AI, Tencent Inc

Highlights: 2.9B used parameters · 57.03 overall F1 on LoCoMo · recall results on DetectiveQA and MuSiQue (Figure 1).

Figure 1. Retrieval and reranking recall performance on DetectiveQA (average of Chinese and English datasets) and MuSiQue. QRRanker consistently improves recall across different retrieval stages.

Framework Overview

Figure 2. Overview of QRRanker. The highlighted attention heads are QR heads used for document scoring. QRRanker can incorporate memory enhancement to capture contextual information, enabling effective retrieval for narratives and dialogues.

Abstract

Building on existing analyses of retrieval heads in large language models, we propose an alternative reranking framework that trains models to estimate passage–query relevance from the attention scores of selected heads. This approach provides a listwise solution that leverages the holistic information within the entire candidate shortlist during ranking. At the same time, it naturally produces continuous relevance scores, enabling training on arbitrary retrieval datasets without requiring Likert-scale supervision.

Our framework is lightweight and effective, requiring only small-scale models (e.g., 4B parameters) to achieve strong performance. Extensive experiments demonstrate that our method outperforms existing state-of-the-art pointwise and listwise rerankers across multiple domains, including Wikipedia and long narrative datasets. It further establishes a new state-of-the-art on the LoCoMo benchmark that assesses the capabilities of dialogue understanding and memory usage.

We further demonstrate that our framework supports flexible extensions. For example, augmenting candidate passages with contextual information further improves ranking accuracy, while training attention heads from middle layers enhances efficiency without sacrificing performance.

Method

Key Idea

Building upon the discovery of Query-focused Retrieval (QR) heads (Zhang et al., 2025)—attention heads in LLMs whose attention patterns naturally focus on query-relevant passages—we train these heads with a contrastive ranking objective. This allows us to directly use their attention scores as relevance signals for reranking, providing a listwise solution that benefits from holistic information across the entire candidate shortlist.
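One simple instantiation of such a contrastive ranking objective is a listwise softmax cross-entropy over the attention-derived candidate scores. The sketch below is illustrative; the function name and exact loss form are assumptions, not the paper's implementation:

```python
import math

def listwise_contrastive_loss(scores, gold_idx, tau=1.0):
    """Listwise softmax cross-entropy over one candidate shortlist.

    scores:   attention-derived relevance score per candidate passage.
    gold_idx: index of the gold (relevant) passage in the shortlist.
    tau:      temperature controlling how peaked the distribution is.
    """
    # Softmax over the whole shortlist, so every candidate competes
    # against every other one (the "group-wise" part).
    exps = [math.exp(s / tau) for s in scores]
    z = sum(exps)
    # Negative log-likelihood of the gold passage.
    return -math.log(exps[gold_idx] / z)
```

Because the loss normalizes over the full shortlist, raising the gold passage's score relative to any competitor lowers the loss, which is what lets the selected heads be trained directly as rankers.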

Our approach consists of the following steps:

  1. QR-Head Identification: Compute QR scores on seed data to identify attention heads that naturally attend to relevant documents.
  2. Listwise Training: Construct training instances with top-50 retrieved candidates and optimize using a group-wise contrastive loss.
  3. Attention-based Scoring: During inference, aggregate attention weights from QR heads to produce continuous relevance scores without generation.
  4. Memory Enhancement (Optional): Prepend contextual summaries to capture broader context for long narrative understanding.
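Step 3 above can be sketched as follows, assuming attention maps are extracted after a single forward pass over the concatenated shortlist (the data layout and helper name are illustrative assumptions): sum the attention mass that query tokens place on each candidate's token span, across the selected QR heads.

```python
def qr_relevance_scores(attn, qr_heads, query_span, doc_spans):
    """Aggregate QR-head attention into one relevance score per passage.

    attn:       attn[head][i][j] = attention from token i to token j
                (e.g. nested lists extracted from a forward pass).
    qr_heads:   indices of the selected QR heads.
    query_span: (start, end) token range of the query.
    doc_spans:  list of (start, end) token ranges, one per candidate.
    """
    q_start, q_end = query_span
    scores = []
    for d_start, d_end in doc_spans:
        mass = 0.0
        for h in qr_heads:
            for i in range(q_start, q_end):
                # Attention mass flowing from query tokens into this passage.
                mass += sum(attn[h][i][d_start:d_end])
        scores.append(mass)
    return scores
```

Because the scores come from attention weights rather than generated tokens, the whole shortlist is ranked in one pass with no decoding step.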
QR Score Computation

Figure 3. The retrieval score and QR score are computed based on the attention scores of QR attention heads. In this example, Doc2 is the gold document.
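In the spirit of Figure 3, a head's QR score can be estimated on seed data as the fraction of its query-to-candidate attention mass that lands on the gold document, keeping the heads with the highest average fraction. The helper below is an illustrative sketch under that assumption, not the paper's exact selection rule:

```python
def select_qr_heads(gold_mass_fractions, top_k=8):
    """Pick the heads whose attention most consistently hits gold documents.

    gold_mass_fractions: {head_id: [fraction of attention mass on the
                          gold document, one value per seed example]}.
    top_k:               number of QR heads to keep.
    """
    mean_score = {
        head: sum(fracs) / len(fracs)
        for head, fracs in gold_mass_fractions.items()
    }
    # Heads ranked by how much of their attention lands on gold documents.
    return sorted(mean_score, key=mean_score.get, reverse=True)[:top_k]
```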

Main Results

We evaluate QRRanker on diverse benchmarks including Wikipedia QA (MuSiQue, HotpotQA), story understanding (NarrativeQA, DetectiveQA), and long-term dialogue memory (LoCoMo).

Retrieval Performance

| Method | MuSiQue R@5 | MuSiQue R@10 | HotpotQA R@5 | HotpotQA R@10 | NarrativeQA R@5 | NarrativeQA R@10 | DetectiveQA R@5 | DetectiveQA R@10 |
|---|---|---|---|---|---|---|---|---|
| Qwen3-Embedding-8B | 62.55 | 72.47 | 89.05 | 95.15 | 20.92 | 32.39 | 20.00 | 31.17 |
| Qwen-Reranker-4B | 66.37 | 74.26 | 94.15 | 96.75 | 28.25 | 41.98 | 30.50 | 42.09 |
| GroupRank-32B | 65.08 | 73.07 | 90.60 | 94.50 | 33.76 | 48.83 | 39.21 | 51.38 |
| QRHeads-4B (out-of-box) | 71.22 | 78.99 | 94.80 | 96.90 | 33.44 | 48.89 | 32.89 | 45.58 |
| QRRanker-4B (Ours) | 77.37 | 82.13 | 96.90 | 97.70 | 38.89 | 54.93 | 41.32 | 53.76 |

Table 1. Retrieval performance measured by Recall@k. QRRanker achieves the best results across all datasets.

| Method | Tokens | Single-hop | Multi-hop | Temporal | Open-domain | Overall F1 |
|---|---|---|---|---|---|---|
| Qwen3-Embedding-8B | 846 | 47.95 | 35.24 | 41.36 | 24.79 | 42.81 |
| A-Mem | 2,712 | 44.65 | 27.02 | 45.85 | 12.14 | 39.65 |
| Mem0 | 1,764 | 47.65 | 38.72 | 48.93 | 28.64 | 45.09 |
| TiMem | 511 | – | – | – | – | 54.40 |
| Membox | 2,166 | 60.09 | 39.88 | 58.03 | 27.96 | 53.10 |
| CompassMem | 20,000 | 57.36 | 38.84 | 57.96 | 26.61 | 52.18 |
| QRRanker (Ours) | 854 | 62.95 | 43.06 | 61.90 | 29.79 | 57.03 |

Table 2. Results on LoCoMo benchmark for long-term dialogue memory understanding. QRRanker achieves the best performance with a compact token budget.

Citation

@misc{li2026queryfocusedmemoryawarererankerlong,
      title={Query-focused and Memory-aware Reranker for Long Context Processing}, 
      author={Yuqing Li and Jiangnan Li and Mo Yu and Guoxuan Ding and Zheng Lin and Weiping Wang and Jie Zhou},
      year={2026},
      eprint={2602.12192},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2602.12192}, 
}

Work in Progress — This project is under active development.