Neural Search Talks [4] — Transformer Memory as a Differentiable Search Index

In this episode of Neural Search Talks, Andrew Yates and Sergi Castella discuss the paper "Transformer Memory as a Differentiable Search Index" by Yi Tay et al. at Google.


This work proposes a new approach to document retrieval in which a transformer memorizes document ids during training (the "indexing" step); at retrieval time, a query is fed to the model, which autoregressively generates the relevant doc ids for that query.
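To make the indexing/retrieval split concrete, here is a minimal, hypothetical sketch of the DSI idea (not the authors' code), assuming a HuggingFace T5 checkpoint and doc ids represented as strings; names like doc_text and doc_id are placeholders, not taken from the paper.

```python
# A rough sketch of DSI-style indexing and retrieval with a seq2seq transformer.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# "Indexing": train the model to map document text to its doc id string
# (in practice this loss is optimized over the whole corpus).
doc_text = "Transformers are a neural network architecture ..."  # placeholder document
doc_id = "42"                                                     # placeholder doc id string
inputs = tokenizer(doc_text, return_tensors="pt", truncation=True)
labels = tokenizer(doc_id, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss
loss.backward()

# "Retrieval": feed a query and autoregressively decode doc ids;
# beam search gives a ranked list of candidate ids.
query = tokenizer("what are transformers?", return_tensors="pt")
generated = model.generate(**query, num_beams=10, num_return_sequences=10, max_length=20)
ranked_doc_ids = tokenizer.batch_decode(generated, skip_special_tokens=True)
```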



Paper: https://arxiv.org/abs/2202.06991

Listen on other platforms: https://anchor.fm/neural-ir-talks/episodes/Transformer-Memory-as-a-Differentiable-Search-Index-memorizing-thousands-of-random-doc-ids-works-e1g4b4v


Timestamps:

00:00 Intro: Transformer memory as a Differentiable Search Index (DSI)

01:15 The gist of the paper, motivation

04:20 Related work: Autoregressive Entity Linking

07:38 What is an index? Conventional vs. "differentiable"

10:20 Indexing and Retrieval definitions in the context of the DSI

12:40 Learning representations for documents

17:20 How to represent document ids: atomic, string, semantically relevant

22:00 Zero-shot vs. finetuned settings

24:10 Datasets and baselines

27:08 Finetuned results

36:40 Zero-shot results

43:50 Ablation results

47:15 Where could this model be used?

52:00 Is memory efficiency a fundamental problem of this approach?

55:14 What about semantically relevant doc ids?

1:00:30 Closing remarks

