Neural Search Talks — Zeta Alpha
Zeta Alpha
21 episodes
5 days ago
A monthly podcast where we discuss recent research and developments in the world of Neural Search, LLMs, RAG and Natural Language Processing with our co-hosts Jakub Zavrel (AI veteran and founder at Zeta Alpha) and Dinos Papakostas (AI Researcher at Zeta Alpha).
Technology
Episodes (20/21)
AGI vs ASI: The future of AI-supported decision making with Louis Rosenberg

In this episode of Neural Search Talks, we have invited Louis Rosenberg, CEO of Unanimous.AI, to discuss the future of AI in decision-making, contrasting the development of artificial superintelligence (ASI) with collective human intelligence systems, such as swarm intelligence. In particular, Louis argues that the advancement of AI should focus on amplifying human intelligence rather than replacing it, drawing from the biological inspiration found in nature, where species evolve by connecting individuals into systems that function as a singular intelligent entity, exemplified by schools of fish and swarms of bees. Tune into our conversation to learn more about how AI can assist humans in disseminating knowledge and making better decisions!

Check out the Zeta Alpha Neural Discovery platform: https://zeta-alpha.com
Subscribe to the Zeta Alpha calendar to not miss out on any of our events: https://lu.ma/zeta-alpha

Timestamps:
0:00 Intro by Jakub Zavrel
2:08 Using AI to amplify human intelligence
18:19 How AI and humans learn from each other
26:41 Scaling human collaboration with AI
40:13 Satisfying information needs with AI
45:57 How Unanimous AI connects experts to make better decisions
51:37 Predictions for AI progress in one year
53:21 Outro

10 months ago
54 minutes 42 seconds

EXAONE 3.0: An Expert AI for Everyone (with Hyeongu Yun)

In this episode of Neural Search Talks, we welcome Hyeongu Yun from LG AI Research to discuss the newest addition to the EXAONE Universe: EXAONE 3.0. The model demonstrates strong capabilities in both English and Korean, excelling not only in real-world instruction-following scenarios but also achieving impressive results in math and coding benchmarks. Hyeongu shares the team's approach to the development of this model, revealing key training factors that contributed to its success while also highlighting the challenges they faced along the way. We close this episode off with a look at EXAONE's future, as well as Hyeongu's perspective on the evolving role of AI systems.


Check out the Zeta Alpha Neural Discovery platform. Subscribe to the Zeta Alpha calendar to not miss out on any of our events!

Sources:
- https://lgresearch.ai/blog/view?seq=460
- https://huggingface.co/LGAI-EXAONE/EXAONE-3.0-7.8B-Instruct
- https://arxiv.org/abs/2408.03541

Timestamps:
0:00 Intro by Jakub Zavrel
1:37 The journey of the EXAONE project
4:34 The main challenges in the development of EXAONE 3.0
6:37 The secret to achieving great bilingual performance in English & Korean
7:51 How EXAONE 3.0 stacks up against other open-source models
9:20 The trade-off between instruction-following and reasoning skills
12:32 How will retrieval and generative models evolve in the future
16:36 Open sourcing and user feedback on EXAONE
19:20 The role of synthetic data in model training
20:57 The role of LLMs as evaluators
23:16 Outro

11 months ago
24 minutes 57 seconds

Zeta-Alpha-E5-Mistral: Finetuning LLMs for Retrieval (with Arthur Câmara)

In the 30th episode of Neural Search Talks, we have our very own Arthur Câmara, Senior Research Engineer at Zeta Alpha, presenting a 20-minute guide on how we fine-tune Large Language Models for effective text retrieval. Arthur discusses the common issues with embedding models in a general-purpose RAG pipeline, how to tackle the lack of retrieval-oriented data for fine-tuning with InPars, and how we adapted E5-Mistral to rank in the top 10 on the BEIR benchmark.
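As a rough illustration of what "using an LLM as a retrieval embedding model" looks like in practice, here is a minimal sketch of encoding a query and documents with a decoder-style embedder via Hugging Face Transformers and ranking by cosine similarity. The last-token pooling, the pad-token fallback, and the "Instruct:/Query:" prefix are assumptions based on how E5-Mistral-style models are commonly used, not the exact recipe discussed in the episode; check the model card for the intended usage.

```python
# Minimal sketch: ranking documents with an E5-Mistral-style embedding model.
# Assumes right padding and last-token pooling, as is common for such models.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "zeta-alpha-ai/Zeta-Alpha-E5-Mistral"  # from the episode's links
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.float16)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state            # (batch, seq, dim)
    lengths = batch["attention_mask"].sum(dim=1) - 1          # last non-pad token
    emb = hidden[torch.arange(hidden.size(0)), lengths]       # last-token pooling
    return torch.nn.functional.normalize(emb, dim=-1)

query = "Instruct: Retrieve passages relevant to the question.\nQuery: what is late interaction?"
docs = ["ColBERT scores queries and documents token by token.",
        "BM25 is a classic lexical ranking function."]
print(embed([query]) @ embed(docs).T)                          # cosine similarities
```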
Sources

InPars

  • https://github.com/zetaalphavector/InPars
  • https://dl.acm.org/doi/10.1145/3477495.3531863
  • https://arxiv.org/abs/2301.01820
  • https://arxiv.org/abs/2307.04601


Zeta-Alpha-E5-Mistral

  • https://zeta-alpha.com/post/fine-tuning-an-llm-for-state-of-the-art-retrieval-zeta-alpha-s-top-10-submission-to-the-the-mteb-be
  • https://huggingface.co/zeta-alpha-ai/Zeta-Alpha-E5-Mistral

NanoBEIR

  • https://huggingface.co/collections/zeta-alpha-ai/nanobeir-66e1a0af21dfd93e620cd9f6
11 months ago
19 minutes 35 seconds

ColPali: Document Retrieval with Vision-Language Models only (with Manuel Faysse)

In this episode of Neural Search Talks, we're chatting with Manuel Faysse, a 2nd year PhD student from CentraleSupélec & Illuin Technology, who is the first author of the paper "ColPali: Efficient Document Retrieval with Vision Language Models". ColPali is making waves in the IR community as a simple but effective new take on embedding documents using their image patches and the late-interaction paradigm popularized by ColBERT. Tune in to learn how Manu conceptualized ColPali, his methodology for tackling new research ideas, and why this new approach outperforms all classic multimodal embedding models. A must-watch episode!

Timestamps:
0:00 Introduction with Jakub & Manu
4:09 The "Aha!" moment that led to ColPali
7:06 Challenges that had to be solved
9:16 The main idea behind ColPali
13:20 How ColPali simplifies the IR pipeline
15:54 The ViDoRe benchmark
18:23 Why ColPali is superior to CLIP-based retrievers
20:41 The training setup used for ColPali
24:00 Optimizations to make ColPali more efficient
29:00 How ColPali could work with text-only datasets
31:21 Outro: The next steps for this line of research
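For listeners unfamiliar with the late-interaction scoring that ColPali borrows from ColBERT, here is a small illustrative sketch (ours, not the authors' code): a document is represented by many embeddings, one per image patch in ColPali, and the query-document score sums, over query token embeddings, each token's maximum similarity to any document embedding.

```python
import numpy as np

def late_interaction_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """MaxSim scoring: for each query token embedding, take its best match among
    the document embeddings (image patches in ColPali), then sum those maxima."""
    sim = query_embs @ doc_embs.T            # (n_query_tokens, n_doc_embeddings)
    return float(sim.max(axis=1).sum())      # best match per query token, summed

# Toy example with random unit vectors standing in for model outputs.
rng = np.random.default_rng(0)
q = rng.normal(size=(5, 128));    q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(1024, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(late_interaction_score(q, d))
```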

1 year ago
34 minutes 48 seconds

Using LLMs in Information Retrieval (w/ Ronak Pradeep)

In this episode of Neural Search Talks, we're chatting with Ronak Pradeep, a PhD student from the University of Waterloo, about his experience using LLMs in Information Retrieval, both as a backbone of ranking systems and for their end-to-end evaluation. Ronak analyzes the impact of the advancements in language models on the way we think about IR systems and shares his insights on efficiently integrating them in production pipelines, with techniques such as knowledge distillation.

Timestamps:
0:00 Introduction & the impact of the LLM day in SIGIR 2024
2:11 The perspective of the IR community on LLMs
6:10 Language models as backbones for Information Retrieval
13:49 The feasibility & tricks for using LLMs in production IR pipelines
20:11 Ronak's hidden gems from the SIGIR 2024 programme
21:36 Outro
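The episode mentions knowledge distillation as one way to make LLM-based rankers practical in production. As a rough sketch (our illustration, not Ronak's setup), a small student reranker can be trained to reproduce the scores of a large teacher, e.g. with an MSE loss over query-document scores; the features and teacher scores below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class TinyScorer(nn.Module):
    """Stand-in student reranker: maps a (query, doc) feature vector to a score."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.mlp(features).squeeze(-1)

student = TinyScorer()
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Hypothetical batch: precomputed (query, doc) features and teacher (LLM) scores.
features = torch.randn(32, 768)
teacher_scores = torch.randn(32)

pred = student(features)
loss = nn.functional.mse_loss(pred, teacher_scores)  # distill the teacher's ranking scores
loss.backward()
optimizer.step()
```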

1 year ago
22 minutes 15 seconds

Designing Reliable AI Systems with DSPy (w/ Omar Khattab)

In this episode of Neural Search Talks, we're chatting with Omar Khattab, the author behind popular IR & LLM frameworks like ColBERT and DSPy. Omar describes the current state of using AI models in production systems, highlighting how thinking at the right level of abstraction with the right tools for optimization can deliver reliable solutions that extract the most out of the current generation of models. He also lays out his vision for a future of Artificial Programmable Intelligence (API), rather than jumping on the hype of Artificial General Intelligence (AGI), where the goal would be to build systems that effectively integrate AI, with self-improving mechanisms that allow the developers to focus on the design and the problem, rather than the optimization of the lower-level hyperparameters.

Timestamps:
0:00 Introduction with Omar Khattab
1:14 How to reliably integrate LLMs in production-grade software
12:19 DSPy's philosophy differences from agentic approaches
14:55 Omar's background in IR that helped him pivot to DSPy
25:47 The strengths of DSPy's optimization framework
39:22 How DSPy has reimagined modularity in AI systems
45:45 The future of using AI models for self-improvement
49:41 How open-sourcing a project like DSPy influences its development
52:32 Omar's vision for the future of AI and his research agenda
59:12 Outro

1 year ago
59 minutes 57 seconds

The Power of Noise (w/ Florin Cuconasu)

In this episode of Neural Search Talks, we're chatting with Florin Cuconasu, the first author of the paper "The Power of Noise", presented at SIGIR 2024. We discuss the current state of the field of Retrieval-Augmented Generation (RAG), and how LLMs interact with retrievers to power modern Generative AI applications, with Florin delivering practical advice for those developing RAG systems, and laying out his research agenda for the near future.

Timestamps:
0:00 Introduction & how RAG has taken over the IR literature
1:40 How retrievers and LLMs interact in Retrieval-Augmented Generation
2:55 What practitioners should pay attention to when developing RAG systems
5:04 What is the power of noise in the context of RAG?
7:31 Florin's long-term research agenda on RAG interactions
9:25 How advances in LLMs can impact IR research
11:26 Outro
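As context for the retriever-LLM interaction discussed here, a minimal retrieve-then-generate loop looks roughly like the sketch below; the `search` and `llm` helpers are hypothetical placeholders for a retriever and an LLM, and the paper's "power of noise" finding concerns what happens when the retrieved context also contains irrelevant passages.

```python
from typing import Callable, List

def rag_answer(question: str,
               search: Callable[[str, int], List[str]],
               llm: Callable[[str], str],
               k: int = 5) -> str:
    """Minimal retrieval-augmented generation: retrieve k passages, then prompt."""
    passages = search(question, k)  # may include relevant *and* noisy passages
    context = "\n\n".join(f"[{i+1}] {p}" for i, p in enumerate(passages))
    prompt = ("Answer the question using only the passages below.\n\n"
              f"{context}\n\nQuestion: {question}\nAnswer:")
    return llm(prompt)

# Toy usage with stand-in retriever and LLM callables.
print(rag_answer("what is late interaction?",
                 search=lambda q, k: ["ColBERT compares token embeddings."],
                 llm=lambda p: "It compares query and document token embeddings."))
```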

1 year ago
11 minutes 45 seconds

Benchmarking IR Models (w/ Nandan Thakur)

In this episode of Neural Search Talks, we're chatting with Nandan Thakur about the state of model evaluations in Information Retrieval. Nandan is the first author of the paper that introduced the BEIR benchmark, and since its publication in 2021, we've seen models try to hill-climb on the leaderboard, but also fail to outperform the BM25 baseline in subsets like Touché 2020. Nandan also shares his insights into what the future of benchmarking IR systems might look like, such as the newly announced TREC RAG track this year.
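Since BM25 keeps coming up as the baseline that neural models struggle to beat on some BEIR subsets, here is a tiny sketch of scoring a corpus with BM25 using the `rank_bm25` package; this is an illustrative baseline of our choosing, not the official BEIR evaluation tooling.

```python
from rank_bm25 import BM25Okapi

corpus = [
    "ColBERT scores queries and documents with late interaction.",
    "BM25 is a bag-of-words ranking function based on term frequencies.",
    "Touché 2020 is an argument retrieval task included in BEIR.",
]
tokenized = [doc.lower().split() for doc in corpus]   # trivial whitespace tokenizer
bm25 = BM25Okapi(tokenized)

query = "what is the bm25 ranking function".split()
scores = bm25.get_scores(query)                       # one score per document
best = max(range(len(corpus)), key=lambda i: scores[i])
print(scores, corpus[best])
```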


Timestamps:
0:00 Introduction & the vibe at SIGIR'24
1:19 Nandan's two papers at the conference
2:09 The backstory of the BEIR benchmark
5:55 The shortcomings of BEIR in 2024
8:04 What's up with the Touché 2020 subset of BEIR
11:24 The problem with overfitting on benchmarks
13:09 TREC-RAG: the future of IR benchmarking
17:34 MIRACL & the importance of multilinguality in IR
21:38 Outro

1 year ago
21 minutes 55 seconds

Baking the Future of Information Retrieval Models

In this episode of Neural Search Talks, we're chatting with Aamir Shakir from Mixed Bread AI, who shares his insights on starting a company that aims to make search smarter with AI. He details their approach to overcoming challenges in embedding models, touching on the significance of data diversity, novel loss functions, and the future of multilingual and multimodal capabilities. We also get insights on their journey, the ups and downs, and what they're excited about for the future.


Timestamps:
0:00 Introduction
0:25 How did mixedbread.ai start?
2:16 The story behind the company name and its "bakers"
4:25 What makes Berlin a great pool for AI talent
6:12 Building as a GPU-poor team
7:05 The recipe behind mxbai-embed-large-v1
9:56 The Angle objective for embedding models
15:00 Going beyond Matryoshka with mxbai-embed-2d-large-v1
17:45 Supporting binary embeddings & quantization
19:07 Collecting large-scale data is key for robust embedding models
21:50 The importance of multilingual and multimodal models for IR
24:07 Where will mixedbread.ai be in 12 months?
26:46 Outro
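Two of the topics above, Matryoshka-style dimensionality reduction (15:00) and binary embeddings with quantization (17:45), can be illustrated in a few lines of numpy. This is a generic sketch of the ideas, not mixedbread.ai's implementation: Matryoshka-trained embeddings can simply be truncated to a prefix of their dimensions, and binary quantization keeps only the sign of each dimension and compares vectors with Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 1024))                       # pretend model embeddings
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Matryoshka-style truncation: keep the first 256 dims and renormalize.
small = emb[:, :256]
small /= np.linalg.norm(small, axis=1, keepdims=True)

# Binary quantization: one bit per dimension, compared with Hamming distance.
bits = emb > 0
def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

print(small @ small.T)            # cosine similarities in the truncated space
print(hamming(bits[0], bits[1]))  # distance between the binarized vectors
```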

1 year ago
27 minutes 5 seconds

Hacking JIT Assembly to Build Exascale AI Infrastructure

In this episode of Neural Search Talks, Ash Vardanian, founder of Unum, shares his journey from software development to pioneering work in the AI infrastructure space. He discusses Unum's focus on unleashing the full potential of modern computers for AI, search, and database applications through efficient data processing and infrastructure. Highlighting Unum's technical achievements, including SIMD instructions and just-in-time compilation, Ash also touches on the future of computing and his vision for Unum to contribute to advances in personalized medicine and extending human productivity.


Timestamps:
0:00 Introduction
0:44 How did Unum start and what is it about?
6:12 Differentiating from the competition in vector search
17:45 Supporting modern features like large dimensions & binary embeddings
27:49 Upcoming model releases from Unum
30:00 The future of hardware for AI
34:56 The impact of AI in society
37:35 Outro

1 year ago
38 minutes 4 seconds

The Promise of Language Models for Search: Generative Information Retrieval

In this episode of Neural Search Talks, Andrew Yates (Assistant Prof at the University of Amsterdam), Sergi Castella (Analyst at Zeta Alpha), and Gabriel Bénédict (PhD student at the University of Amsterdam) discuss the prospect of using GPT-like models as a replacement for conventional search engines.

Generative Information Retrieval (Gen IR) SIGIR Workshop

  • Workshop organized by Gabriel Bénédict, Ruqing Zhang, and Donald Metzler https://coda.io/@sigir/gen-ir
  • Resources on Gen IR: https://github.com/gabriben/awesome-generative-information-retrieval

References

  • Rethinking Search: https://arxiv.org/abs/2105.02274
  • Survey on Augmented Language Models: https://arxiv.org/abs/2302.07842
  • Differentiable Search Index: https://arxiv.org/abs/2202.06991
  • Recommender Systems with Generative Retrieval: https://shashankrajput.github.io/Generative.pdf


Timestamps:
00:00 Introduction, ChatGPT Plugins
02:01 ChatGPT plugins, LangChain
04:37 What is even Information Retrieval?
06:14 Index-centric vs. model-centric Retrieval
12:22 Generative Information Retrieval (Gen IR)
21:34 Gen IR emerging applications
24:19 How Retrieval Augmented LMs incorporate external knowledge
29:19 What is hallucination?
35:04 Factuality and Faithfulness
41:04 Evaluating generation of Language Models
47:44 Do we even need to "measure" performance?
54:07 How would you evaluate Bing's Sydney?
57:22 Will language models take over commercial search?
1:01:44 NLP academic research in the times of GPT-4
1:06:59 Outro

1 year ago
1 hour 7 minutes 31 seconds

Task-aware Retrieval with Instructions

Andrew Yates (Assistant Prof at University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the paper "Task-aware Retrieval with Instructions" by Akari Asai et al. This paper proposes to augment a collection of existing retrieval and NLP datasets with natural language instructions, forming BERRI (Bank of Explicit RetRieval Instructions), and to use it to train TART (Multi-task Instructed Retriever).

📄 Paper: https://arxiv.org/abs/2211.09260

🍻 BEIR benchmark: https://arxiv.org/abs/2104.08663

📈 LOTTE (Long-Tail Topic-stratified Evaluation, introduced in ColBERT v2): https://arxiv.org/abs/2112.01488

Timestamps: 

00:00 Intro: "Task-aware Retrieval with Instructions"

02:20 BERRI, TART, X^2 evaluation

04:00 Background: recent works in domain adaptation

06:50 Instruction Tuning

08:50 Retrieval with descriptions

11:30 Retrieval with instructions

17:28 BERRI, Bank of Explicit RetRieval Instructions

21:48 Repurposing NLP tasks as retrieval tasks

23:53 Negative document selection

27:47 TART, Multi-task Instructed Retriever

31:50 Evaluation: Zero-shot and X^2 evaluation

39:20 Results on Table 3 (BEIR, LOTTE)

50:30 Results on Table 4 (X^2-Retrieval)

55:50 Ablations

57:17 Discussion: user modeling, future work, scale

2 years ago
1 hour 11 minutes 13 seconds

Generating Training Data with Large Language Models w/ Special Guest Marzieh Fadaee

Marzieh Fadaee — NLP Research Lead at Zeta Alpha — joins Andrew Yates and Sergi Castella to chat about her work in using large Language Models like GPT-3 to generate domain-specific training data for retrieval models with little-to-no human input. The two papers discussed are "InPars: Data Augmentation for Information Retrieval using Large Language Models" and "Promptagator: Few-shot Dense Retrieval From 8 Examples".
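To make the idea concrete, InPars-style data generation roughly amounts to prompting a large LM with a few document-question examples and asking it for a question that a target document answers; the generated pairs are then filtered before training a retriever. The sketch below is our illustration with a hypothetical `llm` callable and made-up few-shot examples, not the papers' exact prompts or filtering procedure.

```python
from typing import Callable, List, Tuple

FEW_SHOT: List[Tuple[str, str]] = [
    ("The Amazon rainforest produces roughly 20% of Earth's oxygen.",
     "how much oxygen does the amazon rainforest produce"),
    ("BM25 ranks documents using term frequency and inverse document frequency.",
     "how does bm25 rank documents"),
]

def build_prompt(document: str) -> str:
    """Few-shot prompt asking the LM to write a question the document answers."""
    shots = "\n\n".join(f"Document: {d}\nQuestion: {q}" for d, q in FEW_SHOT)
    return f"{shots}\n\nDocument: {document}\nQuestion:"

def generate_training_pair(document: str, llm: Callable[[str], str]) -> Tuple[str, str]:
    """`llm` is a hypothetical completion function (e.g., a GPT-3-style endpoint)."""
    question = llm(build_prompt(document)).strip()
    return question, document   # (synthetic query, positive passage) for retriever training

# Toy usage with a stand-in LLM.
q, d = generate_training_pair("Dense retrievers map text into vectors.",
                              llm=lambda prompt: "what do dense retrievers do")
print(q, "->", d)
```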

InPars: https://arxiv.org/abs/2202.05144

Promptagator: https://arxiv.org/abs/2209.11755


Timestamps:

00:00 Introduction

02:00 Background and journey of Marzieh Fadaee

03:10 Challenges of leveraging Large LMs in Information Retrieval

05:20 InPars, motivation and method

14:30 Vanilla vs GBQ prompting

24:40 Evaluation and Benchmark

26:30 Baselines

27:40 Main results and takeaways (Table 1, InPars)

35:40 Ablations: prompting, in-domain vs. MSMARCO input documents

40:40 Promptagator overview and main differences with InPars

48:40 Retriever training and filtering in Promptagator

54:37 Main Results (Table 2, Promptagator)

1:02:30 Ablations on consistency filtering (Figure 2, Promptagator)

1:07:39 Is this the magic black-box pipeline for neural retrieval on any documents

1:11:14 Limitations of using LMs for synthetic data

1:13:00 Future directions for this line of research


2 years ago
1 hour 16 minutes 14 seconds

ColBERT + ColBERTv2: late interaction at a reasonable inference cost

Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella (Analyst at Zeta Alpha) discuss the two influential papers introducing ColBERT (from 2020) and ColBERTv2 (from 2022), which mainly propose a fast late-interaction operation that achieves performance close to full cross-encoders at a much more manageable computational cost at inference, along with many other optimizations.
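One of ColBERTv2's additions covered later in the episode (29:34 indexing improvements, 33:59 clustering compression) is index compression: roughly, each stored token embedding is approximated by its nearest centroid plus a coarsely quantized residual. Below is a numpy sketch of that idea under our own simplifications (random data instead of k-means centroids, uniform 1-bit residual quantization); it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
centroids = rng.normal(size=(256, 128))        # toy centroids (real ones come from k-means)
token_embs = rng.normal(size=(2000, 128))      # token embeddings to be indexed

# Compress: nearest centroid id + sign of the residual (1 bit per dimension).
dists = ((token_embs ** 2).sum(1, keepdims=True)
         - 2 * token_embs @ centroids.T
         + (centroids ** 2).sum(1))             # squared distances to every centroid
codes = dists.argmin(axis=1)                    # centroid id per embedding
residual_bits = (token_embs - centroids[codes]) > 0
residual_scale = np.abs(token_embs - centroids[codes]).mean()

# Decompress: centroid plus a fixed-magnitude residual in the stored direction.
approx = centroids[codes] + residual_scale * np.where(residual_bits, 1.0, -1.0)
print("mean reconstruction error:",
      float(np.linalg.norm(token_embs - approx, axis=1).mean()))
```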


📄 ColBERT: "ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT" by Omar Khattab and Matei Zaharia. https://arxiv.org/abs/2004.12832

📄 ColBERTv2: "ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction" by Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. https://arxiv.org/abs/2112.01488

📄 PLAID: "An Efficient Engine for Late Interaction Retrieval" by Keshav Santhanam, Omar Khattab, Christopher Potts, and Matei Zaharia. https://arxiv.org/abs/2205.09707

📄 CEDR: "CEDR: Contextualized Embeddings for Document Ranking" by Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. https://arxiv.org/abs/1904.07094


🪃 Feedback form: https://scastella.typeform.com/to/rg7a5GfJ


Timestamps:

00:00 Introduction

00:42 Why ColBERT?

03:34 Retrieval paradigms recap

08:04 ColBERT query formulation and architecture

09:04 Using ColBERT as a reranker or as an end-to-end retriever

11:28 Space Footprint vs. MRR on MS MARCO

12:24 Methodology: datasets and negative sampling

14:37 Terminology for cross encoders, interaction-based models, etc.

16:12 Results (ColBERT v1) on MS MARCO

18:41 Ablations on model components

20:34 Max pooling vs. mean pooling

22:54 Why did ColBERT have a big impact?

26:31 ColBERTv2: knowledge distillation

29:34 ColBERTv2: indexing improvements

33:59 Effects of clustering compression in performance

35:19 Results (ColBERT v2): MS MARCO

38:54 Results (ColBERT v2): BEIR

41:27 Takeaway: especially strong in out-of-domain evaluation

43:59 Qualitatively, what do ColBERT scores look like?

46:21 What's the most promising of all current neural IR paradigms

49:34 How come there's still so much interest in Dense retrieval?

51:09 Many to many similarity at different granularities

53:44 What would ColBERT v3 include?

56:39 PLAID: An Efficient Engine for Late Interaction Retrieval


Contact: castella@zeta-alpha.com

3 years ago
57 minutes 30 seconds

Evaluating Extrapolation Performance of Dense Retrieval: How does DR compare to cross encoders when it comes to generalization?

How much do the training and test sets in TREC or MS MARCO overlap? Can we evaluate on different splits of the data to isolate extrapolation performance?

In this episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castella i Sapé discuss the paper "Evaluating Extrapolation Performance of Dense Retrieval" by Jingtao Zhan, Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma.
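The resampling idea discussed in the episode (23:29) can be illustrated with a toy sketch: rather than splitting queries at random, group them by an annotation such as target entity (or query intent) and keep whole groups on one side, so test-time queries never share an entity with training queries. This is our simplified illustration, not the paper's exact procedure.

```python
import random
from collections import defaultdict
from typing import Dict, List, Tuple

def entity_disjoint_split(queries: List[Tuple[str, str]],
                          test_fraction: float = 0.2,
                          seed: int = 0) -> Tuple[List[str], List[str]]:
    """Split (query, entity) pairs so no entity appears in both train and test."""
    by_entity: Dict[str, List[str]] = defaultdict(list)
    for query, entity in queries:
        by_entity[entity].append(query)

    entities = sorted(by_entity)
    random.Random(seed).shuffle(entities)
    n_test = max(1, int(len(entities) * test_fraction))
    test_entities = set(entities[:n_test])

    train = [q for e, qs in by_entity.items() if e not in test_entities for q in qs]
    test = [q for e, qs in by_entity.items() if e in test_entities for q in qs]
    return train, test

train, test = entity_disjoint_split([
    ("who founded tesla", "Tesla"), ("tesla model 3 range", "Tesla"),
    ("when was bert published", "BERT"), ("bert base parameters", "BERT"),
])
print(len(train), "train queries,", len(test), "test queries")
```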


📄 Paper: https://arxiv.org/abs/2204.11447

❓ About MS Marco: https://microsoft.github.io/msmarco/

❓About TREC: https://trec.nist.gov/

🪃 Feedback form: https://scastella.typeform.com/to/rg7a5GfJ  


Timestamps: 

00:00 Introduction 

01:08 Evaluation in Information Retrieval, why is it exciting 

07:40 Extrapolation Performance in Dense Retrieval 

10:30 Learning in High Dimension Always Amounts to Extrapolation 

11:40 3 Research questions 

16:18 Defining Train-Test label overlap: entity and query intent overlap 

21:00 Train-test Overlap in existing benchmarks TREC 

23:29 Resampling evaluation methods: constructing distinct train-test sets 

25:37 Baselines and results: ColBERT, SPLADE

29:36 Table 6: interpolation vs. extrapolation performance in TREC 

33:06 Table 7: interpolation vs. extrapolation in MS Marco 

35:55 Table 8: Comparing different DR training approaches 

40:00 Research Question 1 resolved: cross encoders are more robust than dense retrieval in extrapolation 

42:00 Extrapolation and Domain Transfer: BEIR benchmark. 

44:46 Figure 2: correlation between extrapolation performance and domain transfer performance 

48:35 Broad strokes takeaways from this work 

52:30 Is there any intuition behind the results where Dense Retrieval generalizes worse than Cross Encoders? 

56:14 Will this have an impact on the IR benchmarking culture? 

57:40 Outro   


Contact: castella@zeta-alpha.com

3 years ago
58 minutes 30 seconds

Open Pre-Trained Transformer Language Models (OPT): What does it take to train GPT-3?

Andrew Yates (Assistant Professor at the University of Amsterdam) and Sergi Castella i Sapé discuss the recent "Open Pre-trained Transformer (OPT) Language Models" from Meta AI (formerly Facebook). In this replication work, Meta developed and trained a 175-billion-parameter Transformer very similar to GPT-3 from OpenAI, documenting the process in detail to share their findings with the community. The code, pretrained weights, and logbook are available on their GitHub repository (links below).
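The pre-training objective and teacher forcing mentioned in the episode (08:16) boil down to next-token prediction: the model always sees the ground-truth prefix and is trained with cross-entropy against the sequence shifted by one position. A minimal, generic PyTorch sketch of that objective (not Meta's metaseq code, and tiny by design):

```python
import torch
import torch.nn as nn

vocab_size, dim = 50_000, 512

class TinyCausalLM(nn.Module):
    """Toy decoder-only LM standing in for a GPT/OPT-style Transformer."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        seq_len = tokens.size(1)
        # Causal mask so each position only attends to earlier positions.
        causal_mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        hidden = self.blocks(self.embed(tokens), mask=causal_mask)
        return self.lm_head(hidden)

model = TinyCausalLM()
tokens = torch.randint(0, vocab_size, (4, 128))   # a batch of token ids

# Teacher forcing: input is the true sequence; the target is the same sequence
# shifted one position to the left (predict the next token at every position).
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab_size),
                                   tokens[:, 1:].reshape(-1))
loss.backward()
print(float(loss))
```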

Links 

❓Feedback Form: https://scastella.typeform.com/to/rg7a5GfJ

📄 OPT paper: https://arxiv.org/abs/2205.01068

👾 Code: https://github.com/facebookresearch/metaseq

📒 Logbook: https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/chronicles/OPT175B_Logbook.pdf

✍️ OPT Official Blog Post: https://ai.facebook.com/blog/democratizing-access-to-large-scale-language-models-with-opt-175b/  

OpenAI Embeddings API: https://openai.com/blog/introducing-text-and-code-embeddings/

Nils Reimers' critique of OpenAI Embeddings API: https://medium.com/@nils_reimers/openai-gpt-3-text-embeddings-really-a-new-state-of-the-art-in-dense-text-embeddings-6571fe3ec9d9 


Timestamps: 

00:00 Introduction and housekeeping: new feedback form, ACL conference highlights 

02:42 The convergence between NLP and Neural IR techniques 

06:43 Open Pretrained Transformer motivation and scope, reproducing GPT-3 and open-sourcing 

08:16 Basics of OPT: architecture, pre-training objective, teacher forcing, tokenizer, training data 

13:40 Preliminary experiments findings: hyperparameters, training stability, spikiness 

20:08 Problems that appear at scale when training with 992 GPUs

23:01 Using temperature to check whether GPUs are working

25:00 Training the largest model: what to do when the loss explodes? (which happens quite often)

29:15 When they switched away from AdamW to SGD

32:00 Results: successful but not quite GPT-3 level. Toxicity?

35:45 Replicability of Large Language Models research. Was GPT-3 replicable? What difference does it make?

37:25 What makes a paper replicable?

40:33 Directions in which large Language Models are applied to Information Retrieval

45:15 Final thoughts and takeaways

3 years ago
47 minutes 12 seconds

Few-Shot Conversational Dense Retrieval (ConvDR) w/ special guest Antonios Krasakis

We discuss Conversational Search with our usual co-hosts Andrew Yates and Sergi Castella i Sapé, along with a special guest, Antonios Minas Krasakis, PhD candidate at the University of Amsterdam.

We center our discussion around the ConvDR paper, "Few-Shot Conversational Dense Retrieval" by Shi Yu et al., which was the first work to perform Conversational Search without an explicit conversation-to-query rewriting step.
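In rough terms, the teacher-student setup discussed at 19:12 trains a conversational query encoder (student) to produce, from the full dialogue history, the same embedding that a strong ad-hoc encoder such as ANCE (teacher) produces for a manually rewritten standalone query, so no rewriting is needed at inference time. The sketch below is our simplified illustration with hypothetical encoder modules, not the paper's code.

```python
import torch
import torch.nn as nn

def convdr_distillation_step(student_encoder: nn.Module,
                             teacher_encoder: nn.Module,
                             conversation: torch.Tensor,      # token ids of dialogue history + current turn
                             rewritten_query: torch.Tensor,   # token ids of the human-rewritten query
                             optimizer: torch.optim.Optimizer) -> float:
    """One knowledge-distillation step: make the student's embedding of the raw
    conversation mimic the frozen teacher's embedding of the rewritten query."""
    with torch.no_grad():
        target = teacher_encoder(rewritten_query)             # teacher (e.g., ANCE) stays frozen
    student_emb = student_encoder(conversation)
    loss = nn.functional.mse_loss(student_emb, target)        # embedding-level KD loss
    # (ConvDR also combines this with a ranking loss over passages; omitted here.)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```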

Timestamps:

00:00 Introduction

00:50 Conversational AI and Conversational Search

05:40 What makes Conversational Search challenging

07:00 ConvDR paper introduction

10:10 Passage representations

11:30 Conversation representations: query rewriting

19:12 ConvDR novel proposed method: teacher-student setup with ANCE

22:50 Datasets and benchmarks: CAsT, CANARD

25:32 Teacher-student advantages and knowledge distillation vs. ranking loss functions

28:09 TREC CAsT and OR-QuAC

35:50 Metrics: MRR, NDCG, holes@10

44:16 Main Results on CAsT and OR-QuAC (Table 2)

57:35 Ablations on combinations of loss functions (Table 4)

1:00:10 How fast is ConvDR? (Table 3)

1:02:40 Qualitative analysis on ConvDR embeddings (Figure 4)

1:04:50 How has this work aged? More recent works in similar directions: Contextualized Query Embeddings for Conversational Search.

1:07:02 Is "end-to-end" the silver-bullet for Conversational Search?

1:10:04 Will conversational search become more mainstream?

1:18:44 Latest initiatives for Conversational Search


3 years ago
1 hour 23 minutes 11 seconds

Transformer Memory as a Differentiable Search Index: memorizing thousands of random doc ids works!?

Andrew Yates and Sergi Castella discuss the paper titled "Transformer Memory as a Differentiable Search Index" by Yi Tay et al. at Google. This work proposes a new approach to document retrieval in which document ids are memorized by a transformer during training (or "indexing"); for retrieval, a query is fed to the model, which then autoregressively generates relevant doc ids for that query.
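Concretely, the DSI recipe can be sketched with any seq2seq model: "indexing" means fine-tuning the model to map a document's text to its docid string, and retrieval means generating docid strings from a query, with beam search producing a ranked list. Below is a rough sketch using a small T5 via Hugging Face Transformers; it compresses the paper's setup heavily (e.g., it omits training on query-to-docid pairs and constrained decoding) and is only meant to show the shape of the approach.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

corpus = {"doc_017": "ColBERT introduces late interaction over BERT embeddings.",
          "doc_042": "BM25 is a classic lexical ranking function."}

# "Indexing": teach the model to emit the docid string given the document text.
for docid, text in corpus.items():
    inputs = tokenizer(f"index: {text}", return_tensors="pt")
    labels = tokenizer(docid, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Retrieval: autoregressively generate candidate docids for a query; beam search
# yields a ranked list (the paper additionally constrains decoding to valid ids).
query = tokenizer("query: what is late interaction?", return_tensors="pt")
outputs = model.generate(**query, num_beams=4, num_return_sequences=4, max_new_tokens=8)
print([tokenizer.decode(o, skip_special_tokens=True) for o in outputs])
```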

Paper: https://arxiv.org/abs/2202.06991

Timestamps:

00:00 Intro: Transformer memory as a Differentiable Search Index (DSI)

01:15 The gist of the paper, motivation

4:20 Related work: Autoregressive Entity Linking

7:38 What is an index? Conventional vs. "differentiable"

10:20 Indexing and Retrieval definitions in the context of the DSI

12:40 Learning representations for documents

17:20 How to represent document ids: atomic, string, semantically relevant

22:00 Zero-shot vs. finetuned settings

24:10 Datasets and baselines

27:08 Finetuned results

36:40 Zero-shot results

43:50 Ablation results

47:15 Where could this model be used?

52:00 Is memory efficiency a fundamental problem of this approach?

55:14 What about semantically relevant doc ids?

1:00:30 Closing remarks 


Contact: castella@zeta-alpha.com

3 years ago
1 hour 1 minute 40 seconds

Learning to Retrieve Passages without Supervision: finally unsupervised Neural IR?

In this third episode of the Neural Information Retrieval Talks podcast, Andrew Yates and Sergi Castella discuss the paper "Learning to Retrieve Passages without Supervision" by Ori Ram et al.  

Despite the massive advances in Neural Information Retrieval in the past few years, statistical models still outperform neural models when no annotations are available at all. This paper proposes a new self-supervised pretraining task for Dense Information Retrieval that manages to beat BM25 on some benchmarks without using any labels.
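For the contrastive-learning and negative-sampling background covered in the episode (08:30 onward), a dense bi-encoder is typically trained with an InfoNCE-style loss over in-batch negatives: each query's positive passage is the target and the other passages in the batch act as negatives. The sketch below illustrates that standard recipe, not the paper's specific self-supervised task for constructing the pseudo query-passage pairs.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_embs: torch.Tensor,
                              passage_embs: torch.Tensor,
                              temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE with in-batch negatives: row i of `query_embs` should match row i
    of `passage_embs`; every other passage in the batch acts as a negative."""
    query_embs = F.normalize(query_embs, dim=-1)
    passage_embs = F.normalize(passage_embs, dim=-1)
    logits = query_embs @ passage_embs.T / temperature   # (batch, batch) similarities
    targets = torch.arange(logits.size(0))               # the diagonal holds the positives
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings standing in for bi-encoder outputs.
q = torch.randn(16, 768, requires_grad=True)
p = torch.randn(16, 768)
loss = in_batch_contrastive_loss(q, p)
loss.backward()
print(float(loss))
```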

Paper: https://arxiv.org/abs/2112.07708 

Timestamps:

00:00 Introduction

00:36 "Learning to Retrieve Passages Without Supervision"

02:20 Open Domain Question Answering

05:05 Related work: Families of Retrieval Models

08:30 Contrastive Learning

11:18 Siamese Networks, Bi-Encoders and Dual-Encoders

13:33 Choosing Negative Samples

17:46 Self supervision: how to train IR models without labels.

21:31 The modern recipe for SOTA Retrieval Models

23:50 Methodology: a new proposed self supervision task

26:40 Datasets, metrics and baselines

33:50 Results: Zero-Shot performance

43:07 Results: Few-shot performance

47:15 Practically, is not using labels relevant after all?

51:37 How would you "break" the Spider model?

53:23 How long until Neural IR models outperform BM25 out-of-the-box robustly?

54:50 Models as a service: OpenAI's text embeddings API


Contact: castella@zeta-alpha.com

3 years ago
59 minutes 10 seconds

The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes

We discuss the Information Retrieval publication "The Curse of Dense Low-Dimensional Information Retrieval for Large Index Sizes" by Nils Reimers and Iryna Gurevych, which explores how Dense Passage Retrieval performance degrades as the index size varies and how it compares to traditional sparse or keyword-based methods.
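The core phenomenon, dense retrieval producing more false positives as the index grows, is easy to emulate on synthetic data: fix a query and one relevant document, keep adding random distractor vectors to the index, and check whether the relevant document stays in the top-k. The toy numpy sketch below is our illustration, not the paper's experimental setup (which grows the MS MARCO index and adds random strings).

```python
import numpy as np

rng = np.random.default_rng(0)
dim, k = 32, 10                                       # a deliberately low-dimensional space

query = rng.normal(size=dim)
relevant = query + rng.normal(size=dim)               # a noisy copy stands in for the relevant doc

for index_size in [1_000, 10_000, 100_000, 500_000]:
    distractors = rng.normal(size=(index_size, dim))  # random documents flooding the index
    index = np.vstack([relevant, distractors])
    index /= np.linalg.norm(index, axis=1, keepdims=True)
    scores = index @ (query / np.linalg.norm(query))
    rank = int((scores > scores[0]).sum()) + 1        # rank of the relevant doc (row 0)
    print(f"index size {index_size:>7}: relevant doc ranked {rank} (in top-{k}: {rank <= k})")
```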


Timestamps:

00:00 Co-host introduction

00:26 Paper introduction

02:18 Dense vs. Sparse retrieval

05:46 Theoretical analysis of false positives (1)

08:17 What are low vs. high dimensional representations?

11:49 Theoretical analysis of false positives (2)

20:10 First results: growing the MS-Marco index

28:35 Adding random strings to the index

39:17 Discussion, takeaways

44:26 Will dense retrieval replace or coexist with sparse methods?

50:50 Sparse, Dense and Attentional Representations for Text Retrieval


Referenced work:

Sparse, Dense and Attentional Representations for Text Retrieval by Yi Luan et al. 2020. 


3 years ago
54 minutes 13 seconds
