Deep Papers
Arize AI
57 episodes
3 days ago
Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper co-authored by OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang. Read the paper: Sign up for future AI research paper readings and author office hours. See LLM hallucination examples here for context. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
Mathematics, Technology, Business, Science
All content for Deep Papers is the property of Arize AI and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/57)
Deep Papers
Georgia Tech's Santosh Vempala Explains Why Language Models Hallucinate, His Research With OpenAI
Santosh Vempala, Frederick Storey II Chair of Computing and Distinguished Professor in the School of Computer Science at Georgia Tech, explains his paper co-authored by OpenAI's Adam Tauman Kalai, Ofir Nachum, and Edwin Zhang. Read the paper: Sign up for future AI research paper readings and author office hours. See LLM hallucination examples here for context. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on LinkedIn and X.
3 days ago
31 minutes

Deep Papers
Atropos Health’s Arjun Mukerji, PhD, Explains RWESummary: A Framework and Test for Choosing LLMs to Summarize Real-World Evidence (RWE) Studies
Large language models are increasingly used to turn complex study output into plain-English summaries. But how do we know which models are safest and most reliable for healthcare? In this most recent community AI research paper reading, Arjun Mukerji, PhD – Staff Data Scientist at Atropos Health – walks us through RWESummary, a new benchmark designed to evaluate LLMs on summarizing real-world evidence from structured study output — an important but often under-tested scenario compared t...
3 weeks ago
26 minutes

Deep Papers
Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon Walks Us Through His New Paper
This episode dives into "Category-Theoretic Analysis of Inter-Agent Communication and Mutual Understanding Metric in Recursive Consciousness." The paper presents an extension of the Recursive Consciousness framework to analyze communication between agents and the inevitable loss of meaning in translation. We're thrilled to feature the paper's author, Stan Miasnikov, Distinguished Engineer, AI/ML Architecture, Consumer Experience at Verizon, to walk us through the research and its implications...
1 month ago
48 minutes

Deep Papers
Small Language Models are the Future of Agentic AI
We had the privilege of hosting Peter Belcak – an AI Researcher working on the reliability and efficiency of agentic systems at NVIDIA – who walked us through his new paper making the rounds in AI circles titled “Small Language Models are the Future of Agentic AI.” The paper posits that small language models (SLMs) are sufficiently powerful, inherently more suitable, and necessarily more economical for many invocations in agentic systems, and are therefore the future of agentic AI. The autho...
1 month ago
31 minutes

Deep Papers
Watermarking for LLMs and Image Models
In this AI research paper reading, we dive into "A Watermark for Large Language Models" with the paper's author John Kirchenbauer. This paper is a timely exploration of techniques for embedding invisible but detectable signals in AI-generated text. These watermarking strategies aim to help mitigate misuse of large language models by making machine-generated content distinguishable from human writing, without sacrificing text quality or requiring access to the model’s internals. Learn mo...
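For context on the technique discussed in the episode, here is a minimal sketch of the red/green-list detection idea in Python. It is not the paper's implementation: the vocabulary size, the GAMMA split, and the use of random.Random as a stand-in for a keyed hash are simplifications for illustration.
```python
import math
import random

VOCAB_SIZE = 50_000   # assumed vocabulary size, for illustration only
GAMMA = 0.5           # fraction of the vocabulary placed on the "green list"

def green_list(prev_token: int) -> set[int]:
    """Pseudo-randomly partition the vocabulary, seeded by the previous token."""
    rng = random.Random(prev_token)           # stand-in for a keyed hash
    return set(rng.sample(range(VOCAB_SIZE), int(GAMMA * VOCAB_SIZE)))

def detect(token_ids: list[int]) -> float:
    """Return a z-score: how far the observed green-token count sits above the
    GAMMA * T expected for unwatermarked text. During generation, the watermark
    boosts the logits of green-list tokens so watermarked text scores high here."""
    hits = sum(1 for prev, tok in zip(token_ids, token_ids[1:]) if tok in green_list(prev))
    t = len(token_ids) - 1
    return (hits - GAMMA * t) / math.sqrt(t * GAMMA * (1 - GAMMA))

# Usage: a z-score of roughly 4 or more is strong evidence the text was watermarked.
```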
2 months ago
42 minutes

Deep Papers
Self-Adapting Language Models: Paper Authors Discuss Implications
The authors of the new paper *Self-Adapting Language Models (SEAL)* shared a behind-the-scenes look at their work, motivations, results, and future directions. The paper introduces a novel method for enabling large language models (LLMs) to adapt their own weights using self-generated data and training directives — “self-edits.” Learn more about the Self-Adapting Language Models paper. Learn more about AI observability and evaluation, join the Arize AI Slack community or get the latest on Lin...
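As a rough illustration of the self-edit loop described above, the sketch below shows one way the idea could be wired together. The generate, finetune_lora, and evaluate helpers are hypothetical placeholders, not part of the SEAL code.
```python
# Hypothetical helpers: generate(), finetune_lora(), and evaluate() are placeholders.
def self_adapt(model, task_context, eval_set, rounds: int = 3):
    for _ in range(rounds):
        # 1. The model writes its own training material (a "self-edit"):
        #    restated facts, synthetic Q/A pairs, or augmentation directives.
        self_edit = generate(model, prompt=f"Produce training data for:\n{task_context}")

        # 2. Apply a lightweight weight update using that self-generated data.
        candidate = finetune_lora(model, data=self_edit)

        # 3. Keep the update only if held-out performance improves -- the signal
        #    SEAL uses as a reward for learning to produce better self-edits.
        if evaluate(candidate, eval_set) > evaluate(model, eval_set):
            model = candidate
    return model
```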
3 months ago
31 minutes

Deep Papers
The Illusion of Thinking: What the Apple AI Paper Says About LLM Reasoning
This week we discuss The Illusion of Thinking, a new paper from researchers at Apple that challenges today’s evaluation methods and introduces a new benchmark: synthetic puzzles with controllable complexity and clean logic. Their findings? Large Reasoning Models (LRMs) show surprising failure modes, including a complete collapse on high-complexity tasks and a decline in reasoning effort as problems get harder. Dylan and Parth dive into the paper's findings as well as the debate around ...
3 months ago
30 minutes

Deep Papers
Accurate KV Cache Quantization with Outlier Tokens Tracing
Join us as we discuss Accurate KV Cache Quantization with Outlier Tokens Tracing, a deep dive into improving the efficiency of LLM inference. The authors enhance KV Cache quantization, a technique for reducing memory and compute costs during inference, by introducing a method to identify and exclude outlier tokens that hurt quantization accuracy, striking a better balance between efficiency and performance. Paper: https://arxiv.org/abs/2505.10938 Slides: https://bit.ly/45wolpr Join us for Ar...
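A simplified sketch of the core idea, per-token quantization that traces outlier tokens and keeps them in full precision, is shown below. The symmetric int8 scheme and the z-score outlier rule are illustrative choices, not the authors' exact method.
```python
import numpy as np

def quantize_kv(kv: np.ndarray, outlier_z: float = 3.0):
    """kv: (num_tokens, head_dim) slice of the KV cache.
    Quantize most tokens to int8, but trace outlier tokens and keep them in
    full precision, since they disproportionately hurt quantization accuracy."""
    norms = np.abs(kv).max(axis=1)
    outliers = norms > norms.mean() + outlier_z * norms.std()   # traced tokens

    scales = np.abs(kv).max(axis=1, keepdims=True) / 127.0 + 1e-8
    q = np.clip(np.round(kv / scales), -127, 127).astype(np.int8)
    return q, scales, outliers, kv[outliers]                    # fp copies of outliers

def dequantize_kv(q, scales, outliers, fp_rows):
    kv = q.astype(np.float32) * scales
    kv[outliers] = fp_rows                                      # restore traced outliers
    return kv
```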
4 months ago
25 minutes

Deep Papers
Scalable Chain of Thoughts via Elastic Reasoning
In this week's episode, we talk about Elastic Reasoning, a novel framework designed to enhance the efficiency and scalability of large reasoning models by explicitly separating the reasoning process into two distinct phases: thinking and solution. This separation allows for independent allocation of computational budgets, addressing challenges related to uncontrolled output lengths in real-world deployments with strict resource constraints. Our discussion explores how Elastic Reasoning ...
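The sketch below illustrates the separated thinking/solution budgets in the simplest possible form; the llm.generate helper, the <think> tags, and the budget values are assumptions for illustration rather than the paper's setup.
```python
# Hypothetical llm.generate(prompt, max_tokens, stop) helper; budgets are illustrative.
THINK_BUDGET = 1024    # tokens allowed for the thinking phase
SOLUTION_BUDGET = 256  # tokens reserved for the final answer

def elastic_generate(llm, problem: str) -> str:
    # Phase 1: thinking, capped independently of the answer budget.
    thinking = llm.generate(
        prompt=f"{problem}\n<think>",
        max_tokens=THINK_BUDGET,
        stop=["</think>"],
    )
    # Phase 2: even if thinking was cut off mid-stream, close the block and
    # make the model commit to a solution within its own budget.
    return llm.generate(
        prompt=f"{problem}\n<think>{thinking}</think>\nAnswer:",
        max_tokens=SOLUTION_BUDGET,
    )
```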
5 months ago
28 minutes

Deep Papers
Sleep-time Compute: Beyond Inference Scaling at Test-time
What if your LLM could think ahead—preparing answers before questions are even asked? In this week's paper read, we dive into a groundbreaking new paper from researchers at Letta, introducing sleep-time compute: a novel technique that lets models do their heavy lifting offline, well before the user query arrives. By predicting likely questions and precomputing key reasoning steps, sleep-time compute dramatically reduces test-time latency and cost—without sacrificing performance. ​We explore n...
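A minimal sketch of the offline-then-online pattern might look like the following; the llm and embed callables and the nearest-neighbor matching step are placeholders, not the authors' system.
```python
import numpy as np

class SleepTimeCache:
    def __init__(self, llm, embed):
        self.llm, self.embed = llm, embed      # placeholder callables
        self.entries = []                      # (question_embedding, precomputed_reasoning)

    def sleep(self, context: str, n_questions: int = 20):
        """Offline phase: anticipate likely questions and do the heavy reasoning now."""
        questions = self.llm(f"List {n_questions} likely user questions about:\n{context}")
        for q in questions.splitlines():
            reasoning = self.llm(f"Context:\n{context}\n\nReason step by step about: {q}")
            self.entries.append((self.embed(q), reasoning))

    def answer(self, query: str) -> str:
        """Test time: reuse the closest precomputed reasoning, so the live call is short."""
        sims = [float(np.dot(e, self.embed(query))) for e, _ in self.entries]
        reasoning = self.entries[int(np.argmax(sims))][1]
        return self.llm(f"Using these notes:\n{reasoning}\n\nAnswer concisely: {query}")
```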
5 months ago
30 minutes

Deep Papers
LibreEval: The Largest Open Source Benchmark for RAG Hallucination Detection
For this week's paper read, we actually dive into our own research. We wanted to create a replicable, evolving dataset that can keep pace with model training so that you always know you're testing with data your model has never seen before. We also saw the prohibitively high cost of running LLM evals at scale, and have used our data to fine-tune a series of SLMs that perform just as well as their base LLM counterparts, but at 1/10 the cost. So, over the past few weeks, the Arize team ...
6 months ago
27 minutes

Deep Papers
AI Benchmark Deep Dive: Gemini 2.5 and Humanity's Last Exam
This week we talk about modern AI benchmarks, taking a close look at Google's recent Gemini 2.5 release and its performance on key evaluations, notably Humanity's Last Exam (HLE). In the session we covered Gemini 2.5's architecture, its advancements in reasoning and multimodality, and its impressive context window. We also talked about how benchmarks like HLE and ARC AGI 2 help us understand the current state and future direction of AI. Read it on the blog: https://arize.com/blog/ai-be...
6 months ago
26 minutes

Deep Papers
Model Context Protocol (MCP)
We cover Anthropic’s groundbreaking Model Context Protocol (MCP). Though it was released in November 2024, we've been seeing a lot of hype around it lately, and thought it was well worth digging into. Learn how this open standard is revolutionizing AI by enabling seamless integration between LLMs and external data sources, fundamentally transforming them into capable, context-aware agents. We explore the key benefits of MCP, including enhanced context retention across interactions, impr...
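To make the protocol concrete, here are the two core JSON-RPC messages an MCP client exchanges with a server, built as Python dictionaries. The tools/list and tools/call method names follow the MCP spec as we understand it; the search_docs tool and its arguments are made-up examples.
```python
import json

# JSON-RPC 2.0 request asking an MCP server which tools it exposes.
list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# JSON-RPC 2.0 request invoking one of those tools with structured arguments.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_docs", "arguments": {"query": "latency SLOs"}},
}

# The transport (stdio or HTTP/SSE) just carries these messages back and forth;
# the LLM decides when to issue a tools/call based on the tool list it received.
print(json.dumps(call_tool, indent=2))
```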
6 months ago
15 minutes

Deep Papers
AI Roundup: DeepSeek’s Big Moves, Claude 3.7, and the Latest Breakthroughs
This week, we're mixing things up a little bit. Instead of diving deep into a single research paper, we cover the biggest AI developments from the past few weeks. We break down key announcements, including: DeepSeek’s Big Launch Week: a look at FlashMLA (DeepSeek’s new approach to efficient inference) and DeepEP (their enhanced pretraining method). Claude 3.7 & Claude Code: what’s new with Anthropic’s latest model, and what Claude Code brings to the AI coding assistant space. Stay ahead of ...
7 months ago
30 minutes

Deep Papers
How DeepSeek is Pushing the Boundaries of AI Development
This week, we dive into DeepSeek. SallyAnn DeLucia, Product Manager at Arize, and Nick Luzio, a Solutions Engineer, break down key insights on a model that has been dominating headlines for its significant breakthrough in inference speed over other models. What’s next for AI (and open source)? From training strategies to real-world performance, here’s what you need to know. Read a summary: https://arize.com/blog/how-deepseek-is-pushing-the-boundaries-of-ai-development/ Learn more about AI observa...
7 months ago
29 minutes

Deep Papers
Multiagent Finetuning: A Conversation with Researcher Yilun Du
We talk to Google DeepMind Senior Research Scientist (and incoming Assistant Professor at Harvard), Yilun Du, about his latest paper "Multiagent Finetuning: Self Improvement with Diverse Reasoning Chains." This paper introduces a multiagent finetuning framework that enhances the performance and diversity of language models by employing a society of agents with distinct roles, improving feedback mechanisms and overall output quality. The method enables autonomous self-improvement through itera...
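A rough sketch of the training loop is below, under the assumption that generate and finetune are placeholder helpers; the paper additionally distinguishes generation agents from critic agents, which is omitted here.
```python
# Illustrative loop only; generate() and finetune() are placeholders.
def multiagent_finetune(agents, problems, rounds: int = 3):
    for _ in range(rounds):
        per_agent_data = {i: [] for i in range(len(agents))}
        for problem in problems:
            # Each agent drafts its own reasoning chain, then sees the others'
            # drafts and revises -- the source of diverse training signal.
            drafts = [generate(agent, problem) for agent in agents]
            for i, agent in enumerate(agents):
                revised = generate(agent, problem, context=drafts)
                per_agent_data[i].append((problem, revised))
        # Each agent is finetuned only on its own trajectories, which keeps the
        # population diverse instead of collapsing to a single policy.
        agents = [finetune(agent, per_agent_data[i]) for i, agent in enumerate(agents)]
    return agents
```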
8 months ago
30 minutes

Deep Papers
Training Large Language Models to Reason in Continuous Latent Space
LLMs have typically been restricted to reason in the "language space," where chain-of-thought (CoT) is used to solve complex reasoning problems. But a new paper argues that language space may not always be the best for reasoning. In this paper read, we cover an exciting new technique from a team at Meta called Chain of Continuous Thought—also known as "Coconut." The paper, "Training Large Language Models to Reason in a Continuous Latent Space," explores the potential of allowing LLMs to rea...
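Below is a conceptual sketch of the continuous-thought loop using an off-the-shelf Hugging Face model. The real method trains the model for this regime, so running it zero-shot on gpt2 is purely illustrative.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Question: If Alice has 3 apples and buys 5 more, how many does she have?"
embeds = model.get_input_embeddings()(tok(prompt, return_tensors="pt").input_ids)

# Instead of decoding a token at each step, feed the last hidden state straight
# back in as the next input embedding -- the reasoning stays in latent space.
for _ in range(4):  # four "continuous thoughts"
    out = model(inputs_embeds=embeds, output_hidden_states=True)
    last_hidden = out.hidden_states[-1][:, -1:, :]        # (1, 1, hidden_dim)
    embeds = torch.cat([embeds, last_hidden], dim=1)

# Only now switch back to language space: project the final state to vocabulary logits.
logits = model(inputs_embeds=embeds).logits[:, -1, :]
print("next token:", tok.decode(logits.argmax(-1)))
```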
9 months ago
24 minutes

Deep Papers
LLMs as Judges: A Comprehensive Survey on LLM-Based Evaluation Methods
We discuss a major survey of work and research on LLM-as-Judge from the last few years. "LLMs-as-Judges: A Comprehensive Survey on LLM-based Evaluation Methods" systematically examines the LLMs-as-Judge framework across five dimensions: functionality, methodology, applications, meta-evaluation, and limitations. This survey gives us a bird's-eye view of the framework's advantages and limitations, along with methods for evaluating its effectiveness. Read a breakdown on our blog: https://arize.com/blog/llm-a...
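As a concrete example of the direct-scoring pattern the survey covers, here is a minimal judge in Python. The gpt-4o-mini model name and the 1–5 rubric are illustrative, and production evaluations usually add few-shot examples and position-bias controls.
```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(question: str, answer: str) -> int:
    """Ask a judge model to grade an answer on a 1-5 scale and return the score."""
    rubric = (
        "Score the answer from 1 (unusable) to 5 (fully correct and complete). "
        "Reply with the number only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice of judge model
        messages=[
            {"role": "system", "content": rubric},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    return int(resp.choices[0].message.content.strip())
```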
9 months ago
28 minutes

Deep Papers
Merge, Ensemble, and Cooperate! A Survey on Collaborative LLM Strategies
LLMs have revolutionized natural language processing, showcasing remarkable versatility and capabilities. But individual LLMs often exhibit distinct strengths and weaknesses, influenced by differences in their training corpora. This diversity poses a challenge: how can we maximize the efficiency and utility of LLMs? A new paper, "Merge, Ensemble, and Cooperate: A Survey on Collaborative Strategies in the Era of Large Language Models," highlights collaborative strategies to address this challen...
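For a flavor of the simplest cooperative strategy in the survey's taxonomy, a majority-vote ensemble might look like the sketch below; the ask helper and the preference-ordered models list are assumptions for illustration.
```python
from collections import Counter

def ensemble_answer(ask, models: list[str], question: str) -> str:
    """Query several models and return the answer most of them agree on.
    `ask(model, question)` is a placeholder for whatever client call you use;
    `models` is assumed to be ordered from most to least preferred."""
    votes = [ask(model, question) for model in models]
    winner, count = Counter(votes).most_common(1)[0]
    # Fall back to the first (preferred) model's answer when there is no agreement.
    return winner if count > 1 else votes[0]
```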
10 months ago
28 minutes

Deep Papers
Agent-as-a-Judge: Evaluate Agents with Agents
This week, we break down the “Agent-as-a-Judge” framework—a new agent evaluation paradigm that’s kind of like getting robots to grade each other’s homework. Where typical evaluation methods focus solely on outcomes or demand extensive manual work, this approach uses agent systems to evaluate agent systems, offering intermediate feedback throughout the task-solving process. With the power to unlock scalable self-improvement, Agent-as-a-Judge could redefine how we measure and enhance agent perf...
10 months ago
24 minutes
