YAAP (Yet Another AI Podcast)
AI21
10 episodes
6 days ago
Technology
All content for YAAP (Yet Another AI Podcast) is the property of AI21 and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (10/10)
Scraping Without Getting Sued (Or Falling Asleep)
Everyone (and we do mean EVERYONE) needs data, and the web is the largest database humanity has ever built. But tapping into it at scale requires more than technical skill. If your product touches web data, scraping isn't just a backend task: it carries legal risk and real consequences. In this episode, Yuval sits down with Rony Shalit, Chief Compliance and Ethics Officer at Bright Data, to talk about what can go wrong when you treat data collection as “just an implementation detail”. From lawsuits involving Meta and X to wild edge cases and vendor breakdowns, they dive into what it takes to collect data responsibly and stay out of trouble.
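For the technical half of “responsibly”, the usual baseline is checking robots.txt and rate-limiting requests. A minimal sketch in Python, assuming the `requests` library is installed; the bot name and delay are illustrative, not from the episode:

```python
import time
import urllib.robotparser
from urllib.parse import urlparse

import requests

USER_AGENT = "example-research-bot/0.1"  # hypothetical bot name

def allowed_by_robots(url: str) -> bool:
    """Check the site's robots.txt before fetching anything."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(USER_AGENT, url)

def polite_fetch(url: str, delay_seconds: float = 2.0) -> str | None:
    """Fetch a page only if robots.txt allows it, then back off."""
    if not allowed_by_robots(url):
        return None  # respect the publisher's wishes
    response = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
    time.sleep(delay_seconds)  # rate-limit so we don't hammer the server
    response.raise_for_status()
    return response.text
```

None of this settles the legal questions the episode covers, but it is the floor any compliant collector starts from.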
6 days ago
48 minutes

The Judge Model Diaries: Judging the Judges
Your LLM gave a great answer. But who decides what “great” means? In this episode, Yuval talks with Noam Gat about judge language models: reward models, critic models, and how LLMs can be trained to rate, rank, and critique each other. They dive into the difference between scoring and feedback, how to use judge models during inference, and why most evaluation benchmarks don’t tell the full story. Turns out, getting a good answer is easy. Knowing it’s good? That’s the hard part.
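The scoring-versus-feedback split is easy to see in code: a judge can return a bare number, a critique, or both. A minimal LLM-as-a-judge sketch, assuming an OpenAI-compatible client; the model name and rubric are illustrative, not from the episode:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_PROMPT = """You are a strict judge. Rate the answer to the question
on a 1-5 scale for factual accuracy, then explain your score in one sentence.
Reply exactly as:
SCORE: <n>
REASON: <text>"""

def judge(question: str, answer: str) -> tuple[int, str]:
    """Ask a judge model for a numeric score plus a short critique."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative judge model, swap in your own
        messages=[
            {"role": "system", "content": JUDGE_PROMPT},
            {"role": "user", "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    text = response.choices[0].message.content
    score_line, reason_line = text.splitlines()[:2]
    # The int is the "scoring" half; the sentence is the "feedback" half.
    return int(score_line.split(":")[1]), reason_line.split(":", 1)[1].strip()
```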
2 months ago
30 minutes

RLVR Lets Models Fail Their Way to the Top
Think you know fine-tuning? If your answer is RLHF, you don’t. In this episode, Itay, who leads the Alignment group at AI21, gives a no-fluff crash course on RLVR (Reinforcement Learning with Verifiable Rewards), the method powering today’s smartest coding and reasoning models. He explains why RLVR beats RLHF at its own game, how “hard to solve, easy to verify” tasks unlock exploration without chaos, and the emergent behaviors you only get when models are allowed to screw up. If you want to actually understand RLVR (and use it), start here.
Key topics:
1. How RLVR outsmarts RLHF in real-world training
2. The “verified rewards” trick that kills reward hacking (see the sketch after this list)
3. Emergent skills you don’t get with hand-holding: self-verification, backtracking, multi-path reasoning
4. Why coding models took a giant leap forward
5. Practical steps to train (and actually benefit from) RLVR models
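A verifiable reward is just a ground-truth check the model can’t argue with. A toy sketch for a code-generation task, graded by actually running tests; the setup is illustrative, not AI21’s training code:

```python
import subprocess
import sys
import tempfile
import textwrap

def verifiable_reward(generated_code: str, test_code: str) -> float:
    """Binary reward: 1.0 if the generated code passes the tests, else 0.0.

    The check is a real test run, not a learned preference model,
    so there is nothing for the policy to reward-hack.
    """
    program = generated_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=10)
        passed = result.returncode == 0
    except subprocess.TimeoutExpired:
        passed = False  # runaway code earns nothing
    return 1.0 if passed else 0.0

# Task: "write add(a, b)". Verifier: plain asserts.
tests = textwrap.dedent("""
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
""")
print(verifiable_reward("def add(a, b):\n    return a + b", tests))  # 1.0
```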
2 months ago
49 minutes

RAG Is Not Solved – Your Evaluation Just Sucks
Your RAG pipeline is passing benchmarks but failing reality. In this episode, Yuval sits down with Niv from AI21 to expose why most RAG evaluation is fundamentally flawed. From overhyped retrieval scores to chunking strategies that collapse under real-world complexity, they break down why your system isn’t as good as you think, and how structured RAG solves problems that traditional pipelines simply can't. Bonus: what do Seinfeld trivia, World Cup stats, and your enterprise SharePoint have in common? (Hint: your RAG pipeline chokes on all of them.)
Key Topics:
1. Why most RAG benchmarks reward the wrong thing and hide real failures (a quick illustration follows this list)
2. The chunking trap: how bad segmentation sabotages good retrieval
3. When LLMs ace the answer but your pipeline still fails
4. Structured RAG: a pipeline that solves RAG problems over aggregative data (such as financial reports)
5. Evaluation tips, tricks, and traps for AI builders
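One way a benchmark hides real failures: scoring only the final answer lets the LLM paper over a broken retriever with parametric memory. A minimal sketch of measuring retrieval directly; the chunk IDs are made up:

```python
def retrieval_recall_at_k(retrieved_ids: list[str],
                          gold_ids: set[str], k: int = 5) -> float:
    """Fraction of gold evidence chunks found in the top-k retrieved chunks."""
    hits = sum(1 for cid in retrieved_ids[:k] if cid in gold_ids)
    return hits / max(len(gold_ids), 1)

retrieved = ["chunk_42", "chunk_07", "chunk_99"]   # what the retriever returned
gold = {"chunk_13", "chunk_21"}                    # what it should have returned
print(retrieval_recall_at_k(retrieved, gold))      # 0.0 -- retrieval failed
# ...even though the final answer might still score "correct" on a QA benchmark,
# because the model knew the fact without the documents.
```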
3 months ago
43 minutes

The Call Is Coming From Inside the Agent (And It Has Your Credentials)
You’ve shipped your first agent. It works. It’s useful. It might also be a security liability you don’t even know about. In this episode, Yuval talks to Zenity CTO Michael Bargury about how easy it is to hijack popular agent systems like Copilot and Cursor, what “zero-click” attacks look like in the agent era, and how to monitor, constrain, and secure your AI agent in production. From sneaky prompt injections to memory-based persistence and infected multi-agent workflows, this is the “oh no” moment every builder needs.
Key Topics:
- Why “ignore previous instructions” still works better than it should
- How one agent goes rogue… and infects the others
- Real-world attacks: social media triggers, CRM leaks, and logic bombs
- Observability 101 for AI: logs, reasoning traces, and root cause sanity (a logging sketch follows this list)
- The new rule: build like it will go rogue, because one day it will
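The observability point is the cheapest one to act on: wrap every tool an agent can call so each invocation leaves a structured audit record. A minimal sketch; the `send_email` tool is hypothetical:

```python
import json
import logging
from datetime import datetime, timezone
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.tools")

def audited(tool: Callable[..., Any]) -> Callable[..., Any]:
    """Wrap a tool so every invocation leaves a structured audit trail."""
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        record = {
            "tool": tool.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "at": datetime.now(timezone.utc).isoformat(),
        }
        result = tool(*args, **kwargs)
        record["result_preview"] = repr(result)[:200]  # truncate large outputs
        log.info(json.dumps(record))  # one JSON line per call: root-cause gold
        return result
    return wrapper

@audited
def send_email(to: str, body: str) -> str:  # hypothetical high-risk tool
    return f"queued mail to {to}"

send_email("ops@example.com", "deploy done")
```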
3 months ago
49 minutes

Building Enterprise RAG: Lessons from 2+ Years of Production Deployments
Building production AI systems is hard, especially when you're pioneering entirely new categories. In this episode, Yuval speaks with Guy Becker, Group Product Manager at AI21, to trace the evolution from task-specific models to agent planning and orchestration systems. Guy shares hard-won lessons from building some of the first RAG-as-a-service offerings when there were literally zero handbooks to follow.
Key Topics:
1. Task-specific models vs. general LLMs: why focused, smaller models with pre- and post-processing beat general-purpose LLMs for business use cases
2. Building RAG before it was cool: creating one of the first RAG-as-a-service platforms in early 2023 without any established patterns
3. The one-size-fits-all problem: why chunking strategies, embedding models, and retrieval parameters need customization per use case (a config sketch follows this list)
4. From SaaS to on-prem: scaling deployment models for enterprise customers with sensitive data
5. When RAG breaks down: multi-hop queries, metadata filtering, and why semantic search isn't always enough
6. Multi-agent orchestration: how AI21 Maestro uses automated planning to break complex queries into parallelizable subtasks
7. Production lessons: evaluation strategies, quality guarantees, and building explainable AI systems for enterprise
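The one-size-fits-all problem from point 3 usually surfaces as a per-use-case config object. A minimal sketch; the numbers and model names are illustrative, not values from the episode:

```python
from dataclasses import dataclass

@dataclass
class RetrievalConfig:
    """Per-use-case knobs that a one-size-fits-all pipeline hides."""
    chunk_size: int       # tokens per chunk
    chunk_overlap: int    # tokens shared between neighboring chunks
    embedding_model: str  # which encoder to use
    top_k: int            # chunks handed to the LLM

# Legal contracts: long clauses where context matters -> big chunks, more overlap.
contracts = RetrievalConfig(chunk_size=1024, chunk_overlap=128,
                            embedding_model="text-embedding-3-large", top_k=8)

# Support chat logs: short, self-contained turns -> small chunks, little overlap.
support = RetrievalConfig(chunk_size=256, chunk_overlap=16,
                          embedding_model="text-embedding-3-small", top_k=20)
```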
4 months ago
37 minutes

Trailer
4 months ago

You Can’t Have an Agent Without a Plan: What 90% of ’Agents’ Are Missing
Everyone's talking about AI agents, but most of what we call "agents" are just workflows in disguise. Real autonomous agents require planning, and that changes everything. In this episode, Yuval speaks with Nitzan Cohen, AI21's Algo Tech Lead, about why the popular ReAct framework isn't enough and how planning architecture unlocks true agent capabilities.
Key Topics:
1. The difference between workflows/chains and real autonomous agents
2. Why ReAct agents fail at complex tasks, parallel execution, and user transparency
3. Free-text vs. code-based planning approaches and their trade-offs
4. How planning enables multi-agent systems and model delegation
5. Training planners with reinforcement learning and replanning mechanisms
6. Evaluation challenges: the GAIA benchmark, AgentBench, and building custom datasets
7. Practical advice: when to upgrade from ReAct and which frameworks to use
From competitive analysis that runs in parallel to breaking down complex coding tasks, discover how planning transforms AI agents from simple tool-calling loops into sophisticated problem-solving systems (a toy planner sketch follows).
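The core advantage over a ReAct loop is that a plan is an explicit task graph, so independent steps can run concurrently. A toy sketch using asyncio; the subtasks are hypothetical, not AI21's actual planner:

```python
import asyncio

# A "plan" as an explicit structure: steps plus their dependencies.
# A ReAct loop never materializes this graph, so it can't parallelize.
PLAN = {
    "fetch_competitor_a": [],        # no dependencies
    "fetch_competitor_b": [],        # independent -> runs in parallel with a
    "compare": ["fetch_competitor_a", "fetch_competitor_b"],
}

async def run_step(name: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a tool call or sub-agent
    return f"result of {name}"

async def execute(plan: dict[str, list[str]]) -> dict[str, str]:
    """Run each step as soon as all of its dependencies are done."""
    results: dict[str, str] = {}
    pending = dict(plan)
    while pending:
        ready = [s for s, deps in pending.items()
                 if all(d in results for d in deps)]
        outs = await asyncio.gather(*(run_step(s) for s in ready))
        results.update(zip(ready, outs))
        for s in ready:
            del pending[s]
    return results

print(asyncio.run(execute(PLAN)))
```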
4 months ago
33 minutes

The Hard Truths About AI Agents: Why Benchmarks Lie and Frameworks Fail
Building AI agents that actually work is harder than the hype suggests, and most people are doing it wrong. In this special "YAAP: Unplugged" episode (a live panel from the AI Tinkerers meetup at the Hugging Face offices in Paris), Yuval sits down with Aymeric Roucher (Project Lead for Agents at Hugging Face) and Niv Granot (Algorithms Group Lead at AI21 Labs) for an unfiltered discussion about the uncomfortable realities of agent development.
Key Topics:
1. Why current benchmarks are broken: from MMLU's limitations to RAG leaderboards that don't reflect real-world performance
2. The tool-use illusion: why 95% accuracy on tool-calling benchmarks doesn't mean your agent can actually plan
3. LLM-as-a-judge problems: how evaluation bottlenecks are capping progress compared to verifiable domains like coding
4. Framework: friend or foe? When to ditch LangChain and LlamaIndex, and why minimal implementations often work better
5. The real agent stack: MCP, sandbox environments, and the four essential components you actually need
6. Beyond the hype cycle: from embeddings that can't distinguish positive from negative numbers to what comes after agents (try the quick test below)
From FIFA World Cup benchmarks that expose retrieval failures to the circular dependency problem with LLM judges, this conversation cuts through the marketing noise to reveal what it really takes to build agents that solve real problems, not just impressive demos.
Warning: contains unpopular opinions about popular frameworks and uncomfortable truths about the current state of AI agent development.
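The embeddings claim in point 6 takes about ten lines to check yourself. A minimal sketch assuming the sentence-transformers library is installed; the model is a common small default, not one named on the panel:

```python
from sentence_transformers import SentenceTransformer, util

# all-MiniLM-L6-v2 is a popular small encoder; swap in any model you use.
model = SentenceTransformer("all-MiniLM-L6-v2")

pairs = [("Revenue grew by 5%", "Revenue grew by -5%"),
         ("The balance is $100", "The balance is -$100")]

for a, b in pairs:
    emb = model.encode([a, b])
    sim = util.cos_sim(emb[0], emb[1]).item()
    # A very high similarity means the encoder barely registers the sign
    # flip -- exactly the failure mode the panel is pointing at.
    print(f"{a!r} vs {b!r}: cosine similarity = {sim:.3f}")
```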
4 months ago
39 minutes

Tool Calling 2.0: How MCP Is Standardizing AI Connections
MCP (Model Context Protocol) is changing how developers connect AI applications to external tools, but what exactly is it, and why should you care? In this episode, Yuval speaks with Etan Grundstein, Technical Product Manager (and formerly Director of Engineering) at AI21, to break down the protocol that’s standardizing AI integrations, moving beyond basic weather APIs and calculators to real-world productivity workflows.
Key Topics:
1) What MCP actually is and how it differs from traditional tool calling (a minimal server sketch follows this list)
2) Real-world examples: connecting AI to Jira, Notion, Git, and even Blender
3) The evolution from local MCP servers to cloud integrations
4) Authentication challenges and how they’re being addressed
5) Why developers are building MCP servers to build other MCP servers
6) Looking ahead: agent-to-agent protocols and what comes next
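To make the contrast with traditional tool calling concrete, here is a minimal MCP server sketch using the official Python SDK's FastMCP helper (assuming `pip install mcp`); the ticket-lookup tool is made up:

```python
from mcp.server.fastmcp import FastMCP

# Name the server; an MCP client (e.g. a desktop AI app) discovers its
# tools over the protocol instead of needing bespoke tool-calling glue.
mcp = FastMCP("ticket-lookup")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of a (fake) support ticket."""
    fake_db = {"T-1": "open", "T-2": "closed"}
    return fake_db.get(ticket_id, "unknown")

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default
```

The point of the protocol is that this one server works unchanged with any MCP-capable client, which is what "standardizing AI connections" means in practice.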
5 months ago
29 minutes
