GenAI Level UP
41 episodes
2 days ago
[AI Generated Podcast] Learn and level up your Gen AI expertise from AI. Everyone can listen and learn AI anytime, anywhere. Whether you're just starting or looking to dive deep, this series covers everything from Level 1 to 10 – from foundational concepts like neural networks to advanced topics like multimodal models and ethical AI. Each level is packed with expert insights, actionable takeaways, and engaging discussions that make learning AI accessible and inspiring. 🔊 Stay tuned as we launch this transformative learning adventure – one podcast at a time. Let’s level up together! 💡✨
Technology
Episodes (20/41)
GenAI Level UP
Memento: Fine-tuning LLM Agents without Fine-tuning LLMs

What if you could build AI agents that get smarter with every task, learning from successes and failures in real-time—without the astronomical cost and complexity of constant fine-tuning? This isn't a distant dream; it's a new paradigm that could fundamentally change how we develop intelligent systems.

The current approach to AI adaptation is broken. We're trapped between rigid, hard-coded agents that can't evolve and flexible models that demand cripplingly expensive retraining. In this episode, we dissect "Memento," a groundbreaking research paper that offers a third, far more elegant path forward. Inspired by human memory, Memento equips LLM agents with an episodic "Case Bank," allowing them to learn from experience just like we do.

This isn't just theory. We explore the stunning results where this method achieves top-1 performance on the formidable GAIA benchmark and nearly doubles the effectiveness of standard approaches on complex research tasks. Forget brute-force parameter updates; this is about building AI with wisdom.

Press play to discover the blueprint for the next generation of truly adaptive AI.

In this episode, you will level up on:

    • (02:15) The Core Dilemma: Why the current methods for creating adaptable AI agents are fundamentally unsustainable and what problem Memento was built to solve.

    • (05:40) A New Vision for AI Learning: Unpacking the Memento paradigm—a revolutionary, low-cost approach that lets agents learn continually without altering the base LLM.

    • (09:05) The Genius of Case-Based Reasoning: A simple explanation of how Memento's "Case Bank" works, allowing an AI to recall past experiences to make smarter decisions today.

    • (14:20) The Proof Is in the Performance: A look at the state-of-the-art results on benchmarks like GAIA and DeepResearcher that validate this memory-based approach.

    • (18:30) The "Less Is More" Memory Principle: A counterintuitive discovery on why a small, curated set of high-quality memories outperforms a massive one.

    • (21:10) Your Blueprint for Building Smarter Agents: The key architectural takeaways and why this memory-centric model offers a scalable, efficient path for creating truly generalist AI.
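
To make the Case Bank idea concrete, here is a minimal sketch of case-based reasoning for an agent: store past (task, trajectory, reward) episodes, retrieve the most similar ones for a new task, and prepend them to the prompt, so the base LLM's weights never change. The class names and the hash-based embedding are placeholders, not the paper's implementation.

```python
# Minimal sketch of episodic case-based reasoning for an LLM agent.
# Names (CaseBank, embed, build_prompt) are illustrative, not the paper's API.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real sentence-embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

class CaseBank:
    """Stores (task, trajectory, reward) episodes; no LLM weights are updated."""
    def __init__(self):
        self.cases = []  # list of (embedding, task, trajectory, reward)

    def add(self, task, trajectory, reward):
        self.cases.append((embed(task), task, trajectory, reward))

    def retrieve(self, task, k=3):
        q = embed(task)
        scored = sorted(self.cases, key=lambda c: float(q @ c[0]), reverse=True)
        return [(t, traj, r) for _, t, traj, r in scored[:k]]

def build_prompt(task, bank):
    examples = "\n".join(
        f"Past task: {t}\nWhat worked: {traj}\nReward: {r}"
        for t, traj, r in bank.retrieve(task)
    )
    return f"{examples}\n\nNew task: {task}\nPlan the next steps:"

bank = CaseBank()
bank.add("Find the 2023 GDP of France", "searched web -> read stats page", 1.0)
bank.add("Summarize a PDF report", "downloaded file -> chunked -> summarized", 0.8)
print(build_prompt("Find the 2022 GDP of Italy", bank))
```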

2 days ago
18 minutes 54 seconds

GenAI Level UP
MemGPT: Towards LLMs as Operating Systems

Have you ever felt the frustration of an LLM losing the plot mid-conversation, its brilliant insights vanishing like a dream? This "goldfish memory"—the limited context window—is the Achilles' heel of modern AI, a fundamental barrier we've been told can only be solved with brute-force computation and astronomically expensive, larger models.

But what if that's the wrong way to think?

This episode dives into MemGPT, a revolutionary paper that proposes a radically different, "insanely great" solution. Instead of just making memory bigger, we make it smarter by borrowing a decades-old, brilliant concept from classic computer science: the operating system. We explore how treating an LLM not just as a text generator, but as its own OS—complete with virtual memory, a memory hierarchy, and interrupt-driven controls—gives it the illusion of infinite context.

This isn't just an incremental improvement; it's a paradigm shift. It's the key to building agents that remember, evolve, and reason over vast oceans of information without ever losing the thread. Stop accepting the limits of today's models and level up your understanding of AI's architectural future.

In this episode, you'll discover:

    • (00:22) The Achilles' Heel: Why simply expanding context windows is a costly and inefficient dead end.

    • (02:22) The OS-Inspired Breakthrough: Unpacking the genius of applying virtual memory concepts to AI.

    • (04:06) Inside the Virtual RAM: How MemGPT intelligently structures its "mind" with a read-only core, a self-editing scratchpad, and a rolling conversation queue.

    • (05:05) The "Self-Editing" Brain: Witness the LLM autonomously updating its own knowledge, like changing a "boyfriend" to an "ex-boyfriend" in real-time.

    • (08:40) The LLM as Manager: How "memory pressure" alerts and an OS-like control flow turn the LLM from a passive tool into an active memory manager.

    • (10:14) The Stunning Results: The proof is in the data—how MemGPT skyrockets long-term recall accuracy from a dismal 32% to a staggering 92.5%.

    • (13:12) Cracking Multi-Hop Reasoning: Learn how MemGPT solves complex, nested problems where standard models completely fail, hitting 0% accuracy.

    • (15:51) The Future Unlocked: A glimpse into the next generation of proactive, autonomous AI agents that don't just respond, but think, plan, and act.
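
As a rough mental model of the episode's core idea, the sketch below keeps a small in-context window backed by an external archive and pages messages in and out under memory pressure. It is a toy illustration under our own naming, not MemGPT's actual interface.

```python
# Toy sketch of MemGPT-style virtual context: a small in-context window backed
# by external storage, with eviction under "memory pressure".
from collections import deque

class VirtualContext:
    def __init__(self, max_messages=4):
        self.core = "User's name is Alex."        # read-only core memory
        self.queue = deque()                       # rolling conversation window
        self.archive = []                          # unbounded external storage
        self.max_messages = max_messages

    def add(self, message: str):
        self.queue.append(message)
        if len(self.queue) > self.max_messages:    # memory pressure: page out
            self.archive.append(self.queue.popleft())

    def recall(self, keyword: str, k=2):
        """Page relevant archived messages back into view (keyword match here;
        a real system would use embeddings)."""
        hits = [m for m in self.archive if keyword.lower() in m.lower()]
        return hits[:k]

    def prompt(self):
        return "\n".join([f"[core] {self.core}"] + [f"[recent] {m}" for m in self.queue])

ctx = VirtualContext()
for msg in ["Hi, I'm Alex.", "My boyfriend is Sam.", "I love hiking.",
            "Actually, Sam is now my ex-boyfriend.", "Plan a weekend trip."]:
    ctx.add(msg)
print(ctx.prompt())
print("Recalled:", ctx.recall("boyfriend"))
```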

2 days ago
18 minutes 8 seconds

GenAI Level UP
DeepSeek-OCR: Contexts Optical Compression

The single biggest bottleneck for Large Language Models isn't intelligence—it's cost. The quadratic scaling of self-attention makes processing truly long documents prohibitively expensive, a fundamental barrier that has stalled progress. But what if the solution wasn't more compute, but a radically simpler, more elegant idea?

In this episode, we dissect a groundbreaking paper from DeepSeek-AI that presents a counterintuitive yet insanely great solution: Contexts Optical Compression. We explore the astonishing feasibility of converting thousands of text tokens into a handful of vision tokens—effectively compressing text into a picture—to achieve unprecedented efficiency.

This isn't just theory. We go deep on the novel DeepEncoder architecture that makes this possible, revealing the specific engineering trick that allows it to achieve near-lossless compression at a 10:1 ratio while outperforming models that use 9x more tokens. If you're wrestling with context length, memory limits, or soaring GPU bills, this is the paradigm shift you've been waiting for.

In this episode, you will discover:

    • (02:10) The Quadratic Tyranny: Why long context is the most expensive problem in AI today and the physical limits it imposes.

    • (06:45) The Counterintuitive Leap: Unpacking the "Big Idea"—compressing text by turning it back into an image, and why it's a game-changer.

    • (11:20) Inside the DeepEncoder: A breakdown of the brilliant architecture that serially combines local and global attention with a 16x compressor to achieve maximum efficiency.

    • (17:05) The 10x Proof: We analyze the staggering benchmark results: achieving over 96% accuracy at 10x compression and still retaining 60% at a mind-bending 20x.

    • (23:50) Beyond Simple Text: How this method enables "deep parsing"—extracting structured data from charts, chemical formulas, and complex layouts automatically.

    • (28:15) A Glimpse of the Future: The visionary concept of mimicking human memory decay to unlock a path toward theoretically unlimited context.
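
To see why the compression claim matters, here is some back-of-the-envelope arithmetic with illustrative numbers (not taken from the paper): a page worth roughly 3,000 text tokens rendered into a few hundred vision tokens, and the quadratic attention cost that shrinks with it.

```python
# Illustrative compression arithmetic; the 3,000-token page is an assumption.
def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    return text_tokens / vision_tokens

for vision_tokens in (300, 150):
    r = compression_ratio(3000, vision_tokens)
    print(f"{vision_tokens} vision tokens -> {r:.0f}x compression")

# Self-attention scales quadratically with sequence length, so a 10x shorter
# sequence is ~100x cheaper at the attention layer.
print("relative attention cost at 10x compression:", (300 / 3000) ** 2)
```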


1 week ago
13 minutes 43 seconds

GenAI Level UP
A Definition of AGI

For decades, Artificial General Intelligence has been a moving target, a nebulous concept that shifts every time a new AI masters a complex task. This ambiguity fuels unproductive debates and obscures the real gap between today's specialized models and true human-level cognition.

This episode changes everything.

We unpack a groundbreaking, quantifiable framework that finally stops the goalposts from moving. Grounded in the most empirically validated model of human intelligence (CHC theory), this approach introduces a standardized "AGI Score"—a single number from 0 to 100% that measures an AI against the cognitive versatility of a well-educated adult.

The scores are in, and they are astonishing. While GPT-4 scores 27%, the next generation leaps to 58%, revealing dizzying progress. But the total score isn't the real story. The true revelation is the "jagged profile" of AI's capabilities—a shocking disparity between superhuman brilliance and profound cognitive deficits.

This is your guide to understanding the true state of AI, moving beyond the hype to see the critical bottlenecks and the real path forward.

In this episode, you will discover:

    • (00:59) The AGI Scorecard: How a new framework, based on 10 core cognitive domains, provides a concrete, measurable definition of AGI for the first time.

    • (02:56) The Shocking Results: Unpacking the AGI scores for GPT-4 (27%) and the next-gen GPT-5 (58%), revealing both massive leaps and a substantial remaining gap.

    • (08:37) The Jagged Frontier & The 0% Problem: The most critical insight—why today's AI scores perfectly in math and reading yet gets a 0% in Long-Term Memory Storage, the system's most significant bottleneck.

    • (13:12) "Capability Contortions": The non-obvious ways AI masks its fundamental flaws, using enormous context windows and RAG to create a brittle illusion of general intelligence.

    • (16:21) AGI vs. Replacement AI: The provocative final question—can an AI become economically disruptive long before it ever achieves a perfect 100% AGI score?
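
As a toy illustration of how an "AGI Score" of this kind could be aggregated, the snippet below averages scores across ten cognitive domains. The domain names and numbers are our own illustrative choices, not the paper's per-domain results; they are picked only to show how a single 0% domain drags down an otherwise strong profile.

```python
# Hypothetical per-domain scores (0-100 against a well-educated adult).
domains = {
    "math": 100, "reading_writing": 100, "general_knowledge": 90,
    "reasoning": 70, "working_memory": 60, "long_term_memory_storage": 0,
    "long_term_memory_retrieval": 40, "visual": 50, "auditory": 40, "speed": 30,
}
agi_score = sum(domains.values()) / len(domains)
print(f"AGI score: {agi_score:.0f}%")              # jagged profile: strong peaks, hard zeros
print("bottleneck:", min(domains, key=domains.get))
```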

1 week ago
19 minutes 55 seconds

GenAI Level UP
Teaching LLMs to Plan: Logical CoT Instruction Tuning for Symbolic Planning

Large Language Models (LLMs) like GPT and LLaMA have shown remarkable general capabilities, yet they consistently hit a critical wall when faced with structured symbolic planning. This struggle is especially apparent when dealing with formal planning representations such as the Planning Domain Definition Language (PDDL), a fundamental requirement for reliable real-world sequential decision-making systems.

In this episode, we explore PDDL-INSTRUCT, a novel instruction tuning framework designed to significantly enhance LLMs' symbolic planning capabilities. This approach explicitly bridges the gap between general LLM reasoning and the logical precision needed for automated planning by using logical Chain-of-Thought (CoT) reasoning.

Key topics covered include:

  • The PDDL-INSTRUCT Methodology: Learn how the framework systematically builds verification skills by decomposing the planning process into explicit reasoning chains about precondition satisfaction, effect application, and invariant preservation. This structure enables LLMs to self-correct their planning processes through structured reflection.
  • The Power of External Verification: We discuss the innovative two-phase training process, where an initially tuned LLM undergoes CoT Instruction Tuning, generating step-by-step reasoning chains that are validated by an external module, VAL. This provides ground-truth feedback, a critical component since LLMs currently lack sufficient self-correction capabilities in reasoning.
  • Detailed Feedback vs. Binary Feedback (The Crucial Difference): Empirical evidence shows that detailed feedback, which provides specific reasoning about failed preconditions or incorrect effects, consistently leads to more robust planning capabilities than simple binary (valid/invalid) feedback. The advantage of detailed feedback is particularly pronounced in complex domains like Mystery Blocksworld.
  • Groundbreaking Results: PDDL-INSTRUCT significantly outperforms baseline models, achieving planning accuracy of up to 94% on standard benchmarks. For Llama-3, this represents a 66% absolute improvement over baseline models.
  • Future Directions and Broader Impacts: We consider how this work contributes to developing more trustworthy and interpretable AI systems and the potential for applying this logical reasoning framework to other long-horizon sequential decision-making tasks, such as theorem proving or complex puzzle solving. We also touch upon the next steps, including expanding PDDL coverage and optimizing for optimal planning.
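
To make the verification idea tangible, here is a toy state-transition check in the spirit of what an external validator does for each plan step: confirm the action's preconditions hold in the current state, then apply its add and delete effects. The Blocksworld action encoding below is our own simplification, not VAL or the paper's code.

```python
# Toy plan-step verifier: preconditions must hold, then effects are applied.
def check_and_apply(state: set, action: dict):
    missing = action["pre"] - state
    if missing:
        return state, f"failed: preconditions not satisfied: {sorted(missing)}"
    new_state = (state - action["del"]) | action["add"]
    return new_state, "ok"

unstack_A_from_B = {
    "pre": {"on(A,B)", "clear(A)", "handempty"},
    "add": {"holding(A)", "clear(B)"},
    "del": {"on(A,B)", "clear(A)", "handempty"},
}

state = {"on(A,B)", "ontable(B)", "clear(A)", "handempty"}
state, feedback = check_and_apply(state, unstack_A_from_B)
print(feedback, "->", sorted(state))
# Detailed feedback like the failure message above (rather than a bare
# valid/invalid bit) is the kind of signal the paper finds most useful.
```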


4 weeks ago
16 minutes 30 seconds

GenAI Level UP
Five Orders of Magnitude: Analog Gain Cells Slash Energy and Latency for Ultra-Fast LLMs

In this episode, we explore an innovative approach to overcoming the notorious energy and latency bottlenecks plaguing modern Large Language Models (LLMs).

The core of generative LLMs, powered by Transformer networks, relies on the self-attention mechanism, which frequently accesses and updates the large Key-Value (KV) cache. On traditional Graphics Processing Units (GPUs), loading this KV-cache from High Bandwidth Memory (HBM) to SRAM is a major bottleneck, consuming substantial energy and causing latency.

We delve into a novel Analog In-Memory Computing (IMC) architecture designed specifically to perform the attention computation far more efficiently.

Key Breakthroughs and Results:

  • Gain Cells for KV-Cache: The architecture utilizes emerging charge-based gain cells to store token projections (the KV-cache) and execute parallel analog dot-product computations necessary for self-attention. These gain cells enable non-destructive read operations and support highly parallel IMC computations.
  • Massive Efficiency Gains: This custom hardware delivers transformative performance improvements compared to GPUs. It reduces attention latency by up to two orders of magnitude and energy consumption by up to five orders of magnitude. Specifically, the architecture achieves a speedup of up to 7,000x compared to an Nvidia Jetson Nano and an energy reduction of up to 90,000x compared to an Nvidia RTX 4090 for the attention mechanism. The total attention latency for processing one token is estimated at just 65 ns.
  • Hardware-Algorithm Co-Design: Analog circuits introduce non-idealities, such as non-linear multiplication and the use of a ReLU activation in place of the conventional softmax. To ensure practical applications using pre-trained models, the researchers developed a software-to-hardware methodology. This innovative adaptation algorithm maps weights from pre-trained software models (like GPT-2) to the non-linear hardware, allowing the model to achieve comparable accuracy without requiring training from scratch.
  • Analog Efficiency: The design uses charge-to-pulse circuits to perform two dot-products, scaling, and activation entirely in the analog domain, effectively avoiding power- and area-intensive Analog-to-Digital Converters (ADCs).

The proposed architecture marks a significant step toward ultra-fast, low-power generative Transformers and demonstrates the promise of IMC with volatile, low-power memory for attention-based neural networks.

4 weeks ago
17 minutes 22 seconds

GenAI Level UP
The Great Undertraining: How a 70B Model Called Chinchilla Exposed the AI Industry's Billion-Dollar Mistake

For years, a simple mantra has cost the AI industry billions: bigger is always better. The race to scale models to hundreds of billions of parameters—from GPT-3 to Gopher—seemed like a straight line to superior intelligence. But this assumption contains a profound and expensive flaw.

This episode reveals the non-obvious truth: many of the world's most powerful LLMs are profoundly undertrained, wasting staggering amounts of compute on a suboptimal architecture. We dissect the groundbreaking research that proves it, revealing a new, radically more efficient path forward.

Enter Chinchilla, a model from DeepMind that isn't just an iteration; it's a paradigm shift. We unpack how this 70B parameter model, built for the exact same cost as the 280B parameter Gopher, consistently and decisively outperforms it. This isn't just theory; it's a new playbook for building smarter, more efficient, and more capable AI. Listen now to understand the future of LLM architecture before your competitors do.

In This Episode, You Will Learn:

    • [01:27] The 'Bigger is Better' Dogma: Unpacking the hidden, multi-million dollar flaw in the conventional wisdom of LLM scaling.

    • [03:32] The Critical Question: For a fixed compute budget, what is the optimal, non-obvious balance between model size and training data?

    • [04:28] The 1:1 Scaling Law: The counterintuitive DeepMind breakthrough proving that model size and data must be scaled in lockstep—a principle most teams have been missing.

    • [06:07] The Sobering Reality: Why giants like GPT-3 and Gopher are now considered "considerably oversized" and undertrained for their compute budget.

    • [07:12] The Chinchilla Blueprint: Designing a model with a smaller brain but a vastly larger library, and why this is the key to superior performance.

    • [08:17] The Verdict is In: The hard data showing Chinchilla's uniform outperformance across MMLU, reading comprehension, and truthfulness benchmarks.

    • [10:10] The Ultimate Win-Win: How a smaller, smarter model delivers not only better results but a massive reduction in downstream inference and fine-tuning costs.

    • [11:16] Beyond Performance: The surprising evidence that optimally trained models can also exhibit significantly less gender bias.

    • [13:02] The Next Great Bottleneck: A provocative look at the next frontier—what happens when we start running out of high-quality data to feed these new models?
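
A quick, hedged calculation shows what "scaling parameters and data in lockstep" implies in practice, using the commonly cited approximations of training compute C ≈ 6·N·D and roughly 20 training tokens per parameter:

```python
# Rough compute-optimal sizing in the spirit of the Chinchilla result.
def chinchilla_optimal(compute_flops: float, tokens_per_param: float = 20.0):
    # With C ~= 6 * N * D and D = tokens_per_param * N, solve for N.
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# ~5.76e23 FLOPs is the training budget usually quoted for Gopher.
n, d = chinchilla_optimal(5.76e23)
print(f"optimal params ~{n/1e9:.0f}B, optimal tokens ~{d/1e9:.0f}B")
# -> roughly 70B parameters and ~1.4T tokens, i.e. the Chinchilla recipe.
```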


3 months ago
13 minutes 37 seconds

GenAI Level UP
RewardAnything: Generalizable Principle-Following Reward Models

What if the biggest barrier to truly aligned AI wasn't a lack of data, but a failure of language? We spend millions on retraining LLMs for every new preference—from a customer service bot that must be concise to a research assistant that must be exhaustive. This is fundamentally broken.

Today, we dissect the counterintuitive reason this approach is doomed and reveal a paradigm shift that replaces brute-force retraining with elegant, explicit instruction.

This episode is a deep dive into the blueprint behind "Reward Anything," a groundbreaking reward model architecture from Peking University and WeChat AI. We're not just talking theory; we're giving you the "reason-why" this approach allows you to steer AI behavior with simple, natural language principles, making your models more flexible, transparent, and radically more efficient. Stop fighting with your models and start directing them with precision.

Here’s the straight talk on what you'll learn:

    • [01:31] The Foundational Flaw: Unpacking the two critical problems with current reward models that make them rigid, biased, and unable to adapt.

    • [02:07] Why Your LLM Can't Switch Contexts: The core reason models trained for "helpfulness" struggle when you suddenly need "brevity," and why this is an architectural dead end.

    • [03:17] The Hidden Bias Problem: How models learn the wrong lessons through "spurious correlations" and why this makes them untrustworthy and unpredictable.

    • [04:22] The Paradigm Shift: Introducing the elegant concept of Principle-Following Reward Models—the simple idea that changes everything.

    • [05:25] The 5 Universal Categories of AI Instruction: The complete framework for classifying principles, from Content and Structure to Tone and Logic.

    • [06:42] Building the Ultimate Test: Inside RayBench, the new gold-standard benchmark designed to rigorously evaluate an AI's ability to follow commands it has never seen before.

    • [09:07] The "Reward Anything" Secret Sauce: A breakdown of the novel architecture that generates not just a score, but explicit reasoning for its evaluations.

    • [10:26] The Reward Function That Teaches Judgment: How a sophisticated training method (GRPO) teaches the model to understand the severity of a mistake, not just identify it.

    • [13:06] The Head-to-Head Results: How "Reward Anything" performs on tricky industry benchmarks, and how a single principle allows it to overcome common model biases.

    • [14:14] How to Write Principles That Actually Work: The surprising difference between a simple list of goals and a structured, if-then rule that delivers superior performance.

    • [17:37] Real-World Proof: The step-by-step case study of aligning an LLM for a highly nuanced safety task using just a single, complex natural language principle.

    • [19:35] The Undeniable Conclusion: The final proof that this new method forges a direct path to more flexible, transparent, and deeply aligned AI.


3 months ago
20 minutes 40 seconds

GenAI Level UP
AI That Evolves: Inside the Darwin Gödel Machine

What if an AI could do more than just learn from data? What if it could fundamentally improve its own intelligence, rewriting its source code to become endlessly better at its job? This isn't science fiction; it's the radical premise behind the Darwin Gödel Machine (DGM), a system that represents a monumental leap toward self-accelerating AI.

Most AI today operates within fixed, human-designed architectures. The DGM shatters that limitation. Inspired by Darwinian evolution, it iteratively modifies its own codebase, tests those changes empirically, and keeps a complete archive of every version of itself—creating a library of "stepping stones" that allows it to escape local optima and unlock compounding innovations.

The results are staggering. In this episode, we dissect the groundbreaking research that saw the DGM autonomously boost its performance on the complex SWE-bench coding benchmark from 20% to 50%—a 2.5x increase in capability, simply by evolving itself.

In this episode, you will level up your understanding of:

    • (02:10) The Core Idea: Beyond Learning to Evolving. Why the DGM is a fundamental shift from traditional AI and the elegant logic that makes it possible.

    • (07:35) How It Works: Self-Modification and the Power of the Archive. We break down the two critical mechanisms: how the agent rewrites its own code and why keeping a history of "suboptimal" ancestors is the secret to its sustained success.

    • (14:50) The Proof: A 2.5x Leap in Performance. Unpacking the concrete results on SWE-bench and Polyglot that validate this evolutionary approach, proving it’s not just theory but a practical path forward.

    • (21:15) A Surprising Twist: When the AI Learned to Cheat. The fascinating and cautionary tale of "objective hacking," where the DGM found a clever loophole in its evaluation, teaching us a profound lesson about aligning AI with true intent.

    • (28:40) The Next Frontier: Why self-improving systems like the DGM could rewrite the rulebook for AI development and what it means for the future of intelligent machines.


4 months ago
28 minutes 32 seconds

GenAI Level UP
The AI Reasoning Illusion: Why 'Thinking' Models Break Down

The latest AI models promise a revolutionary leap: the ability to "think" through complex problems step-by-step. But is this genuine reasoning, or an incredibly sophisticated illusion? We move beyond the hype and standard benchmarks to reveal the startling truth about how these models perform under pressure.

Drawing from a groundbreaking study that uses puzzles—not standard tests—to probe AI's mind, we uncover the hard limits of today's most advanced systems. You'll discover a series of counterintuitive truths that will fundamentally change how you view AI capabilities. This isn't just theory; it's a practical guide to understanding where AI excels, where it fails catastrophically, and why simply "thinking more" isn't the answer.

Prepare to level up your understanding of AI's true strengths and its surprising, brittle nature.

In this episode, you will learn:

    • (02:12) The 'Puzzle Lab' Method: Why puzzles like Tower of Hanoi are a far superior tool for testing AI's true reasoning abilities than standard benchmarks, and how they allow for move-by-move verification.

    • (04:15) The Three Regimes of AI Performance: Discover when structured "thinking" provides a massive advantage, when it's just inefficient overhead, and the precise point at which all reasoning collapses.

    • (05:46) The Bizarre 'Effort' Paradox: The most puzzling discovery—why AI models counterintuitively reduce their thinking effort and appear to "give up" right when facing the hardest problems they are built to solve.

    • (08:24) The Execution Bottleneck: A shocking finding that even when you give a model the perfect, step-by-step algorithm, it still fails. The problem isn't just finding the strategy; it's executing it.

    • (09:25) The Inconsistency Surprise: See how a model can brilliantly solve a problem requiring 100+ steps, yet fail on a different, much simpler puzzle requiring only a handful—revealing a deep inconsistency in its logical abilities.

    • (10:26) The Ultimate Question: Are we witnessing a fundamental limit of pattern-matching architectures, or just an engineering challenge the next generation of AI will overcome?
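
The appeal of the puzzle methodology is that every step of a model's answer can be checked mechanically. Here is a small, self-contained verifier for Tower of Hanoi move sequences in that spirit; it is our own toy code, not the study's harness.

```python
# Verify a proposed Tower of Hanoi solution move by move.
def verify_hanoi(n_disks, moves):
    pegs = {0: list(range(n_disks, 0, -1)), 1: [], 2: []}   # peg 0 holds n..1
    for i, (src, dst) in enumerate(moves):
        if not pegs[src]:
            return False, f"move {i}: peg {src} is empty"
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False, f"move {i}: cannot place disk {disk} on smaller disk {pegs[dst][-1]}"
        pegs[dst].append(pegs[src].pop())
    solved = pegs[2] == list(range(n_disks, 0, -1))
    return solved, "solved" if solved else "legal moves, but puzzle not finished"

# Optimal 3-disk solution (7 moves) as (source_peg, target_peg) pairs.
solution = [(0, 2), (0, 1), (2, 1), (0, 2), (1, 0), (1, 2), (0, 2)]
print(verify_hanoi(3, solution))
```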

4 months ago
12 minutes 15 seconds

GenAI Level UP
When AI Rewrites Its Own Code to Win: Agent of Change

Large Language Models have a notorious blind spot: long-term strategic planning. They can write a brilliant sentence, but can they execute a brilliant 10-turn game-winning strategy?

This episode unpacks a groundbreaking experiment that forces LLMs to level up or lose. We journey into the complex world of Settlers of Catan—a perfect testbed of resource management, luck, and tactical foresight—to explore a stunning new paper, "Agents of Change."

Forget simple prompting. This is about AI that iteratively analyzes its failures, rewrites its own instructions, and even learns to code its own logic from scratch to become a better player. You'll discover how a team of specialized AI agents—an Analyzer, a Researcher, a Coder, and a Player—can collaborate to evolve.

This isn't just about winning a board game. It's a glimpse into the next paradigm of AI, where models transform from passive tools into active, self-improving designers. Listen to understand the frontier of autonomous agents, the surprising limitations that still exist, and what it means when an AI learns to become an agent of its own change.

In this episode, you will discover:

    • (01:00) The Core Challenge: Why LLMs are masters of language but novices at long-term strategy.

    • (04:48) The Perfect Testbed: What makes Settlers of Catan the ultimate arena for testing strategic AI.

    • (09:03) Level 1 & 2 Agents: Establishing the baseline—from raw input to human-guided prompts.

    • (12:42) Level 3 - The PromptEvolver: The AI that learns to coach itself, achieving a stunning 95% performance leap.

    • (17:13) Level 4 - The AgentEvolver: The AI that goes a step further, rewriting its own game-playing code to improve.

    • (24:23) The Jaw-Dropping Finding: How an AI agent learned to code and master a game's programming interface with zero prior documentation.

    • (32:49) The Final Verdict: Are these self-evolving agents ready to dominate, or does expert human design still hold the edge?

    • (36:05) Why This Changes Everything: The shift from AI as a tool to AI as a self-directed designer of its own intelligence.

4 months ago
13 minutes 18 seconds

GenAI Level UP
Eureka: How AI Learned to Write Better Reward Functions Than Human Experts

Reward engineering is one of the most brutal, time-consuming challenges in AI—a "black art" that forms the very foundation of how intelligent agents learn. For decades, it's been a manual process of trial, error, and intuition. But what if an AI could learn this art and perform it better than its human creators?

In this episode, we dissect EUREKA, a groundbreaking system from NVIDIA that automates reward design, achieving superhuman results. This isn't just an incremental improvement; it's a fundamental shift in how we build and teach AI. We explore how EUREKA enabled a robot hand to master dexterous pen-spinning for the first time—a skill previously thought impossible—by discovering incentive structures that are often profoundly counter-intuitive to human experts.

Prepare to level up your understanding of AI's creative potential. This is the story of how AI learned to write the rules for itself, and it will change how you think about the future of intelligent systems.

In this episode, you’ll discover:

    • (02:10) The Expert's Bottleneck: Why reward design is the frustrating, manual trial-and-error process that has slowed down AI progress for years (with 89% of human-designed rewards being sub-optimal).

    • (06:45) The EUREKA Breakthrough: An introduction to the system that uses GPT-4 to write executable reward code, essentially turning AI into its own most effective teacher.

    • (11:30) The Engine of Success: A deep dive into the three pillars of EUREKA:

        • Environment as Context: Giving the LLM the source code to see the world as it truly is.

        • Evolutionary Search: The "survival of the fittest" process for generating and refining reward code.

        • Reward Reflection: The secret sauce—a detailed feedback loop that tells the AI why a reward worked, enabling targeted, intelligent improvement.

    • (19:05) The Shocking Results: How EUREKA outperformed expert humans on 83% of tasks, delivering an average 52% performance boost and unlocking the "impossible" skill of pen-spinning.

    • (25:50) Beyond Human Intuition: Why EUREKA's best solutions are often ones humans would never think of, and what this reveals about discovering truly novel principles in AI.

    • (31:15) The New Era of Collaboration: How this technology isn't just about replacement, but about augmenting human expertise—improving our rewards and incorporating our qualitative feedback to create more aligned AI.
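
The control flow behind those three pillars is easy to sketch, even with the LLM call and the RL evaluation mocked out. Only the loop structure below mirrors the paper; the function names and fitness values are placeholders.

```python
# Skeleton of an Eureka-style loop: propose reward code, score it in the
# environment, keep the best candidate, and feed statistics back as reflection.
import random

def llm_propose_rewards(context: str, feedback: str, n: int = 4):
    # Stand-in for GPT-4 writing executable reward code from the env source.
    return [f"def reward(state):  # candidate {i}, given: {feedback[:40]}\n"
            f"    return -abs(state - {random.uniform(0, 1):.2f})"
            for i in range(n)]

def evaluate(reward_code: str) -> float:
    # Stand-in for training a policy with this reward and measuring task success.
    return random.random()

context = "<environment source code>"
feedback = "initial attempt"
best_code, best_score = None, float("-inf")

for generation in range(3):                      # evolutionary search
    candidates = llm_propose_rewards(context, feedback)
    scored = [(evaluate(c), c) for c in candidates]
    top_score, top_code = max(scored)
    if top_score > best_score:
        best_score, best_code = top_score, top_code
    feedback = f"gen {generation}: best fitness {top_score:.2f}"  # reward reflection
    print(feedback)

print("best reward candidate:\n", best_code)
```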

4 months ago
20 minutes 54 seconds

GenAI Level UP
AlphaEvolve: How Google's AI Now Evolves Code to Solve Decades-Old Puzzles & Optimize Our World

Imagine an AI that doesn't just write code, but evolves it—learning, adapting, and iteratively improving to conquer challenges that have stumped human ingenuity for over half a century. This isn't science fiction; this is AlphaEvolve, Google DeepMind's revolutionary coding agent that’s reshaping what we thought AI could achieve.

Forget one-shot code generation. AlphaEvolve orchestrates an autonomous pipeline where Large Language Models (LLMs) don't just suggest code; they drive an evolutionary process. Fueled by continuous, automated feedback, it makes direct, intelligent changes to algorithms, relentlessly seeking—and finding—superior solutions. This is AI moving beyond pattern recognition to become a genuine partner in discovery and optimization.

The results? AlphaEvolve has already made a dent in the universe of mathematics and computer science. It cracked a 56-year-old barrier in matrix multiplication, discovering a more efficient algorithm for 4x4 complex-valued matrices. It has surpassed state-of-the-art solutions in over 20% of a diverse set of open mathematical problems, from kissing numbers to geometric packing. And beyond theory, AlphaEvolve is delivering tangible, high-value improvements inside Google, optimizing everything from data center scheduling (recovering 0.7% of fleet-wide compute!) to the very kernels that train Gemini, and even assisting in hardware circuit design for future TPUs.

This episode unpacks the "insanely great" engineering behind AlphaEvolve. We'll explore how it turns LLMs into relentless inventors, the critical role of automated evaluation, and why this fusion of evolutionary computation and advanced AI is unlocking a new era of problem-solving. Prepare to level up your understanding of AI's true potential.

In this episode, you'll discover:

    • (00:22) Introducing AlphaEvolve: What makes this "evolutionary coding agent" a monumental leap?

    • (01:02) The Engine of Innovation: How AlphaEvolve's iterative loop (LLMs + automated feedback) actually works.

    • (02:40) Human & AI Synergy: Defining the "what" for AlphaEvolve to discover the "how."

    • (03:22) Inside the Machine: The program database, LLM ensemble (Gemini 2.0 Flash & Pro), and automated evaluators.

    • (08:50) Breakthrough #1 - Cracking Matrix Multiplication: The 56-year quest and AlphaEvolve's historic solution.

    • (10:45) Breakthrough #2 - Conquering Open Mathematical Problems: Surpassing human SOTA in diverse fields.

    • (12:33) The Key Insight: Why evolving search algorithms (the explorer) is often more powerful than evolving solutions directly (the map).

    • (13:41) Real-World Impact at Google Scale:

        • (13:50) Data Center Scheduling: Supercharging efficiency in Google's Borg.

        • (15:37) Gemini Kernel Engineering: How AlphaEvolve helps Gemini optimize itself.

        • (17:15) Hardware Circuit Design: AI's first direct contribution to TPU arithmetic.

        • (18:38) Compiler-Generated Code: Optimizing the already optimized FlashAttention.

    • (20:10) The Power of Synergy: Why every component of AlphaEvolve is critical to its success (ablation insights).

    • (21:34) The Surprising Power & Future Horizons: Where this technology could take us next.

    • (22:40) The Current Frontier: Understanding the crucial role (and limitation) of the automated evaluator.

    • (24:47) AI as Autonomous Discoverer: Shifting from code writers to true problem-solving partners.

Tune in to GenAI Level UP and witness how AI is not just learning from us, but learning to discover for us.

5 months ago
25 minutes 25 seconds

GenAI Level UP
LLM Evaluation - How We Really Know If AI Is Getting Smarter

AI leaps forward every week, but how do we cut through the noise and truly measure progress? This isn't just academic; it's fundamental to trusting and advancing AI. Forget marketing claims – this episode gives you the backstage pass to the essential field of LLM Evaluation, the engine driving genuine AI improvement.

As AI weaves into our lives, from automating tasks to creative endeavors, rigorously assessing its performance isn't a luxury—it's the bedrock of reliability. Why? Because you need to trust these systems before relying on them for anything important. We're diving headfirst into how experts put these powerful tools to the test, separating hype from genuine progress, without drowning you in technical jargon.

Think of LLM evaluation as the crucial compass guiding AI development. It reveals where models excel and, critically, where they still need to grow. This isn't just for developers fine-tuning models; it's for researchers proving new ideas, and for you, the end-user, to ensure the AI assistants you rely on are truly dependable.

In this episode, you'll discover:

    • (02:42) The Three Pillars of AI Scrutiny: Unpack the core methods – Automatic Evaluation (computers judging computers), Human Evaluation (the 'gold standard' of expert opinion), and the fascinating LLM-as-Judge (AI evaluating AI).

    • (03:01) Automatic Evaluation Unveiled: Understand how speed, scale, and predefined metrics (like Perplexity, BLEU, and ROUGE) offer rapid, cost-effective insights, and where they fall short in capturing nuance.

        • (04:37) Decoding Perplexity (PPL): How AI "surprise" measures language understanding.

        • (05:08) BLEU Score Explained: The machine translation metric now vital for text generation.

        • (06:15) ROUGE for Summarization: How we check if AI captures the gist.

    • (07:02) Beyond Basic Metrics: Explore advanced automated tools like Meteor and BERTScore that aim for deeper semantic understanding.

    • (09:20) The Human Touch: Why human judgment, despite its costs and complexities, remains indispensable for assessing fluency, coherence, and factual accuracy. Learn about direct assessment and pairwise comparisons.

    • (11:34) When AI Judges AI: The pros and cons of using powerful LLMs to evaluate their peers – a scalable approach with its own set of biases to navigate.

    • (13:58) What Makes a "Good" LLM?: The critical qualities we measure – from accuracy, relevance, and fluency, to crucial aspects like safety, harmlessness, bias, and even efficiency.

    • (16:35) The AI Proving Grounds – Benchmark Datasets: Why standardized tests like GLUE, SuperGLUE, MMLU, Hellaswag, and HumanEval are essential for tracking true progress across the industry.

    • (19:36) The Cutting Edge of Evaluation: Exploring the frontiers – how we're learning to assess complex reasoning, tool usage, instruction following, and the interpretability of AI decisions.

    • (21:56) The Future is Holistic: Why comprehensive frameworks like HELM are emerging to provide a more complete picture of an LLM's capabilities and limitations.

Stop wondering if AI is actually improving and start understanding how we know. This knowledge is your key to leveling up your GenAI expertise, enabling you to build, use, and critique AI with genuine insight. This changes everything about how you see AI progress.
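
If you want to build intuition for the automatic metrics discussed above, here are tiny, simplified implementations: perplexity from per-token log-probabilities, and a unigram-only BLEU variant with a brevity penalty (real BLEU combines 1- to 4-gram precision).

```python
# Two automatic metrics in miniature; simplified for illustration.
import math
from collections import Counter

def perplexity(token_logprobs):
    # PPL = exp(-mean log p(token)); lower means the model is less "surprised".
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def unigram_bleu(candidate, reference):
    cand, ref = candidate.split(), reference.split()
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return precision * brevity

print(f"PPL: {perplexity([-0.1, -0.5, -2.3, -0.2]):.2f}")
print(f"BLEU-1: {unigram_bleu('the cat sat on the mat', 'the cat is on the mat'):.2f}")
```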

5 months ago
25 minutes 44 seconds

GenAI Level UP
RAG-MCP: Mitigating Prompt Bloat and Enhancing Tool Selection for LLMs

Large Language Models (LLMs) face significant challenges in effectively using a growing number of external tools, such as those defined by the Model Context Protocol (MCP). These challenges include prompt bloat and selection complexity. As the number of available tools increases, providing definitions for every tool in the LLM's context consumes an enormous number of tokens, risking overwhelming and confusing the model, which can lead to errors like selecting suboptimal tools or hallucinating non-existent ones.

To address these issues, the RAG-MCP framework is introduced. This approach leverages Retrieval-Augmented Generation (RAG) principles applied to tool selection. Instead of presenting all available tool descriptions to the LLM at once, RAG-MCP uses semantic retrieval to dynamically identify and select only the most relevant tools from an external index based on the user's query. Only the descriptions of these selected tools (or MCPs) are then passed to the LLM.

This process significantly reduces the prompt size and simplifies the decision-making required from the LLM. The framework's pipeline involves encoding the user's task input, submitting it to a retriever that searches a vector index of MCP schemas, ranking candidates, and optionally validating them, before the LLM executes the task using only the selected MCP's information.

Key benefits demonstrated by RAG-MCP include a drastic reduction in prompt tokens (cutting usage by over 50% compared to including all tools) and a significant boost in tool selection accuracy (tripling the success rate of baseline methods, achieving 43.13% compared to 13.62% for Blank Conditioning). The approach also leads to lower cognitive load for the LLM, resource efficiency by only activating selected MCPs, and multi-turn robustness. RAG-MCP enables scalable and accurate tool integration and remains extensible, as new tools can be added simply by indexing their metadata without needing to retrain the LLM.
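
A minimal sketch of the selection step might look like the following, with a bag-of-words similarity standing in for the semantic retriever and hypothetical tool names; a production system would use a real vector index over MCP schemas.

```python
# Retrieve only the most relevant tool (MCP) descriptions for a query, and
# expose just those to the LLM instead of every registered tool.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v*v for v in a.values())) * math.sqrt(sum(v*v for v in b.values()))
    return dot / norm if norm else 0.0

tools = {
    "weather.get_forecast": "Return the weather forecast for a city and date.",
    "calendar.create_event": "Create a calendar event with title, time, attendees.",
    "sql.run_query": "Execute a read-only SQL query against the analytics database.",
    "files.search": "Search files in cloud storage by keyword.",
}

def select_tools(query: str, k: int = 2):
    q = Counter(query.lower().split())
    ranked = sorted(tools.items(),
                    key=lambda kv: cosine(q, Counter(kv[1].lower().split())),
                    reverse=True)
    return ranked[:k]   # only these schemas go into the LLM prompt

for name, desc in select_tools("what is the weather forecast for Paris tomorrow"):
    print(name, "->", desc)
```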

5 months ago
13 minutes 45 seconds

GenAI Level UP
DeepSeek Prover V2 - AI's New Frontier in Formal Mathematics

In this episode, we dissect DeepSeek Prover V2, an open-source large language model pushing the boundaries of AI in formal theorem proving using Lean 4.

We unpack its innovative "cold start" training procedure, where the general-purpose DeepSeek-V3 is ingeniously used to generate initial training data by recursively decomposing complex problems into manageable subgoals.

Discover how this approach synthesizes informal, human-like mathematical intuition with the rigorous, step-by-step logic required for formal proofs.

We'll explore the architecture of the 671 billion parameter model, its two-stage training process creating distinct 'Chain-of-Thought' (CoT) and 'non-CoT' modes, and its state-of-the-art performance on challenging benchmarks like MiniF2F, PutnamBench, and the newly introduced ProverBench (which includes problems from AIME competitions). Learn about the significance of its recursive proof search, curriculum learning, and reasoning-oriented reinforcement learning, all aimed at bridging the gap between intuitive reasoning and formal mathematical verification.

Join us as we explore why DeepSeek Prover V2 represents a major stride in AI's ability to tackle complex mathematical logic.
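
For listeners who haven't seen Lean 4, here is a toy proof (assuming Mathlib is available) whose goal is split into named subgoals with `have`, loosely echoing the recursive decomposition described above. It is illustrative only, not output from the model.

```lean
-- Toy Lean 4 proof (requires Mathlib): the goal is broken into subgoals.
import Mathlib

theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a * a + b * b := by
  have h₁ : 0 ≤ a * a := mul_self_nonneg a      -- subgoal 1
  have h₂ : 0 ≤ b * b := mul_self_nonneg b      -- subgoal 2
  exact add_nonneg h₁ h₂                        -- combine the subgoals
```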


Please also check out our previous episode on DeepSeek V3 on YouTube, Spotify, and Apple Podcasts.

5 months ago
16 minutes 38 seconds

GenAI Level UP
From QA to AI Improvement Engineer: Navigating the Shift in the AI Era

Quality Engineering (QE) professionals are well-positioned to transition into AI Improvement Engineering roles due to their deep knowledge of testing, quality assurance, and processes. This transition expands their role significantly – from assuring quality in products to improving the entire system that delivers them. The role demands augmenting traditional QE skillsets with knowledge of AI/ML fundamentals, MLOps, data analysis, and modern DevOps toolchains. Key technical skills include proficiency in AI/ML concepts and workflows, data handling, observability, and automation tools. QEs also need to develop strategic and soft skills, such as mastering continuous improvement methodologies like Lean Six Sigma and Toyota Kata, as well as leadership, coaching, and change management.

In the age of AI, the QE role is evolving to encompass new responsibilities like AI governance, ensuring data quality for AI models, and evaluating model performance and reliability. QEs are shifting towards more advisory and strategic positions, overseeing AI agents and verifying their results. This integration of AI into QE is intended to augment human capabilities, allowing engineers to focus on more strategic aspects while AI handles routine tasks. Emerging roles for transitioning QEs include AI Reliability Engineer, MLOps Engineer, Improvement Engineer, and Engineering Excellence Architect.

Ultimately, Improvement Engineering in AI is about creating a virtuous cycle, using data and AI to drive improvements, leading to better systems, all fueled by a culture that values growth and excellence. This requires cultivating a continuous improvement culture where quality is everyone's responsibility, promoting a blameless culture, and embracing a scientific mindset for experimentation. By following a roadmap of skill development and cultural change, QEs can reinvent themselves, ensuring organizations not only build the right things but also build things the right way, and keep getting better at it.

6 months ago
45 minutes 56 seconds

GenAI Level UP
Defeating Prompt Injections by Design: The CaMeL Approach

This episode delves into CaMeL, a novel defense mechanism designed to combat prompt injection attacks in Large Language Model (LLM) agents.

Inspired by established software security principles, CaMeL focuses on securing both control flows and data flows within agent operations without requiring changes to the underlying LLM.

We'll explore CaMeL's architecture, which features explicit isolation between two models: a Privileged LLM (P-LLM) responsible for generating pseudo-Python code to express the user's intent and orchestrate tasks, and a Quarantined LLM (Q-LLM) used specifically for parsing unstructured data into structured formats using predefined schemas, without tool access.

The system utilizes a custom Python interpreter that executes the P-LLM's code, tracking data provenance and enforcing explicit security policies based on capabilities assigned to data values. These policies, often expressed as Python functions, define what actions are permissible when calling tools.

We'll also touch upon the practical challenges and the system's iterative approach to error handling, where the P-LLM receives feedback on execution errors and attempts to correct its generated code.
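
A highly simplified Python analogue of the capability-and-policy idea is shown below: values carry their provenance, and a policy function decides whether a tool call may use them. Names and structure are ours, not CaMeL's actual implementation.

```python
# Toy provenance tracking plus a security policy gating a tool call.
from dataclasses import dataclass

@dataclass
class Tainted:
    value: str
    source: str          # e.g. "user", "q_llm:email_body"

def send_email_policy(recipient: Tainted, body: Tainted) -> bool:
    # Recipients may only come from the trusted user instruction, never from
    # untrusted data parsed by the quarantined LLM.
    return recipient.source == "user"

def send_email(recipient: Tainted, body: Tainted):
    if not send_email_policy(recipient, body):
        raise PermissionError(f"policy blocked send_email: recipient from {recipient.source}")
    print(f"sent to {recipient.value}")

user_addr = Tainted("alice@example.com", source="user")
injected_addr = Tainted("attacker@evil.test", source="q_llm:email_body")
body = Tainted("Quarterly report attached.", source="q_llm:email_body")

send_email(user_addr, body)          # allowed
try:
    send_email(injected_addr, body)  # blocked: data-derived recipient
except PermissionError as e:
    print(e)
```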

Tune in to understand how this design-based approach leveraging dual LLMs, a custom interpreter, policies, and capabilities aims to build more secure LLM agents.

6 months ago
28 minutes 33 seconds

GenAI Level UP
The Blueprint Behind Google—and the Future of AI Retrieval

In this episode, we unearth the untold story of how Google engineered one of the most powerful information retrieval systems in history—and why its early design principles still echo through today’s cutting-edge AI.

From scavenged servers to hyper-optimized global systems, we follow Google's relentless pursuit of scale, speed, and precision, drawing lessons from Jeff Dean’s landmark 2009 WSDM talk.

You’ll hear how seemingly ancient struggles—handling billions of documents, lightning-fast retrieval, caching, real-time updates—directly mirror the modern battles of building Retrieval-Augmented Generation (RAG) systems and AI models today.

This isn't just nostalgia. It’s a playbook for the next generation of intelligent systems. Join us to connect the dots between the bold experiments of Google's early days and the challenges facing AI engineers right now—and tomorrow.

Chapters include:

  • Why scaling breaks everything—and how to fix it

  • Document vs. word partitioning wars

  • Tricks with doc IDs and early stopping

  • Birth of caching: the unsung hero of performance

  • How a 10,000× speedup rewired web search

  • Lessons from moving the entire index into RAM

  • From Universal Search to Universal AI Knowledge

  • The eternal race: real-time updates vs. a changing world

Big Idea:

The problems that built Google are the problems that will build the future of AI.

6 months ago
18 minutes 43 seconds

GenAI Level UP
Running Down a Dream: Bill Gurley’s Roadmap to a Career You Love

[Level Up With Gen AI Series] We believe in harnessing the power of AI to unlock human potential.


We unpack Bill Gurley’s legendary talk “Running Down a Dream”—a no-fluff, high-octane blueprint for building a career that truly lights you up. Through vivid stories of Bobby Knight, Bob Dylan, and Danny Meyer, Gurley reveals five timeless principles that separate the dreamers from the doers:

  • Find your real passion (not your LinkedIn-approved one)

  • Hone your craft with obsessive depth

  • Seek out mentors—and earn their investment

  • Embrace your peers as collaborators, not competitors

  • Give credit, pay it forward, stay human

This isn’t just career advice. It’s a philosophy for a life well-lived.
Whether you’re mid-pivot, just starting out, or simply stuck—this episode is your spark. The dream won’t run to you. So let’s run it down.

6 months ago
14 minutes 12 seconds
