The MAD Podcast with Matt Turck
Matt Turck
100 episodes
1 day ago
The MAD Podcast with Matt Turck is a series of conversations with leaders from across the Machine Learning, AI & Data landscape, hosted by Matt Turck, AI & data investor and Partner at FirstMark Capital.
Technology
Are We Misreading the AI Exponential? Julian Schrittwieser on Move 37 & Scaling RL (Anthropic)
The MAD Podcast with Matt Turck
1 hour 9 minutes 56 seconds
2 weeks ago

Are we failing to understand the exponential, again?

My guest is Julian Schrittwieser (top AI researcher at Anthropic; previously at Google DeepMind, where he worked on AlphaGo Zero & MuZero). We unpack his viral post (“Failing to Understand the Exponential, again”) and what it looks like when the length of tasks AI agents can complete doubles every 3–4 months—pointing to AI agents that can work a full day autonomously by 2026 and reach expert-level breadth by 2027. We talk about the original Move 37 moment and whether today’s AI models can spark alien insights in code, math, and science—including Julian’s timeline for when AI could produce Nobel-level breakthroughs.
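
For a rough sense of that doubling math, here is an illustrative back-of-envelope extrapolation (the starting horizon and exact doubling period are assumptions for the sketch, not figures from the episode):

```python
# If the task horizon an agent can handle doubles every ~3.5 months,
# how long until a ~1-hour horizon reaches a full 8-hour workday?
import math

start_hours = 1.0       # assumed current autonomous-task horizon
target_hours = 8.0      # a full working day
doubling_months = 3.5   # midpoint of the 3-4 month doubling claim

months = doubling_months * math.log2(target_hours / start_hours)
print(f"~{months:.1f} months")  # ~10.5 months under these assumptions
```

With these (hypothetical) starting numbers, three doublings get you from one hour to a full day in under a year, which is the intuition behind the 2026 projection.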


We go deep on the recipe of the moment—pre-training + RL—why it took time to combine them, what “RL from scratch” gets right and wrong, and how implicit world models show up in LLM agents. Julian explains the current rewards frontier (human prefs, rubrics, RLVR, process rewards), what we know about compute & scaling for RL, and why most builders should start with tools + prompts before considering RL-as-a-service. We also cover evals & Goodhart’s law (e.g., GDP-Val vs real usage), the latest in mechanistic interpretability (think “Golden Gate Claude”), and how safety & alignment actually surface in Anthropic’s launch process.
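
As a rough mental model of the reward signals mentioned above (a hypothetical sketch with stubbed functions, not Anthropic's actual training code), each one is simply a different way of mapping a model response to a scalar reward:

```python
# Four styles of reward signal used in RL on LLMs, as toy functions.

def human_preference(response: str) -> float:
    """Learned reward model trained on human A/B preference data (stubbed)."""
    return 0.0  # placeholder for a learned model's score

def rubric_score(response: str, rubric: list[str]) -> float:
    """Grade against an explicit rubric: fraction of criteria satisfied."""
    return sum(criterion in response for criterion in rubric) / len(rubric)

def rlvr(response: str, expected: str) -> float:
    """RL with Verifiable Rewards: check against a ground-truth answer."""
    return 1.0 if response.strip() == expected.strip() else 0.0

def process_reward(steps: list[str]) -> float:
    """Score intermediate reasoning steps, not just the final answer (stubbed)."""
    return 0.0  # placeholder for a per-step verifier's score
```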


Finally, we zoom out: what 10× knowledge-work productivity could unlock across medicine, energy, and materials, how jobs adapt (complementarity over 1-for-1 replacement), and why the near term is likely a smooth ramp—fast, but not a discontinuity.


Julian Schrittwieser

Blog - https://www.julian.ac

X/Twitter - https://x.com/mononofu

Viral post - “Failing to Understand the Exponential, again” (9/27/2025)


Anthropic

Website - https://www.anthropic.com

X/Twitter - https://x.com/anthropicai


Matt Turck (Managing Director)

Blog - https://www.mattturck.com

LinkedIn - https://www.linkedin.com/in/turck/

X/Twitter - https://twitter.com/mattturck


FIRSTMARK

Website - https://firstmark.com

X/Twitter - https://twitter.com/FirstMarkCap


(00:00) Cold open — “We’re not seeing any slowdown.”

(00:32) Intro — who Julian is & what we cover

(01:09) The “exponential” from inside frontier labs

(04:46) 2026–2027: agents that work a full day; expert-level breadth

(08:58) Benchmarks vs reality: long-horizon work, GDP-Val, user value

(10:26) Move 37 — what actually happened and why it mattered

(13:55) Novel science: AlphaCode/AlphaTensor → when does AI earn a Nobel?

(16:25) Discontinuity vs smooth progress (and warning signs)

(19:08) Does pre-training + RL get us there? (AGI debates aside)

(20:55) Sutton’s “RL from scratch”? Julian’s take

(23:03) Julian’s path: Google → DeepMind → Anthropic

(26:45) AlphaGo (learn + search) in plain English

(30:16) AlphaGo Zero (no human data)

(31:00) AlphaZero (one algorithm: Go, chess, shogi)

(31:46) MuZero (planning with a learned world model)

(33:23) Lessons for today’s agents: search + learning at scale

(34:57) Do LLMs already have implicit world models?

(39:02) Why RL on LLMs took time (stability, feedback loops)

(41:43) Compute & scaling for RL — what we see so far

(42:35) Rewards frontier: human prefs, rubrics, RLVR, process rewards

(44:36) RL training data & the “flywheel” (and why quality matters)

(48:02) RL & Agents 101 — why RL unlocks robustness

(50:51) Should builders use RL-as-a-service? Or just tools + prompts?

(52:18) What’s missing for dependable agents (capability vs engineering)

(53:51) Evals & Goodhart — internal vs external benchmarks

(57:35) Mechanistic interpretability & “Golden Gate Claude”

(1:00:03) Safety & alignment at Anthropic — how it shows up in practice

(1:03:48) Jobs: human–AI complementarity (comparative advantage)

(1:06:33) Inequality, policy, and the case for 10× productivity → abundance

(1:09:24) Closing thoughts
