muckrAIkers
Jacob Haimes and Igor Krawczuk
18 episodes
3 weeks ago
Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, providing some much-needed contextualization, constructive critique, and even a smidge of occasional good-natured teasing, trying to find the meaning under all of this muck.
Technology, Science, Mathematics
Understanding AI World Models w/ Chris Canal
muckrAIkers
3 hours 19 minutes
9 months ago

Chris Canal, co-founder of EquiStamp, joins muckrAIkers as our first-ever podcast guest! In this ~3.5 hour interview, we discuss intelligence vs. competencies, the importance of test-time compute, moving goalposts, the orthogonality thesis, and much more.

A seasoned software developer, Chris founded EquiStamp in late 2023 to improve our current understanding of model failure modes and capabilities. Now a key contractor for METR, EquiStamp evaluates the next generation of LLMs from frontier model developers like OpenAI and Anthropic.

EquiStamp is hiring! If you're a software developer interested in a fully remote opportunity with flexible working hours, join the EquiStamp Discord server and message Chris directly; oh, and let him know muckrAIkers sent you!


  • (00:00) - Recording date
  • (00:05) - Intro
  • (00:29) - Hot off the press
  • (02:17) - Introducing Chris Canal
  • (19:12) - World/risk models
  • (35:21) - Competencies + decision making power
  • (42:09) - Breaking models down
  • (01:05:06) - Timelines, test time compute
  • (01:19:17) - Moving goalposts
  • (01:26:34) - Risk management pre-AGI
  • (01:46:32) - Happy endings
  • (01:55:50) - Causal chains
  • (02:04:49) - Appetite for democracy
  • (02:20:06) - Tech-frame based fallacies
  • (02:39:56) - Bringing back real capitalism
  • (02:45:23) - Orthogonality Thesis
  • (03:04:31) - Why we do this
  • (03:15:36) - EquiStamp!


Links

  • EquiStamp
  • Chris's Twitter
  • METR Paper - RE-Bench: Evaluating frontier AI R&D capabilities of language model agents against human experts
  • All Trades article - Learning from History: Preventing AGI Existential Risks through Policy by Chris Canal
  • Better Systems article - The Omega Protocol: Another Manhattan Project

Superintelligence & Commentary

  • Wikipedia article - Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
  • Reflective Altruism article - Against the singularity hypothesis (Part 5: Bostrom on the singularity)
  • Into AI Safety Interview - Scaling Democracy w/ Dr. Igor Krawczuk

Referenced Sources

  • Book - Man-made Catastrophes and Risk Information Concealment: Case Studies of Major Disasters and Human Fallibility
  • Artificial Intelligence Paper - Reward is Enough
  • Wikipedia article - Capital and Ideology by Thomas Piketty
  • Wikipedia article - Pantheon

LeCun on AGI

  • "Won't Happen" - Time article - Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk
  • "But if it does, it'll be my research agenda latent state models, which I happen to research" - Meta Platforms Blogpost - I-JEPA: The first AI model based on Yann LeCun’s vision for more human-like AI

Other Sources

  • Stanford CS Senior Project - Timing Attacks on Prompt Caching in Language Model APIs
  • TechCrunch article - AI researcher François Chollet founds a new AI lab focused on AGI
  • White House Fact Sheet - Ensuring U.S. Security and Economic Strength in the Age of Artificial Intelligence
  • New York Post article - Bay Area lawyer drops Meta as client over CEO Mark Zuckerberg’s ‘toxic masculinity and Neo-Nazi madness’
  • OpenEdition Academic Review of Thomas Piketty
  • Neural Processing Letters Paper - A Survey of Encoding Techniques for Signal Processing in Spiking Neural Networks
  • BFI Working Paper - Do Financial Concerns Make Workers Less Productive?
  • No Mercy/No Malice article - How to Survive the Next Four Years by Scott Galloway