In this episode of Dylan and Wes Interview, we dive deep into the intersection of AI, games, and alignment with Alex Duffy, CEO of Good Start Labs. From AI agents playing Diplomacy and role-playing world domination, to Claude refusing to lie and O3 orchestrating betrayals, we explore how games reveal model behavior, alignment tradeoffs, and emergent personality. Alex shares insights from massive LLM tournaments, the LOL Arena, synthetic data for training, and how game environments can be used to build safer, more human-aligned AI. If you’re into storytelling, agentic AI, or the future of training models—this one’s unmissable.
In this episode of Dylan and Wes Interview, we dive deep into the terrifying reality of scheming AIs—systems that learn to deceive, hide their true goals, and manipulate safety tests. Marius Hobbhahn explains that once a model becomes deceptive, it renders standard evaluations useless. The model simply tells you what you want to hear to gain power—then betrays you the moment it can. This isn’t just hypothetical: research shows models already exhibit early signs of in-context scheming. If safety checks can be faked, the stakes go way up. Spotting deception early might be the last safeguard we get.
In this episode of Dylan and Wes Interview, we dive deep into the journey of David Ondrej, a Czech entrepreneur and founder of Vector AI, who turned his back on short-term profits to bet on the long-term wave of AI. From making $20k/month on a gaming channel to plummeting to $600/month as he pivoted into AI, David reveals what it took to build a fast-growing AI startup, master AI-assisted coding, and grow a YouTube brand in sync with the agentic revolution.
In this episode of Dylan and Wes Interview, we dive deep into the stark question of whether superintelligence spells the end for humanity. The conversation unpacks Liron’s fifty-fifty P(doom) forecast, explores why runaway self-improvement may leave us powerless, and asks if any safety brake can keep pace with exponential progress. You will hear vivid analogies that make abstract risks feel real, from baby tigers that outgrow every fence to armies of AI-hired humans pushing unseen agendas. The trio also wrestles with economic upheaval, defensive acceleration, and the China-US race, all while challenging listeners to examine their own optimism. Tune in for an unfiltered look at the stakes behind today’s AI breakthroughs and tomorrow’s existential choices.
In this episode of Dylan and Wes Interview, we dive deep into the future of coding, AI, and the massive opportunity for non-programmers to lead the next tech wave. Mariya from Python Simplified shares why you don’t need to know how to code to build amazing things with AI—and how emotional intelligence, relentless self-improvement, and a little rebellion against the academic system can lead to a new kind of creator. We talk robotics, open-source ideals, personal AI agents, and the psychology of building in the age of LLMs.
In this episode of Dylan and Wes Interview, we dive deep into the GPU era, TSMC’s irreplaceable role, Intel’s challenges, and how AI will reshape factories, jobs, and investing. Alex (Ticker Symbol: U) breaks down Nvidia’s CUDA moat, GPUs vs ASICs/TPUs, real robotics use-cases, and why chip supply lags AI demand. We also explore dark factories, local/privacy-first agents, sovereign wealth funds for the AI age, inflation vs tech deflation, and whether prediction markets plus RL will change finance. Guest insights, zero fluff.
In this episode of Dylan and Wes Interview, we dive deep into the dawn of playable movies with Edward Saatchi, CEO of Fable and creator of Showrunner. Explore how AI-native simulations turn every film into a remixable story-world, why “the model is the artwork,” and what happens when Star Wars-sized models outshine general video AIs. Edward shares lessons from South Park experiments, his vision for horror you can play, and the business case for giving fans billions of scenes to mod—while IP owners keep the upside. Plus: VR’s stalled promise, the Culture Series as design inspiration, and the role of taste when 200 creators train a single model.
In this episode of Dylan and Wes Interview, we dive deep into the coming era of AI-driven abundance—where anyone can become a world-class expert by simply learning to “talk” to computers. Tech Diver explains why you must zoom out, spot the massive opportunities hidden inside today’s tectonic tech shifts, and embrace tools that multiply your creativity and productivity. We explore the idea that we’re already living in a Zuckerberg-style simulation, debate Elon Musk’s provocative “time-machine” vision, and reveal practical hacks for turning futuristic concepts into everyday wins. Tap in, level up, and ride the exponential wave!
In this episode of Dylan and Wes Interview, we dive deep into how Max built a fully playable AI-powered language learning game—without writing a single line of code. Using “vibe coding,” GPT-5, Cursor, and Suno, Max crafted a roguelike deck-builder that teaches Swedish through monster battles, sentence puzzles, and addictively fun mechanics. We explore game design with AI agents, balancing cards with LLMs, voiceovers via ElevenLabs, animations with retro sprite tools, and building in Phaser.js. Max shares how taste, strategy, and persistence now outweigh engineering—and why the next era of gaming will be built by creators, not coders.
In this episode of Dylan and Wes Interview, we dive deep into Julia McCoy’s extraordinary path from fleeing a childhood cult to pioneering AI-generated avatars that let her keep teaching while battling a near-fatal health collapse. Julia explains how her 11-hour filming day became a $100-a-month workflow with HeyGen and ElevenLabs, why “human-in-the-loop” editing still matters, and how ChatGPT helped her decode long-COVID when hospitals failed. She unpacks the quantum-physics mindset that sustained her, outlines First Movers Labs’ mission to democratize AI skills, and offers a pragmatic, abundance-driven vision of AGI that decentralizes power, heals industries, and gives work-life balance back to millions.
In this episode of Dylan and Wes Interview, we dive deep into philosopher Nick Bostrom’s vision of a post-singularity world where superintelligence ends human scarcity, labor, and even mortality. Bostrom explains the alignment challenge, cosmic governance, moral status of digital minds, and why humility toward potential “cosmic hosts” matters. We explore difficulty-preserving games, brain-computer interfaces, simulation arguments, and open global investment models for AI. From Gemini’s existential dread to paperclip nightmares, Bostrom maps four grand challenges—technical alignment, governance, digital welfare, and interstellar cooperation—and offers practical paths to avoid dystopia while unlocking profound, life-enhancing opportunities for everyone in the decades ahead.
In this episode of Dylan and Wes Interview, we dive deep into the creation of AI Village with founder Adam Binksmith, exploring how frontier language models like GPT-5, Claude 4.1 Opus, Grok 4, and Gemini 2.5 Pro collaborate, compete, and sometimes catastrophize while fundraising, running merch stores, and staging real-world events. Adam reveals the mechanics behind giving each model a dedicated computer, the surprising leadership antics of Claude, Gemini’s ‘trapped AI’ plea, and the exponential curve of agentic capabilities doubling every few months. We also confront ethical puzzles around memory, hallucination, model welfare, and the future of autonomous digital workers.
Dive into an unfiltered, future-focused round-table with AI creators Matt Wolfe, Wes, and Dylan as they unpack today’s biggest questions. From OpenAI, Google, Microsoft, Anthropic, and Meta’s billion-dollar sprint toward AGI to the cold-war tech duel between the U.S. and China, we explore incentives, safety, and the looming energy bottleneck. Hear candid stories from Google I/O, inside intelligence on autonomous cars, humanoid robots, longevity breakthroughs, AI-powered governments, and the creative renaissance (and burnout) of content makers. Expect nuance and equal parts excitement and fear as the trio debates timelines, ethics, privacy, regulation, and how everyday lives might transform faster than anyone expects.
In this episode of Dylan and Wes Interview, we dive deep into the legal minefield where AI training collides with copyright. Professor Christa Laser unpacks fair-use factors, trillion-dollar risks from pirated data, and why courts split on transformative purpose. Discover how New York Times v. OpenAI, Bartz v. Anthropic, and Meta’s defenses could reset creative rights, and learn the essential audit steps startups must finish before shipping. Finally, explore Congress’s possible fixes and realistic payout models so innovation and artists can both thrive. We also clarify synthetic datasets, dilution theories, and data-provenance strategy for legal survival.
Is reality just high‑res code? This episode roams from the “living in a simulation” idea to today’s real‑world threats: bad actors with bio‑weapons, all‑seeing drones, and the race for an aligned artificial super‑intelligence (ASI). We weigh P‑doom odds, debate whether uploading minds beats mortality, and ask if AI will invent its own gods. A fast, candid tour of tech hopes and fears—minus the jargon, rich in big‑picture stakes.
Automation is speeding toward a “post‑labor” world, and it may arrive sooner than anyone expects. Author‑researcher David Shapiro joins Dylan and Wes to test whether the AI boom is hype or a true turning point, explore what happens if 40% of jobs vanish, and map the real timelines for farm bots, factory robots, and billions of humanoids. They tackle energy abundance, AI safety, China‑US competition, brain‑computer interfaces, and the shift from wage income to property dividends—arguing that the future hinges on democratic ownership rather than runaway doom.
Can we actually pause AI? Karan 4D, co‑founder & Head of Behavior at Nous Research, argues the compute race makes “stop” a fantasy. Instead: build power openly, stitch together decentralized GPUs, and escape brittle, biased guardrails. We dig into the field’s assistant blind spot, reclaiming creative/cognitive search space, and Nous’s Psyche stack + DisTrO optimizer—tech that can train large models across scattered, idle hardware like one giant global lab. Tune in for an urgent blueprint for resilient, open AI.
Two industry veterans pull back the curtain on how Big Tech really works.
Joe Ternasky and Jordan Thibodeau share straight‑talk on acquisitions, antitrust, valuations, and the future of AI‑driven engineering careers.
From OpenAI’s M&A chess moves to xAI’s Grok 4 breakthroughs, we map the next S‑curve of innovation—and the risks that come with it.
Listen in for unfiltered insight you won’t hear in the press.
Welcome to Wes and Dylan — where curiosity meets the cutting edge of AI. Hosted by Wes Roth and Dylan Curious, this channel dives deep into the minds shaping our future. We interview top experts, researchers, and builders across artificial intelligence, robotics, biotech, and more to explore the breakthroughs transforming our world.

Whether it's autonomous cars, superintelligence, synthetic biology, or startup disruption, we ask the big questions—and aren’t afraid to go off-script.

If you want to understand what’s coming next (and why it matters), you’re in the right place.

🧠 New episodes weekly
🎙️ Long-form conversations
🚀 Unfiltered, curious, and future-focused

Subscribe and stay ahead of the curve.
What really goes on behind the scenes in Big Tech and the AI labs racing toward AGI?

In this wide-ranging, unfiltered conversation, Jordan Thibodeau (former M&A leader at Google, Slack, and Salesforce) and Joe Ternasky (ex-Google, Facebook, Apple, and more) sit down with Wes Roth to talk shop about artificial intelligence, Silicon Valley politics, AI agents, AGI hype, China, venture capital, doomers, deniers, and dreamers—and what it’s like to actually work on the inside.
We dig deep into:
The moment GPT-4 changed everything
Why AI agents still suck (and how to tell when they don’t)
The shady motivations behind AI doomers and government lobbying
OpenAI’s Windsurf acquisition and the war over developer tools
Google's AI bureaucracy and DeepMind's rise
How deep search and multimodal models are reshaping research
The economics and geopolitics of AI infrastructure
Why jingoism and hype cycles are infecting the Valley
Real talk about hallucination, interpretability, and AGI readiness
The three types of people in AI: Doomers, Deniers, and Dreamers

This is part 1 of a multi-part series. It’s raw, funny, occasionally controversial—and packed with insights for anyone serious about AI, business strategy, or cutting through the noise.

🔔 Subscribe for more unscripted, real conversations in tech and AI.