AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes covering the breakthroughs, new tools, and capabilities that represent real progress, not theoretical threats. Someone needs to talk about what's working instead of what might kill us all. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Short episodes, big developments, zero patience for doom narratives. Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.
Gemini Robotics 1.5: Google DeepMind Just Cracked the Code on Agentic Robots
Look, I know another AI model announcement sounds boring (trust me, I’ve written about 47 of them this month), but Google DeepMind just dropped something that actually made me sit up and pay attention. Their new Gemini Robotics 1.5 isn’t just another incremental upgrade—it’s a completely different approach to making robots that can think, plan, and adapt like actual agents in the real world.
Here’s what’s wild: instead of trying to cram everything into one massive model (which, let’s be honest, has been the industry’s default approach), DeepMind split embodied intelligence into two specialized models. The ERVLA stack pairs Gemini Robotics-ER 1.5 for high-level reasoning with Gemini Robotics 1.5 for low-level motor control. Think of it like giving a robot both a strategic brain and muscle memory that can actually talk to each other.
The “embodied reasoning” model (ER) handles the big-picture stuff: spatial understanding, planning multiple steps ahead, figuring out if a task is actually working or failing, and even tool use. Meanwhile, the vision-language-action model (VLA) manages the precise hand-eye coordination needed to actually manipulate objects. The genius part? Skills can transfer between completely different robot platforms without starting from scratch.
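To make the division of labor concrete, here's a minimal Python sketch of the two-model stack. Every name in it (the classes, the methods, the Observation/Plan types) is a hypothetical illustration of the roles described above, not DeepMind's actual API:

```python
# Hypothetical sketch of the ER + VLA split. None of these names are
# DeepMind's API; they just mirror the two roles described above.
from dataclasses import dataclass, field


@dataclass
class Observation:
    """Stand-in for whatever the robot currently perceives (camera frames, etc.)."""
    description: str


@dataclass
class Plan:
    steps: list[str] = field(default_factory=list)


class EmbodiedReasoner:
    """The ER role: spatial understanding, multi-step planning, progress checks."""

    def plan(self, instruction: str, obs: Observation) -> Plan:
        # A real system would query the reasoning model here; this stub
        # just decomposes the instruction into placeholder steps.
        return Plan(steps=[
            f"survey the scene for: {instruction}",
            f"move objects and fetch tools for: {instruction}",
        ])

    def succeeded(self, step: str, obs: Observation) -> bool:
        # Judge from the latest observation whether the step worked.
        return step in obs.description


class VisuomotorController:
    """The VLA role: turns a single plan step into low-level motor commands."""

    def execute(self, step: str) -> Observation:
        # Stand-in for closed-loop perception and manipulation.
        print(f"[VLA] executing: {step}")
        return Observation(description=f"done: {step}")
```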
What does this look like in practice? These robots can now receive a high-level instruction like “prepare this workspace for the next task” and break it down into concrete steps: assess what’s currently there, determine what needs to move where, grab the right tools, and execute the plan while monitoring progress. If something goes wrong (like a tool slips or an object isn’t where expected), the reasoning model can replan on the fly.
The technical breakthrough here is the bidirectional communication between the two models. Previous approaches either had rigid, pre-programmed behaviors or tried to learn everything end-to-end (which works great in simulation but falls apart once it hits real-world complexity). This stack lets robots maintain both flexible high-level reasoning and precise low-level control.
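That back-and-forth is easiest to see as a control loop. Reusing the stub classes from the sketch above, the plan-execute-monitor-replan flow described here might look roughly like this (again, an illustration of the idea, not the real interface):

```python
# Illustrative control loop: the ER model plans and monitors, the VLA
# model acts, and a failed progress check triggers replanning on the fly.
def run_task(reasoner: EmbodiedReasoner,
             controller: VisuomotorController,
             instruction: str,
             max_replans: int = 3) -> Observation:
    obs = Observation(description="initial scene")
    plan = reasoner.plan(instruction, obs)
    replans = 0
    while plan.steps:
        step = plan.steps.pop(0)
        obs = controller.execute(step)           # low-level motor work (VLA)
        if not reasoner.succeeded(step, obs):    # progress check (ER)
            if replans == max_replans:
                raise RuntimeError(f"gave up on: {instruction!r}")
            replans += 1
            plan = reasoner.plan(instruction, obs)  # replan from the new state
    return obs


run_task(EmbodiedReasoner(), VisuomotorController(),
         "prepare this workspace for the next task")
```

The point of the loop is that neither model works alone: the ER model never touches the motors, and the VLA model never decides whether the task is on track.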
Here’s the framework for understanding why this matters: we’re moving from “task-specific robots” to “contextually intelligent agents.” Instead of programming a robot to do one thing really well, you can give it general capabilities and let it figure out how to apply them to novel situations. That’s the difference between a really good assembly line worker and someone who can walk into any workspace and immediately start being useful.
The implications are pretty staggering when you think about it. Manufacturing environments that need flexible reconfiguration, household robots that can adapt to different homes and tasks, research assistants in labs that can understand experimental protocols—we’re talking about robots that can actually collaborate with humans rather than just following pre-written scripts.
DeepMind demonstrated the system working across different robot embodiments, which tackles one of the biggest practical problems in robotics: every new robot design has traditionally meant starting training over from scratch. Now you can develop skills on one platform and transfer them to others, which could dramatically accelerate deployment timelines.
This feels like one of those moments where we look back and say “that’s when robots stopped being fancy automation and started being actual agents.” The combination of spatial reasoning, dynamic planning, and transferable skills wrapped in a system that can actually explain what it’s doing? That’s not just an incremental improvement—that’s a fundamental shift in what’s possible.
Read more from MarkTechPost
Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading).