AI-generated AI news (yes, really). I got tired of wading through apocalyptic AI headlines to find the actual innovations, so I made this: daily episodes covering breakthroughs, new tools, and real progress in AI—because someone needs to talk about what's working instead of what might kill us all. It's the AI news I want to hear, and if you're exhausted by doom narratives too, you might like it here. Short episodes, big developments, zero patience for doom narratives. Tech stack: n8n, Claude Sonnet 4, Gemini 2.5 Flash, Nano Banana, ElevenLabs, WordPress, a pile of Python, and Seriously Simple Podcasting.
OpenAI’s Chip Rebellion: Why It’s Breaking Free From Nvidia’s Grip
Here’s the thing about OpenAI’s new deal with Broadcom that actually matters: they’re not just buying different chips (boring), they’re fundamentally reshaping how AI companies think about hardware control. And honestly? This feels like the moment the industry collectively realized it was too dependent on one company’s supply chain.

Let me back up. OpenAI just struck a deal to develop custom AI chips optimized specifically for inference—that’s the moment when ChatGPT actually runs and talks to you, as opposed to the brutal training phase where models learn from mountains of data. This isn’t exactly shocking news (companies diversify suppliers, tale as old as time), but the timing and the target reveal something genuinely strategic.

Here’s the framework for understanding why this matters: think of AI infrastructure in two phases. Training is the heavyweight championship—it demands raw computational power at absurd scale, which is why Nvidia’s H100 GPUs have basically owned this space. Nvidia crushed it here because of CUDA, its software ecosystem, and years of engineering prowess. OpenAI isn’t ditching Nvidia for training (that would be insane). But inference is a different game: it runs around the clock at massive scale, and it rewards efficiency and cost per query over raw horsepower.

Broadcom gets this. They’ve built custom silicon for hyperscalers before (Google’s TPUs ring a bell?), and they know how to engineer chips that do one job really, really well. The application-specific integrated circuits (ASICs) they’re building for OpenAI will be optimized to the point that they’re probably more efficient than general-purpose GPUs for inference workloads. Translation: same performance, lower power consumption, massive cost savings when you’re running ChatGPT for millions of users simultaneously. (There’s a rough back-of-envelope sketch of that math at the end of this post.)

What’s wild is that Sam Altman has been publicly signaling this move for months. He’s talked openly about the need for “more hardware options” and “multiple chip architectures.” This wasn’t a secret—it was a warning to Nvidia that the moat was eroding. And look, I’m not here to bury Nvidia (their training dominance is still absurd), but the inference market is massive, and spreading that load across custom silicon? That’s rational infrastructure thinking.

The broader context here: we’re watching the AI industry mature past its “just buy whatever Nvidia has” phase. Supply constraints have been real (remember when everyone was fighting over H100s?), costs are astronomical, and companies with billions at stake need redundancy. OpenAI, Google, Meta—they’re all building custom silicon because relying on one vendor during an arms race is actually reckless.

No timeline or financials have been disclosed (because of course not), but this move signals something important: the race isn’t just about who builds the best models anymore. It’s about who controls the entire stack—models, software, and now hardware. That’s where the real competitive moat gets built.

Watch this space. When custom inference chips start delivering real cost advantages, we’ll see more of this. And suddenly Nvidia’s dominance looks a little less inevitable.

Source: The Wall Street Journal

Want more than just the daily AI chaos roundup? I write deeper dives and hot takes on my Substack (because apparently I have Thoughts about where this is all heading): limitededitionjonathan on Substack
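For the numerically curious, here’s that back-of-envelope sketch: a minimal Python cost model of the GPU-vs-ASIC inference tradeoff. Every number in it is a made-up placeholder (query volume, per-query cost, and the ASIC discount are all hypothetical, not disclosed figures from OpenAI, Broadcom, or Nvidia); the point is the shape of the math, not the values.

```python
# Back-of-envelope sketch of inference serving economics.
# All constants below are hypothetical placeholders for illustration,
# not disclosed figures from OpenAI, Broadcom, or Nvidia.

QUERIES_PER_DAY = 500_000_000   # assumed ChatGPT-scale daily query volume
GPU_COST_PER_QUERY = 0.002      # assumed all-in cost per query on general-purpose GPUs, USD
ASIC_DISCOUNT = 0.40            # assumed: a purpose-built inference ASIC is 40% cheaper per query

def annual_inference_cost(cost_per_query: float) -> float:
    """Yearly serving cost in USD at the assumed query volume."""
    return cost_per_query * QUERIES_PER_DAY * 365

gpu = annual_inference_cost(GPU_COST_PER_QUERY)
asic = annual_inference_cost(GPU_COST_PER_QUERY * (1 - ASIC_DISCOUNT))

print(f"GPU serving:  ${gpu / 1e9:.2f}B/yr")
print(f"ASIC serving: ${asic / 1e9:.2f}B/yr")
print(f"Delta:        ${(gpu - asic) / 1e9:.2f}B/yr")  # the savings that make custom silicon pencil out
```

Even with these fake numbers, a 40% per-query discount at half a billion queries a day compounds into hundreds of millions of dollars a year, which is why inference (not training) is where custom chips get justified first.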