
Today’s episode unpacks eight major AI developments:

- AMD’s multi-year deal to supply OpenAI with 6 gigawatts of compute, starting with the Instinct MI450, plus an equity-aligned structure that could give OpenAI up to a 10% stake in AMD.
- Meta’s plan to use Meta AI chats across Instagram, WhatsApp, and Ray-Ban Meta glasses to personalize ads and content, with no opt-out beyond not using the AI.
- OpenAI’s GPT-5 Codex, a safety-conscious coding agent built for real developer workflows, with dynamic “thinking time,” CLI/IDE/cloud integration, and strong audit controls.
- Sora 2’s shift to give rightsholders granular control over character generation, and the broader implications for licensing and user experience.
- Q3 venture funding jumping 38% year over year to $97B, heavily concentrated in AI and led by megadeals for Anthropic, xAI, and Mistral, alongside improved IPO and M&A activity.
- NTT DATA’s collaboration with AWS to deliver Amazon Connect-based AI contact centers featuring sentiment analysis, intelligent routing, and CRM/ITSM integrations.
- Deloitte’s partial refund to Australia’s DEWR after AI-generated citation errors in a $440k report, highlighting the need for disclosure, verification, and retrieval-grounded methods.
- Alibaba’s open-source Qwen3 compact multimodal models with 3B active parameters, FP8 variants, and practical edge and budget-friendly applications.

We explain the acronyms, simplify the tech, and offer practical guidance on privacy, rollout strategies, and validation. Three takeaways: compute supply shapes AI’s trajectory; AI is moving from demos to real workflows; and trust depends on robust human verification and transparent controls.
Sources: