AI Hot Takes
Jay Hack
4 episodes
1 month ago
AI is seriously disrupting software development - but the hype cycles aren't always the real story. Join Jay Hack, Mathemagician and Founder/CEO of Codegen, for weekly no-BS interviews for developers who want the truth on AI in coding from the founders, engineers, and researchers who are building the future today. We'll be digging into topics such as:
- Why code review has become the new bottleneck (and how teams are solving it)
- Human vs. Agent: Which tasks should never be automated?
- Brutal truths from founders using their own AI tools
- Which dev tools will survive when OpenAI/Anthropic inevitably copies them
...and many more AI hot takes our guests are dying to share & discuss.
Warning: Contains unfiltered views on semicolons, merge queues, and the inevitable robot uprising in your IDE.
Entrepreneurship, Business, Careers
RSS
All content for AI Hot Takes is the property of Jay Hack and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (4/4)
AI Hot Takes
AI UX needs more innovation, right now it’s embarrassing - Jeff Huber, Founder and CEO of Chroma

Jeff Huber, Founder & CEO of Chroma, joins Jay to discuss why context engineering remains the core job of AI engineers, and how modern search infrastructure is evolving for AI-native applications. 

Jeff brings deep experience building Chroma into the leading AI application database, serving thousands of production AI agents with advanced search capabilities across code, dependencies, and unstructured data. 

The conversation centers on the reality of context rot in long-context models, why regex vs semantic search debates miss the point, and how memory systems need to move beyond simple retrieval to enable true agent learning.

Jeff explains why Chroma has indexed all major open source dependencies across NPM, PyPI, Cargo, and Go, enabling agents to search exact package versions instead of hallucinating APIs. 
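
As an illustrative aside (this is not Chroma's actual Code Collections API; the collection name, snippets, and metadata below are invented for the example), the open source chromadb Python client can sketch the general idea: store documentation snippets tagged with an exact package version, so retrieval can be filtered to the version a project actually pins instead of whatever the model half-remembers.

```python
# Hypothetical sketch: version-pinned package docs in a Chroma collection,
# so an agent retrieves APIs for the exact dependency version it is using.
import chromadb

client = chromadb.Client()  # ephemeral in-memory client
docs = client.get_or_create_collection(name="package_docs")

# Index a few made-up snippets, tagging each with package name and version.
docs.add(
    ids=["requests-2.31.0-timeout", "requests-1.0.0-timeout"],
    documents=[
        "requests.get(url, timeout=...) raises requests.exceptions.Timeout on expiry",
        "older releases documented different timeout behaviour",
    ],
    metadatas=[
        {"package": "requests", "version": "2.31.0"},
        {"package": "requests", "version": "1.0.0"},
    ],
)

# At query time, filter to the version pinned in the project's lockfile.
hits = docs.query(
    query_texts=["how do I set an HTTP timeout?"],
    n_results=1,
    where={"version": "2.31.0"},
)
print(hits["documents"][0])
```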

Tune into the full episode to learn why context engineering remains the bottleneck for AI reliability and how search infrastructure will evolve beyond simple vector similarity!

HIGHLIGHTS: 
0:00 Intro
2:24 AI databases vs traditional databases
4:39 Context engineering as core AI developer job
6:21 Context rot research - million-token degradation
8:32 Long context performance vs marketing claims
12:15 Prior failures boost agent performance, successes hurt it
16:36 LLMs as query planners inside databases
19:15 Why coding became first dominant AI use case
21:06 Regex vs semantic search propaganda wars
24:21 Language servers on crack for code search
28:03 Multi-branch agent coordination
30:09 Code Collections - searching NPM/PyPI packages
31:21 Forkable collections enable 100ms Git indexing
34:09 Deep research agents are "incredibly mid"
38:17 Memory as context engineering vs weights
40:36 Agent task learning vs user personalization
43:18 Auditability problem with model-weight memory
47:21 Agents need apprenticeship models for reliability
49:54 Embarrassing lack of AI UX innovation

Connect with Jeff - https://www.linkedin.com/in/jeffchuber/

Connect with Jay - https://www.linkedin.com/in/jayhack/ or https://x.com/mathemagic1an 

Visit trychroma.com for AI application database infrastructure

1 month ago
52 minutes

AI Hot Takes
Coding startups can't beat Anthropic at their own game – Harrison Chase, Co-Founder/CEO of LangChain

Harrison Chase (Co-founder & CEO of LangChain) joins Jay to discuss the evolution from LangChain's Twitter origins to becoming the infrastructure backbone for thousands of production agents. 

Harrison talks about how LangChain started on Twitter and quickly grew into a multi-product ecosystem including LangChain, LangSmith, LangGraph, and LangGraph Platform.

The conversation revolves around deep agents, permission models, the issue with memory, and what the future holds for the coding space.

Harrison explains why competing directly with model providers on their specialized domains (like Anthropic's Claude Code) is nearly impossible, but argues the real opportunity lies in UX innovation and bringing these capabilities into existing workflows.

Tune into the full episode to learn why memory isn't the bottleneck yet and how the bitter lesson applies to agent architecture!

HIGHLIGHTS: 
0:00 Intro 
1:24 LangChain's evolution from Twitter prototype to production platform 
3:23 Model capabilities progression from 2023 to today 
4:05 Deep agents - planning, subagents, and file systems for long-term tasks 
6:37 Why string replacement beats line-by-line editing for Claude 
8:14 The impossible challenge of competing with Claude Code directly 
11:28 UX differentiation and workflow integration as winning strategies 
13:55 Unix commands and composability for non-coding agents 
16:14 Sandboxing approaches - individual VMs vs shared environments 
20:03 Agent runtime primitives - streaming, human-in-loop, time travel 
22:55 Why CLI tools might beat MCP for agent interactions 
25:13 Why base performance matters more than persistence 
28:10 External vs model-weight memory systems for auditability 
30:12 Product admiration - from cooking to Cursor's UX mastery

Connect with Harrison - https://www.linkedin.com/in/harrison-chase-961287118/
Connect with Jay - https://www.linkedin.com/in/jayhack/ or https://x.com/mathemagic1an 
Visit https://langchain.com/ for agent development tools

2 months ago
32 minutes

AI Hot Takes
Coding agents need orchestration, not specialization - Louis Knight-Webb, Co-Founder of bloop

Louis Knight-Webb (Co-founder of Bloop) joins Jay to discuss Vibe Kanban, the orchestration platform for running multiple coding agents in parallel. 

Louis brings experience from four years building developer tools, starting with enterprise code search, then COBOL modernization, and now agent orchestration as coding agents have become the new primitive. 

The conversation revolves around how Vibe Kanban solves the bottleneck of running coding agents sequentially by enabling parallel execution with proper sandboxing, task management, and review workflows. 

Louis explains how they've built 90% of Vibe Kanban using Vibe Kanban itself, creating the tightest feedback loop in tech history. The platform integrates Claude Code, Amp, Gemini CLI, and other agents with Git worktrees for lightweight sandboxing, setup/cleanup scripts, and one-click dev servers.
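
As an illustrative aside (this is not Vibe Kanban's code; the task ids and paths are made up), the worktree trick itself is plain git: each agent task gets its own branch and checkout via git worktree add, which shares the repository's object store, so spinning one up is far cheaper than a full clone or a container.

```python
# Hypothetical sketch: one lightweight git worktree per coding-agent task.
import subprocess
from pathlib import Path

REPO = Path(".")              # run from inside the main repository
WORKTREES = Path("../worktrees")

def create_task_worktree(task_id: str) -> Path:
    """Create an isolated checkout on its own branch for one agent task."""
    path = WORKTREES / task_id
    subprocess.run(
        ["git", "worktree", "add", str(path), "-b", f"agent/{task_id}"],
        cwd=REPO, check=True,
    )
    return path  # point the coding agent's working directory here

def cleanup_task_worktree(task_id: str) -> None:
    """Remove the checkout and its branch once the task is merged or discarded."""
    path = WORKTREES / task_id
    subprocess.run(["git", "worktree", "remove", "--force", str(path)],
                   cwd=REPO, check=True)
    subprocess.run(["git", "branch", "-D", f"agent/{task_id}"],
                   cwd=REPO, check=True)

if __name__ == "__main__":
    wt = create_task_worktree("task-123")   # made-up task id
    print(f"agent can now edit files under {wt}")
    cleanup_task_worktree("task-123")
```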

Tune into the full episode to learn why the future of coding is about orchestrating AI workers rather than building vertical-specific solutions!

HIGHLIGHTS: 
0:00 Intro 
2:32 Code search to COBOL modernization to agent orchestration 
5:01 Vibe Kanban demo - parallel coding agent execution 
8:07 Git worktrees for lightweight sandboxing vs Docker 
10:25 Building Vibe Kanban with Vibe Kanban 
12:47 Human as daddy agent delegating to coding agents 
14:17 Why review and planning remain human-centric bottlenecks 
16:14 Why DocuSign clones work but new ideas don't 
18:40 Integrating GitHub, project management, and terminal
21:23 Enterprise vs startup coding workflows and convergence 
25:19 Cloud version challenges and technical adjacent users 
27:33 Task types that work well with coding agents vs manual work 
30:37 The high watermark of current agent capabilities 
33:32 YOLO mode vs proper code review for velocity vs quality 
35:49 Agent logs and thought process documentation for better review

Connect with Louis - https://www.linkedin.com/in/knightwebb/
Connect with Jay - https://www.linkedin.com/in/jayhack/ or https://x.com/mathemagic1an
Visit https://vibe-kanban.com/ for agent orchestration


2 months ago
39 minutes

AI Hot Takes
AI Code Review Hot Takes with Merrill Lutsky, CEO at Graphite

Merrill Lutsky (Co-founder & CEO of Graphite) joins Jay to discuss how AI code generation is breaking traditional development workflows and why code review has become the real bottleneck. 

Merrill brings 12 years of engineering experience from Square, Oscar Health, and YC-backed startups, plus insights from building Graphite into the leading code review platform for AI-generated code. 

The conversation explores how stacked pull requests - originally designed for thousand-engineer teams at Meta and Google - are now essential for every startup using AI agents that can generate code at unprecedented scale. Graphite combines deterministic workflows (merge queues, reviewer assignment) with their AI review agent "Diamond" that reviews every code change in seconds, handling the 10x increase in code volume from tools like Claude Code and Cursor. 
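
To make "stacked" concrete (a plain-git illustration of the concept rather than Graphite's tooling, with invented branch names): each branch in a stack is cut from the one below it, so a large change ships as a chain of small, independently reviewable PRs, and when the bottom PR merges the rest rebase onward.

```python
# Hypothetical sketch: a three-deep PR stack built with plain git commands.
import subprocess

def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)

stack = ["refactor-db-layer", "add-retry-logic", "expose-new-endpoint"]

base = "main"
for branch in stack:
    git("checkout", "-b", branch, base)  # each branch starts from the one below it
    # ...a focused commit for this slice of the change would land here...
    base = branch

# Open one small PR per branch: refactor-db-layer -> main,
# add-retry-logic -> refactor-db-layer, expose-new-endpoint -> add-retry-logic.
# After the bottom PR merges, rebase the next branch onto main and repeat.
git("checkout", "add-retry-logic")
git("rebase", "main")
```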

Tune into the full episode to learn how stacked PRs can 10x your AI development workflow and why code review is becoming more important than code generation!


HIGHLIGHTS: 
0:00 Intro
2:28 Why code review is the new bottleneck 
4:17 Stacked pull requests explained
7:08 How AI agents make every team face thousand-engineer scale problems 
8:23 The shift from writing code to reviewing code as the primary engineering job 
10:28 Why humans will focus on architecture while AI handles implementation details 
12:24 Diamond AI reviewer
13:27 UX changes needed for 10x code generation volume 
16:07 Auto-stacking PRs using AI to tell the story of code changes 
18:18 Why MCP beats CLI for complex agent workflows 
20:25 Tracking AI vs human contributions in Git metadata 
23:16 Using AI to analyze contributor patterns 
25:25 The evolution from copilot to composer to background agents 
28:10 Junior vs senior engineer job prospects in an AI-dominated future 
31:02 Agentic vs RAG approaches for code review at scale 
34:24 Agent experience design and the bitter lesson for tooling

Connect with Merrill - https://www.linkedin.com/in/merrill-lutsky/

Connect with the Host, Jay Hack - https://www.linkedin.com/in/jayhack/ or https://x.com/mathemagic1an

Visit https://graphite.dev/ for stacked PR workflows

2 months ago
38 minutes
