Temporal’s co-founders join Barr Yaron and Lenny Pruss to unpack how durable execution became the backbone of modern distributed apps and why it’s a perfect fit for AI agents. Samar and Max trace the path from Amazon SWF to Uber’s Cadence to founding Temporal, dig into developer experience choices, hard lessons with Cassandra, and what “code that can’t crash” really means in practice. This episode also covers open source strategy, multi-agent orchestration, Nexus RPC, how startups and enterprises are adopting Temporal, and what scaling the company taught them as leaders.
If you ship backend systems, build AI agents, or care about reliability at scale, this one’s for you.
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Factory co-founder and CEO Matan Grinberg joins Barr Yaron to talk about the future of agent-driven development, why enterprise migrations are the perfect wedge for AI adoption, and how software engineering is moving toward a world where humans orchestrate instead of implement.
They dive into Factory’s origin story, the challenges of building AI systems for large organizations, and what the world might look like when millions of “droids” (AI agents) collaborate on software. Along the way, Matan shares surprising use cases, lessons from working with enterprises, and how his personal journey—from physics to burritos to building Factory—has shaped his leadership.
This episode is broken down into the following chapters:
00:00 – Intro and welcome
01:06 – Founding Factory: from ChatGPT experiments to AI engineers in every tab
04:05 – Early vision: autonomy for software engineering
06:14 – Why focus on the enterprise vs. indie developers
08:29 – Behavior change and technical challenges in large orgs
10:25 – Using painful migrations as a wedge for adoption
12:20 – The paradigm shift to agent-driven development
15:59 – Ubiquity: making droids available across IDEs, Slack, Jira, and more
17:16 – Why droids need the same context as human engineers
20:15 – Memory, configurability, and organizational learning
23:05 – How many droids? Specialization vs. general purpose agents
25:34 – Bespoke vs. common workflows across enterprises
27:06 – The hardest droid to build: coding itself
28:26 – Testing, costs, and scaling agentic workflows
30:29 – Why observability is essential for trustworthy agents
31:28 – Surprising use cases: PM adoption and GDPR audits
34:02 – Who Factory is building for: PMs, juniors, seniors, and beyond
36:09 – Systems thinking as the core engineering skill
38:09 – Building for enterprise trust: guardrails and governance
40:35 – What’s missing at the model layer today
42:43 – Migrations as a go-to wedge in go-to-market
43:53 – The thought experiment: what if 1M engineers collaborated?
46:07 – Scaling agent orgs: structure, monitoring, and observability
48:46 – Why everything must be recorded for droids to succeed
50:11 – Recruiting people obsessed with software development
51:37 – Burritos, routines, and how Matan has changed as a leader
53:41 – From coffee to Celsius, and why team culture matters most
54:20 – Closing thoughts: the future when agents are truly ubiquitous
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Datadog CEO Olivier Pomel joins Barr Yaron and Sunil Dhaliwal to discuss the evolution of observability, the role of AI inside Datadog, and how the future of software development will be shaped by agents, voice interfaces, and new approaches to monitoring.
From Datadog’s origins in the cloud shift to its latest AI-driven products like Bits AI, Olivier shares how the company is building for the future while staying deeply connected to its customers’ real problems.
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
In this episode of Barrchives, Barr Yaron sits down with Simon Eskildsen, co-founder and CEO of turbopuffer, to explore how he went from infrastructure challenges at Shopify to launching a groundbreaking vector database company.
Simon shares his journey from recognizing the inefficiencies of traditional vector storage solutions to creating turbopuffer, a database designed specifically for AI-driven applications. He details key moments of insight—from working with startups struggling with prohibitive storage costs, to realizing the untapped potential of affordable object storage combined with modern vector indexing techniques.
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
AI won’t fix healthcare unless it starts with the conversation.
In this episode, Zachary Lipton—Chief Technology & Science Officer at Abridge and Raj Reddy Associate Professor of Machine Learning at Carnegie Mellon University—joins Barr Yaron for a deep, technical, and emotional dive into how AI can truly transform clinical care.
From building a world-class ambient documentation system to tackling speech recognition in 28 languages, Zack shares what it takes to engineer trust into AI when the stakes are patient lives, not just clicks.
This is one of the most in-depth looks at what it actually takes to build production-grade AI in medicine.
This episode is broken down into the following chapters:
00:00 – Intro
00:34 – What Abridge actually does (hint: it’s not just notes)
01:09 – Why documentation is killing the healthcare experience
03:05 – How we got to the current burnout crisis
04:16 – The key insight: healthcare is a conversation
07:33 – Building a digital scribe: the original vision for Abridge
09:15 – Why off-the-shelf models don’t cut it in clinical speech
11:36 – 28 languages, noisy ERs, and overlapping conversations
13:20 – Predicting what enters the medical lexicon next
14:21 – How Abridge adapts models for edge-case medical speech
15:18 – Beyond transcripts: the complexity of clinical note generation
17:10 – Foundation models are tools, not solutions
18:06 – The “Ship of Theseus” strategy of model orchestration
20:32 – Style transfer for doctors, patients, and payers
20:54 – Metrics: ASR evaluation vs. documentation quality
23:43 – Stratifying ASR performance by setting, language, and jargon
24:50 – Why eval is so hard when there’s no “gold note”
25:45 – The tension between personalization and general eval
28:05 – Lessons from machine translation: building robust eval pipelines
30:32 – Abridge’s “look at the f*cking data” (LFD) internal review
33:54 – Blinded clinical eval with linked evidence and audio
36:50 – Why human fallibility is just as real as AI hallucination
38:21 – What kind of CTO Zack actually is
40:32 – Why AI product development is its own discipline
42:44 – AI innovation now lies in the product-data-model loop
44:25 – Closing the loop: how design drives modeling
45:25 – How Abridge hires researchers who care about product
47:29 – The mission filter: if you’d be equally happy at Microsoft, go
49:35 – What’s next: the AI layer for healthcare, not point solutions
52:57 – Closing thoughts
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What if deep learning isn’t the future of AI—but just part of it?
In this episode, Ori Goshen, Co-founder and Co-CEO at AI21 Labs, shares why his team set out to build reliable, deterministic AI systems—long before ChatGPT made language models mainstream.
We explore the launch of Wordtune, the development of Jamba, and the release of Maestro—AI21’s orchestration engine for enterprise agentic workflows. Ori opens up about what it takes to move beyond probabilistic systems, build trust with global enterprises, and balance research and product in one of the most competitive AI markets in the world.
If you want a masterclass in enterprise AI, model training, architecture tradeoffs, and scaling innovation out of Israel—this is it.
🔔 Subscribe for deep dives with the people shaping the future of AI.
This episode is broken down into the following chapters:
00:00 – Intro
00:47 – Why AI21 started with “deep learning is necessary but not sufficient”
02:34 – Building reliable AI systems from day one
03:46 – The risk of neural-symbolic hybrids and early bets on NLP
05:40 – Why Wordtune became the first product
08:14 – From B2C success to a pivot back into enterprise
09:43 – What AI21 learned from Wordtune for enterprise AI
11:15 – Defining “product algo fit”
12:27 – Training models before it was cool: Jurassic, Jamba, and beyond
13:38 – How to hire model-training engineers with no playbook
14:53 – Recruiting systems talent: what to look for
16:29 – How to orient your models around real enterprise needs
17:10 – Why Jamba was designed for long-context enterprise use cases
19:52 – What’s special about the Mamba + Transformer hybrid architecture
22:46 – Experimentation, ablations, and finding the right architecture
25:27 – Bringing Jamba to market: what enterprises actually care about
29:26 – The state of enterprise AI readiness in 2023 → 2025
31:41 – The biggest challenge: evaluation systems
32:10 – What most teams get wrong about evals
33:45 – Architecting reliable, non-deterministic systems
34:53 – What is Maestro and why build it now?
36:02 – Replacing “prompt and pray” with AI for AI systems
38:43 – Building interpretable and explicit agentic systems
41:09 – Balancing control and flexibility in orchestration
43:36 – What enterprise AI might actually look like in 5 years
47:03 – Why Israel is a global powerhouse for AI
49:44 – How Ori has evolved as a leader under extreme volatility
52:26 – Staying true to your mission through chaos
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What does it take to reimagine the browser—one of the most commoditized technologies in the world—for the enterprise?
In this episode, Ofer Ben Noon, founder of Talon and now part of Palo Alto Networks, shares the wild journey from exploring digital health to building the world’s first enterprise-grade secure browser.
If you care about AI x cybersecurity, endpoint security, or enterprise infrastructure—this is a deep, real, and tactical look behind the curtain.
This episode is broken down into the following chapters:
00:00 – Intro
01:05 – Why Ofer originally wanted to build in digital health
02:15 – The pandemic shift to SaaS, hybrid work, and browser-first
04:44 – Why Chromium was the perfect technical unlock
05:27 – The insane complexity of compiling Chromium
07:10 – What makes an enterprise browser different from a consumer browser
09:36 – Browser isolation, web security, and file security
10:50 – Why Talon needed a massive seed round from day one
11:53 – What an MVP looked like for Talon
14:08 – Early skepticism from CISOs and how Talon earned trust
16:50 – Discovering new enterprise use cases over time
17:11 – How AI and Precision AI power Talon’s security engine
19:21 – Why Ofer chose to sell to Palo Alto Networks
21:06 – Petabytes of data, 30B+ attacks blocked daily
23:44 – The risks of LLMs and generative AI in the browser
24:24 – What Talon sees when users interact with AI tools
25:05 – The #1 risk: privacy and user error
26:43 – Why AI use must be governed like any other SaaS
27:22 – How Talon built secure enterprise access to ChatGPT
28:05 – Mapping 1,000+ GenAI tools and classifying risk
29:43 – Real-time blocking, DLP, and prompt visibility
31:25 – Why user mistakes are accelerating in the age of agents
32:04 – How autonomous AI agents amplify risk across the enterprise
33:55 – The browser as the new control layer for users and AI
36:57 – What AI is unlocking in cybersecurity orgs
39:36 – Why data volume will determine which security companies win
40:28 – Ofer’s leadership philosophy and staying grounded post-acquisition
42:40 – Closing reflections
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Vanta helped create the automated security and compliance category—and now, they’re redefining it with AI.
In this episode, Christina Cacioppo (CEO & Co-Founder) and Iccha Sethi (VP of Engineering) join Barr Yaron to go deep on how AI is transforming the way Vanta builds products, evaluates models, and helps companies earn and demonstrate trust.
They cover:
- Why compliance is the perfect playground for AI
- How Vanta balances reliability, explainability, and scale
- What it takes to build golden datasets in high-stakes domains
- The real-world AI infrastructure behind Vanta AI
If you care about real AI product development—not just hype—this is a masterclass in doing it right.
🔔 Subscribe for more deep dives with leading AI builders and thinkers.
This episode is broken down into the following chapters:
00:00 – Intro
01:06 – Christina’s early entrepreneurial roots (Beanie Babies & all)
02:51 – From venture to founder: why Christina started Vanta
04:00 – What Vanta actually does
05:32 – Iccha on why she joined as VP of Engineering
07:09 – When Vanta started leaning into AI
08:33 – AI’s growing role in Vanta’s product roadmap
09:52 – How AI powers questionnaire automation
12:25 – Using LLMs to map policy docs to cloud configs
13:27 – Building trust: human-in-the-loop and explainability
16:03 – Vanta’s evaluation system for AI features
18:17 – How golden datasets are constructed (and maintained)
20:59 – Feedback loops: online eval from user behavior
22:43 – How model feedback informs product updates
23:38 – What Vanta wants from foundation models (but isn’t getting yet)
24:32 – Retrieval: how Vanta processes customer documents
27:13 – The hardest technical challenges in AI integration
29:41 – Internal adoption: how non-technical teams are using AI too
31:52 – Vanta’s centralized AI team & how other teams plug in
33:27 – Internal education: building AI intuition org-wide
34:31 – From prototype to production: experimentation culture
36:41 – Customer sentiment around AI in compliance workflows
38:22 – Enterprise buyers & the AI “kill switch”
39:06 – Personalized experiences as the future of trust
40:21 – How enterprises are approaching AI risk assessments
41:50 – What excites Iccha and Christina about the future of AI at Vanta
Subscribe to the Barrchives newsletter: https://www.barrchives.com
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
What if AI could talk back instantly—and naturally?
In this episode, Karan Goel, Co-founder & CEO of Cartesia, joins Barr Yaron to unpack the future of voice AI, state space models (SSMs), and why audio is the next frontier in AI.
Karan shares the founding story behind Cartesia, explains how alternate architectures like Mamba enable ultra-efficient, low-latency inference, and walks through how his team is building the fastest text-to-speech model in the world—while obsessing over every millisecond.
Whether you’re into model architectures, AI infrastructure, or the future of voice interfaces, this episode delivers technical depth, startup lessons, and a roadmap for what’s coming next.
This episode is broken down into the following chapters:
00:00 – Intro
01:06 – Karan’s journey from CMU PhD to startup founder
03:56 – Why Cartesia is built around state space models
06:49 – What makes SSMs different from transformers
09:14 – Why compression matters for long-running AI systems
11:13 – What data types SSMs are best (and worst) for
13:39 – Scaling SSMs: What’s possible and what’s missing
15:31 – Hardware, GPUs & why SSMs work well on existing infra
18:46 – Landing on audio: Cartesia’s first core modality
22:38 – Navigating the model vs. market debate in AI startups
26:36 – How Cartesia built Sonic, their ultra-low latency TTS model
28:17 – Why latency is the #1 challenge in voice AI
30:46 – Tricks vs. model-first thinking: Baking it into the model
34:01 – How Cartesia balances fast execution with deep research
36:26 – Building with part-time academic co-founders
38:13 – Yes, every employee gets a personal Yoshi
40:02 – Where voice AI is being adopted first (telephony + beyond)
42:24 – Multilingual modeling & the long tail of language
45:02 – Voice as a new computing interface
46:26 – Why voice notes are the future (and Barr’s hot take)
49:56 – How Cartesia evaluates its models
52:44 – How Karan has grown as a founder and leader
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
AI is changing the game for marketing, but how do businesses truly integrate AI into their workflows? In this episode, Kashish Gupta, Co-founder & Co-CEO of Hightouch, joins Barr Yaron to break down the rise of Composable CDPs, the role of AI decisioning, and the future of automated marketing.
They explore:
- How Hightouch pivoted into the Composable CDP space
- Why marketers struggle with data democratization and AI adoption
- How reinforcement learning (RL) agents are optimizing marketing campaigns
- What AI can (and can’t) replace in modern marketing teams
- The biggest technical challenges in building AI decisioning systems
If you’re a marketer, data leader, or AI enthusiast, this episode is packed with insights into the future of AI-powered marketing.
This episode is broken down into the following chapters:
00:00 – Welcome & Introduction
01:09 – The Hightouch founding story & early pivots
02:28 – Why Composable CDPs are the future of data-driven marketing
03:42 – Overcoming market education challenges
05:08 – The manual way of marketing vs. AI-powered decisioning
09:21 – How AI decisioning works inside Hightouch
12:24 – Why reinforcement learning (RL) is the right AI model for marketing
13:45 – How Hightouch sets up reward functions for AI agents
16:04 – How much data do you actually need for AI-driven marketing?
17:27 – The biggest misconceptions about AI and marketing
21:00 – Building an AI team at Hightouch & hiring strategy
24:27 – The biggest technical challenges of AI decisioning
27:10 – How customers interact with AI-driven marketing models
29:40 – What’s next for AI decisioning & autonomous marketing agents?
34:26 – The future of AI & CDPs – where is the industry headed?
37:42 – What remains uniquely human in marketing?
41:23 – Advice for marketers considering AI adoption
44:16 – The future of AI agents working together
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
This episode consists of the following chapters:
00:00 – Introduction to Jesse Zhang and Decagon
02:33 – Why customer support emerged as a clear use case for AI
05:00 – The importance of discovery and understanding customer value
08:20 – The Decagon product architecture: core AI agent, routing, and human assistance
11:01 – How enterprise logic is integrated into the AI agent
15:45 – Shared frameworks across different customers and industries
17:12 – How AI agents are changing organizational planning
19:59 – Automatically identifying knowledge gaps to improve resolution rates
22:57 – Handling routing across different modalities (text and voice)
26:09 – The continued importance of humans in customer support
30:17 – The evolving role of human agents: supervising, QA, and logic building
36:57 – Value-based pricing tied to the work AI performs
39:17 – How sophisticated buyers evaluate AI customer support solutions
Subscribe to the Barrchives newsletter: https://www.barrchives.com/
Spotify: https://open.spotify.com/show/37O8Pb0LgqpqTXo2GZiPXf
Apple: https://podcasts.apple.com/us/podcast/barrchives/id1774292613
Twitter: https://x.com/barrnanas
LinkedIn: https://www.linkedin.com/in/barryaron/
Regal co-founders Alex Levin and Rebecca Greene see a future where AI doesn’t just assist human agents in contact centers; it replaces them entirely at the front lines. In this episode of Barrchives, we discuss how they’re building voice AI technology, the unique challenges of working with audio models, and what the future of customer service looks like.
Most of what you hear about AI right now is in text, but Mikey Shulman (co-founder and CEO of Suno) would tell you that audio is a much more interesting medium to work with. How do you use AI to generate music? What makes audio data uniquely difficult to parse? And how do you build audio models that cater to unique, subjective human preferences on music? Suno is building a future where anyone can make great music. In this episode of Barrchives, I sat down with Mikey (who, like his co-founder, is a musician) to talk about how they do what they do, from why they chose a transformer-based architecture to how they test new models when outputs are so subjective.
AI and data teams, in a sense, do the same thing: make decisions based on data. So how do you build AI that *helps* data teams do their best work? Hex was one of the first companies in its space to embrace language models and build code generation features into its data workspace. In this episode of Barrchives, I went deep with Hex’s co-founder and CEO, Barry McCardel, about Hex’s journey towards becoming an AI company.
When it comes to building cutting-edge technology companies, Erik Bernhardsson's journey with Modal exemplifies the mix of passion, technical ingenuity, and adaptability required to succeed. From tackling infrastructure challenges in the data and AI space to navigating the nuanced dynamics of scaling a business, Modal’s story provides a masterclass in modern tech entrepreneurship.
Building a company isn’t for the faint of heart. It’s messy, nonlinear, and full of surprises. Jennifer Smith, CEO of Scribe, knows this better than most. In a recent episode of Barrchives, she shared the hard-earned lessons that helped her navigate the twists and turns of finding product-market fit. From trusting her instincts when others doubted her to rethinking how humans and AI can work together, Jennifer’s journey offers powerful insights for anyone trying to build something transformative.
From his early work in physics and mathematical simulation to leading Vision Pro development at Apple, Amit Jain has consistently pushed the boundaries of how computers process and understand our world. Now as the founder and CEO of Luma AI, he's tackling an even bigger challenge: building what he calls a "universal imagination engine." In this episode of Barrchives, Amit shares his perspective on why unified AI models outperform specialized ones, how visual data transforms AI reasoning, and what it takes to build infrastructure capable of training on millions of videos. But underlying these technical insights is a broader vision about technology's role in human progress.
Naveen Rao discovered artificial intelligence through science fiction in middle school, devouring the works of Asimov while his peers in Kentucky were doing whatever kids in Kentucky normally did. Growing up in a family of doctors, he chose a less conventional path, following his early interests in programming and circuit building.
After establishing himself as an engineer, he stepped away to pursue a neuroscience PhD at Brown, finishing in record time. This combination of engineering expertise and biological understanding shaped his approach to AI hardware development at his companies Nervana (acquired by Intel) and MosaicML (acquired by Databricks).
In this conversation, the VP of AI at Databricks breaks down what's actually needed for meaningful progress in machine reasoning (hint: it's not just bigger models), and why deep tech development needs a different playbook than what we're used to.
Tristan Handy’s decision to build dbt Labs meant stepping away from steady consulting revenue to build a product. With support from an open-source community and interest from enterprise clients, dbt quickly found a growing user base. By 2019, it had become a go-to tool for data professionals looking to streamline workflows and prepare data for AI. Today, dbt Labs serves over 50,000 teams globally, with dbt Cloud helping organizations tackle modern analytics and AI needs through data transformation, observability, and orchestration.
In this episode of Barrchives, Tristan discusses how dbt is adapting to support the evolving demands of today’s data ecosystems. He shares insights on how data teams can move beyond manual, repetitive tasks to create environments where data becomes a valuable, collaborative asset. From AI’s potential as an analytical “thought partner” to the emerging standards reshaping data access, Tristan explores the shifts making data infrastructure more adaptable and effective.
Cristóbal Valenzuela, CEO and co-founder of Runway, is using AI to open up new pathways of expressing creativity. Cristóbal founded Runway with the idea that AI could expand the boundaries of what is possible in storytelling. This vision transformed the company from a model marketplace into a unique toolkit for artists, filmmakers, and creators that brings their visions to life.