Engineering Enablement by DX
DX
89 episodes
1 week ago
This is a weekly podcast focused on developer productivity and the teams and leaders dedicated to improving it. Topics include in-depth interviews with Platform and DevEx teams, as well as the latest research and approaches to measuring developer productivity. The EE podcast is hosted by Abi Noda, the founder and CEO of DX (getdx.com) and a published researcher focused on developing measurement methods that help organizations improve developer experience and productivity.
Technology, Business, Management
Episodes (20/89)
How Monzo runs data-driven AI experimentation

In this episode of Engineering Enablement, host Laura Tacho talks with Fabien Deshayes, who leads multiple platform engineering teams at Monzo Bank. Fabien explains how Monzo is adopting AI responsibly within a highly regulated industry, balancing innovation with structure, control, and data-driven decision-making.


They discuss how Monzo runs structured AI trials, measures adoption and satisfaction, and uses metrics to guide investment and training. Fabien shares why the company moved from broad rollouts to small, focused cohorts, how they are addressing existing PR review bottlenecks that AI has intensified, and what they have learned from empowering product managers and designers to use AI tools directly.


He also offers insights into budgeting and experimentation, the results Monzo is seeing from AI-assisted engineering, and his outlook on what comes next, from agent orchestration to more seamless collaboration across roles.

Where to find Fabien Deshayes: 

• LinkedIn: https://www.linkedin.com/in/fabiendeshayes


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro  

(01:01) An overview of Monzo Bank and Fabien’s role

(02:05) Monzo’s careful, structured approach to AI experimentation  

(05:30) How Monzo’s AI journey began  

(06:26) Why Monzo chose a structured approach to experimentation and what criteria they used  

(09:21) How Monzo selected AI tools for experimentation  

(11:51) Why individual tool stipends don’t work for large, regulated organizations  

(15:32) How Monzo measures the impact of AI tools and uses the data  

(18:10) Why Monzo limits AI tool trials to small, focused cohorts  

(20:54) The phases of Monzo’s AI rollout and how learnings are shared across the organization  

(22:43) What Monzo’s data reveals about AI usage and spending  

(24:30) How Monzo balances AI budgeting with innovation  

(26:45) Results from DX’s spending poll and general advice on AI budgeting  

(28:03) What Monzo’s data shows about AI’s impact on engineering performance  

(29:50) The growing bottleneck in PR reviews and how Monzo is solving it with tenancies  

(33:54) How product managers and designers are using AI at Monzo  

(36:36) Fabien’s advice for moving the needle with AI adoption  

(38:42) The biggest changes coming next in AI engineering 


Referenced:

  • Monzo 
  • The Go Programming Language
  • Swift.org
  • Kotlin
  • GitHub Copilot in VS Code 
  • Cursor
  • Windsurf
  • Claude Code
  • Planning your 2026 AI tooling budget: guidance for engineering leaders
1 week ago
41 minutes

Planning your 2026 AI tooling budget: guidance for engineering leaders

In this episode of Engineering Enablement, Laura Tacho and Abi Noda discuss how engineering leaders can plan their 2026 AI budgets effectively amid rapid change and rising costs. Drawing on data from DX’s recent poll and industry benchmarks, they explore how much organizations should expect to spend per developer, how to allocate budgets across AI tools, and how to balance innovation with cost control.

Laura and Abi also share practical insights on building a multi-vendor strategy, evaluating ROI through the right metrics, and ensuring continuous measurement before and after adoption. They discuss how to communicate AI’s value to executives, avoid the trap of cost-cutting narratives, and invest in enablement and training to make adoption stick.


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda  

• Substack: https://substack.com/@abinoda


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro: Setting the stage for AI budgeting in 2026

(01:45) Results from DX’s AI spending poll and early trends

(03:30) How companies are currently spending and what to watch in 2026

(04:52) Why clear definitions for AI tools matter and how Laura and Abi think about them

(07:12) The entry point for 2026 AI tooling budgets and emerging spending patterns

(10:14) Why 2026 is the year to prove ROI on AI investments

(11:10) How organizations should approach AI budgeting and allocation

(15:08) Best practices for managing AI vendors and enterprise licensing

(17:02) How to define and choose metrics before and after adopting AI tools

(19:30) How to identify bottlenecks and AI use cases with the highest ROI

(21:58) Key considerations for AI budgeting 

(25:10) Why AI investments are about competitiveness, not cost-cutting

(27:19) How to use the right language to build trust and executive buy-in

(28:18) Why training and enablement are essential parts of AI investment

(31:40) How AI add-ons may increase your tool costs

(32:47) Why custom and fine-tuned models aren’t relevant for most companies today

(34:00) The tradeoffs between stipend models and enterprise AI licenses


Referenced:

  • DX Core 4 Productivity Framework
  • Measuring AI code assistants and agents
  • 2025 State of AI Report: The Builder's Playbook
  • GitHub Copilot · Your AI pair programmer
  • Cursor
  • Glean
  • Claude Code
  • ChatGPT
  • Windsurf
  • Track Claude Code adoption, impact, and ROI, directly in DX
  • Measuring AI code assistants and agents with the AI Measurement Framework
  • Driving enterprise-wide AI tool adoption
  • Sentry
  • Poolside
3 weeks ago
38 minutes

The evolving role of DevProd teams in the AI era

DX CEO Abi Noda is joined by CTO Laura Tacho to discuss the evolving role of Platform and DevProd teams in the AI era. Together, they unpack how AI is reshaping platform responsibilities, from evaluation and rollout to measurement, tool standardization, and guardrails. They explore why fundamentals like documentation and feedback loops matter more than ever for both developers and AI agents. They also share insights on reducing tool sprawl, hardening systems for higher throughput, and leveraging AI to tackle tech debt, modernize legacy code, and improve workflows across the SDLC.

Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda  

• Substack: https://substack.com/@abinoda


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro: Why platform teams need to evolve

(02:34) The challenge of defining platform teams and how AI is changing expectations

(04:44) Why evaluating and rolling out AI tools is becoming a core platform responsibility

(07:14) Why platform teams need solid measurement frameworks to evaluate AI tools

(08:56) Why platform leaders should champion education and advocacy on measurement

(11:20) How AI-generated code stresses pipelines and why platform teams must harden systems

(12:24) Why platform teams must go beyond training to standardize tools and create workflows

(14:31) How platform teams control tool sprawl

(16:22) Why platform teams need strong guardrails and safety checks

(18:41) The importance of standardizing tools and knowledge

(19:44) The opportunity for platform teams to apply AI at scale across the organization

(23:40) Quick recap of the key points so far

(24:33) How AI helps modernize legacy code and handle migrations

(25:45) Why focusing on fundamentals benefits both developers and AI agents

(27:42) Identifying SDLC bottlenecks beyond AI code generation

(30:08) Techniques for optimizing legacy code bases 

(32:47) How AI helps tackle tech debt and large-scale code migrations

(35:40) Tools across the SDLC


Referenced:

  • DX Core 4 Productivity Framework
  • Measuring AI code assistants and agents
  • Abi Noda's LinkedIn post
  • Measuring AI code assistants and agents with the AI Measurement Framework
  • The SPACE framework: A comprehensive guide to developer productivity
  • Common workflows - Anthropic
  • Enterprise Tech Leadership Summit Las Vegas 2025
  • Driving enterprise-wide AI tool adoption with Bruno Passos
  • Accelerating Large-Scale Test Migration with LLMs | by Charles Covey-Brandt | The Airbnb Tech Blog | Medium
  • Justin Reock - DX | LinkedIn
  • A New Tool Saved Morgan Stanley More Than 280,000 Hours This Year - Business Insider
1 month ago
37 minutes

Lessons from Twilio’s multi-year platform consolidation

In this episode, host Laura Tacho speaks with Jesse Adametz, Senior Engineering Leader on the Developer Platform at Twilio. Jesse is leading Twilio’s multi-year platform consolidation, unifying tech stacks across large acquisitions and driving migrations at enterprise scale. He discusses platform adoption, the limits of Kubernetes, and how Twilio balances modernization with pragmatism. The conversation also explores treating developer experience as a product, offering “change as a service,” and Twilio’s evolving approach to AI adoption and platform support.

Where to find Jesse Adametz: 

• LinkedIn: https://www.linkedin.com/in/jesseadametz/

• X: https://x.com/jesseadametz

• Website: https://www.jesseadametz.com/


Where to find Laura Tacho:

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro

(01:30) Jesse’s background and how he ended up at Twilio

(04:00) What SRE teaches leaders and ICs

(06:06) Where Twilio started the post-acquisition integration

(08:22) Why platform migrations can’t follow a straight-line plan

(10:05) How Twilio balances multiple strategies for migrations

(12:30) The human side of change: advocacy, training, and alignment

(17:46) Treating developer experience as a first-class product

(21:40) What “change as a service” looks like in practice

(24:57) A mandateless approach: creating voluntary adoption through value

(28:50) How Twilio demonstrates value with metrics and reviews

(30:41) Why Kubernetes wasn’t the right fit for all Twilio workloads 

(36:12) How Twilio decides when to expose complexity

(38:23) Lessons from Kubernetes hype and how AI demands more experimentation

(44:48) Where AI fits into Twilio’s platform strategy

(49:45) How guilds fill needs the platform team hasn’t yet met

(51:17) The future of platform in centralizing knowledge and standards

(54:32) How Twilio evaluates tools for fit, pricing, and reliability 

(57:53) Where Twilio applies AI in reliability, and where Jesse is skeptical

(59:26) Laura’s vibe-coded side project built on Twilio

(1:01:11) How external lessons shape Twilio’s approach to platform support and docs


Referenced:

  • The AI Measurement Framework
  • Experian
  • Transact-SQL - Wikipedia
  • Twilio
  • Kubernetes
  • Copilot
  • Claude Code
  • Windsurf
  • Cursor
  • Bedrock
1 month ago
1 hour 6 minutes

Driving enterprise-wide AI tool adoption

In this episode of Engineering Enablement, host Laura Tacho talks with Bruno Passos, Product Lead for Developer Experience at Booking.com, about how the company is rolling out AI tools across a 3,000-person engineering team.


Bruno shares how Booking.com set ambitious innovation goals, why cultural change mattered as much as technology, and the education practices that turned hesitant developers into daily users. He also reflects on the early barriers, from low adoption and knowledge gaps to procurement hurdles, and explains the interventions that worked, including learning paths, hackathon-style workshops, Slack communities, and centralized procurement. The result is that Booking.com now sits in the top 25 percent of companies for AI adoption.

Where to find Bruno Passos:

• LinkedIn: https://www.linkedin.com/in/brpassos/

• X: https://x.com/brunopassos


Where to find Laura Tacho:

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• X: https://x.com/rhein_wein

• Website: https://lauratacho.com/

• Laura’s course (Measuring Engineering Performance and AI Impact): https://lauratacho.com/developer-productivity-metrics-course


In this episode, we cover:

(00:00) Intro

(01:09) Bruno’s role at Booking.com and an overview of the business 

(02:19) Booking.com’s goals when introducing AI tooling

(03:26) Why Booking.com set such an ambitious innovation-ratio goal

(06:46) The beginning of Booking.com’s journey with AI

(08:54) Why the initial adoption of Cody was low

(13:17) How education and enablement fueled adoption

(15:48) The importance of a top-down cultural change for AI adoption

(17:38) The ongoing journey of determining the right metrics

(21:44) Measuring the longer-term impact of AI 

(27:04) How Booking.com solved internal bottlenecks to testing new tools

(32:10) Booking.com’s framework for evaluating new tools

(35:50) The state of adoption at Booking.com and efforts to expand AI use

(37:07) What’s still undetermined about AI’s impact on PR/MR quality

(39:48) How Booking.com is addressing lagging adoption and monitoring churn

(43:24) How Booking.com’s Slack community lowers friction for questions and support

(44:35) Closing thoughts on what’s next for Booking.com’s AI plan


Referenced:

  • Measuring AI code assistants and agents
  • DX Core 4 Framework
  • Booking.com
  • Sourcegraph Search
  • Cody | AI coding assistant from Sourcegraph
  • Greyson Junggren - DX | LinkedIn
2 months ago
46 minutes

Measuring AI code assistants and agents with the AI Measurement Framework

In this episode of Engineering Enablement, DX CTO Laura Tacho and CEO Abi Noda break down how to measure developer productivity in the age of AI using DX’s AI Measurement Framework. Drawing on research with industry leaders, vendors, and hundreds of organizations, they explain how to move beyond vendor hype and headlines to make data-driven decisions about AI adoption.


They cover why some fundamentals of productivity measurement remain constant, the pitfalls of over-relying on flawed metrics like acceptance rate, and how to track AI’s real impact across utilization, quality, and cost. The conversation also explores measuring agentic workflows, expanding the definition of “developer” to include new AI-enabled contributors, and avoiding second-order effects like technical debt and slowed PR throughput.

Whether you’re rolling out AI coding tools, experimenting with autonomous agents, or just trying to separate signal from noise, this episode offers a practical roadmap for understanding AI’s role in your organization—and ensuring it delivers sustainable, long-term gains.


Where to find Laura Tacho:

• X: https://x.com/rhein_wein

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• Website: https://lauratacho.com/

Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 

• Substack: https://substack.com/@abinoda


In this episode, we cover:

(00:00) Intro

(01:26) The challenge of measuring developer productivity in the AI age

(04:17) Measuring productivity in the AI era — what stays the same and what changes

(07:25) How to use DX’s AI Measurement Framework 

(13:10) Measuring AI’s true impact from adoption rates to long-term quality and maintainability

(16:31) Why acceptance rate is flawed — and DX’s approach to tracking AI-authored code

(18:25) Three ways to gather measurement data

(21:55) How Google measures time savings and why self-reported data is misleading

(24:25) How to measure agentic workflows and a case for expanding the definition of developer

(28:50) A case for not overemphasizing AI’s role

(30:31) Measuring second-order effects 

(32:26) Audience Q&A: applying metrics in practice

(36:45) Wrap up: best practices for rollout and communication 


Referenced:

  • DX Core 4 Productivity Framework
  • Measuring AI code assistants and agents
  • AI is making Google engineers 10% more productive, says Sundar Pichai - Business Insider
2 months ago
41 minutes

How to cut through the hype and measure AI’s real impact (Live from LeadDev London)

In this special episode of the Engineering Enablement podcast, recorded live at LeadDev London, DX CTO Laura Tacho explores the growing gap between AI headlines and the reality inside engineering teams—and what leaders can do to close it.


Laura shares data from nearly 39,000 developers across 184 companies, highlights the Core 4 and introduces the AI Measurement Framework, and offers a practical playbook for using data to improve developer experience, measure AI’s true impact, and build better software without compromising long-term performance.


Where to find Laura Tacho:

• X: https://x.com/rhein_wein

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• Website: https://lauratacho.com/


In this episode, we cover:

(00:00) Intro: Laura’s keynote from LDX3

(01:44) The problem with asking “How much faster can we go with AI?”

(03:02) How the disappointment gap creates barriers to AI adoption

(06:20) What AI adoption looks like at top-performing organizations

(07:53) What leaders must do to turn AI into meaningful impact

(10:50) Why building better software with AI still depends on fundamentals

(12:03) An overview of the DX Core 4 Framework

(13:22) Why developer experience is the biggest performance lever

(15:12) How Block used Core 4 and DXI to identify 500,000 hours in time savings

(16:08) How to get started with Core 4

(17:32) Measuring AI with the AI Measurement Framework

(21:45) Final takeaways and how to get started with confidence


Referenced:

  • LDX3 by LeadDev | The Festival of Software Engineering Leadership | London
  • Software engineering with LLMs in 2025: reality check
  • SPACE framework, PRs per engineer, AI research
  • The AI adoption playbook: Lessons from Microsoft's internal strategy
  • DX Core 4 Productivity Framework
  • Nicole Forsgren
  • Margaret-Anne Storey
  • Dropbox.com
  • Etsy
  • Pfizer
  • Drew Houston - Dropbox | LinkedIn
  • Block
  • Cursor
  • Dora.dev
  • Sourcegraph
  • Booking.com
3 months ago
23 minutes

Unpacking METR’s findings: Does AI slow developers down?

In this episode of the Engineering Enablement podcast, host Abi Noda is joined by Quentin Anthony, Head of Model Training at Zyphra and a contributor at EleutherAI. Quentin participated in METR’s recent study on AI coding tools, which revealed that developers often slowed down when using AI—despite feeling more productive. He and Abi unpack the unexpected results of the study, which tasks AI tools actually help with, and how engineering teams can adopt them more effectively by focusing on task-level fit and developing better digital hygiene.


Where to find Quentin Anthony: 

• LinkedIn: https://www.linkedin.com/in/quentin-anthony/

• X: https://x.com/QuentinAnthon15


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro

(01:32) A brief overview of Quentin’s background and current work

(02:05) An explanation of METR and the study Quentin participated in 

(11:02) Surprising results of the METR study 

(12:47) Quentin’s takeaways from the study’s results 

(16:30) How developers can avoid bloated code bases through self-reflection

(19:31) Signs that you’re not making progress with a model 

(21:25) What is “context rot”?

(23:04) Advice for combating context rot

(25:34) How to make the most of your idle time as a developer

(28:13) Developer hygiene: the case for selectively using AI tools

(33:28) How to interact effectively with new models

(35:28) Why organizations should focus on tasks that AI handles well

(38:01) Where AI fits in the software development lifecycle

(39:40) How to approach testing with models

(40:31) What makes models different 

(42:05) Quentin’s thoughts on agents 


Referenced:

  • DX Core 4 Productivity Framework
  • Zyphra
  • EleutherAI
  • METR
  • Cursor
  • Claude
  • LibreChat
  • Google Gemini
  • Introducing OpenAI o3 and o4-mini
  • METR’s study on how AI affects developer productivity
  • Quentin Anthony on X: "I was one of the 16 devs in this study."
  • Context rot from Hacker News
  • Tracing the thoughts of a large language model
  • Kimi
  • Grok 4 | xAI
3 months ago
43 minutes

CarGurus’ journey building a developer portal and increasing AI adoption

In this episode, Abi Noda talks with Frank Fodera, Director of Engineering for Developer Experience at CarGurus. Frank shares the story behind CarGurus’ transition from a monolithic architecture to microservices, and how that journey led to the creation of their internal developer portal, Showroom. He outlines the five pillars of the IDP, how it integrates with infrastructure, and why they chose to build rather than buy. The conversation also explores how CarGurus is approaching AI tool adoption across the engineering team, from experiments and metrics to culture change and leadership buy-in.


Where to find Frank Fodera:

• LinkedIn: https://www.linkedin.com/in/frankfodera/


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: IDPs (Internal Developer Portals) and AI 

(02:07) The IDP journey at CarGurus

(05:53) A breakdown of the people responsible for building the IDP

(07:05) The five pillars of the Showroom IDP

(09:12) How DevX worked with infrastructure

(11:13) The business impact of Showroom

(13:57) The transition from monolith to microservices and struggles along the way

(15:54) The benefits of building a custom IDP

(19:10) How CarGurus drives AI coding tool adoption 

(28:48) Getting started with an AI initiative

(31:50) Metrics to track 

(34:06) Tips for driving AI adoption


Referenced:

  • DX Core 4 Productivity Framework 
  • Internal Developer Portals: Use Cases and Key Components
  • Strangler Fig Pattern - Azure Architecture Center | Microsoft Learn
  • Spotify for Backstage
  • The AI adoption playbook: Lessons from Microsoft's internal strategy
4 months ago
39 minutes

Snowflake’s playbook for operational excellence

In this episode, Abi Noda speaks with Gilad Turbahn, Head of Developer Productivity, and Amy Yuan, Director of Engineering at Snowflake, about how their team builds and sustains operational excellence. They break down the practices and principles that guide their work—from creating two-way communication channels to treating engineers as customers. The conversation explores how Snowflake fosters trust, uses feedback loops to shape priorities, and maintains alignment through thoughtful planning. You’ll also hear how they engage with teams across the org, convert detractors, and use Customer Advisory Boards to bring voices from across the company into the decision-making process.


Where to find Amy Yuan: 

• LinkedIn: https://www.linkedin.com/in/amy-yuan-a8ba783/


Where to find Gilad Turbahn:

• LinkedIn: https://www.linkedin.com/in/giladturbahn/


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: an overview of operational excellence

(04:13) Obstacles to executing with operational excellence

(05:51) An overview of the Snowflake playbook for operational excellence

(08:25) Who does the work of reaching out to customers

(09:06) The importance of customer engagement

(10:19) How Snowflake does customer engagement 

(14:13) The types of feedback received and the two camps (supporters and detractors)

(16:55) How to influence detractors and how detractors actually help 

(18:27) Using insiders as messengers

(22:48) An overview of Snowflake’s customer advisory board

(26:10) The importance of meeting in person (learnings from Warsaw and Berlin office visits)

(28:08) Managing up

(30:07) How planning is done at Snowflake

(36:25) Setting targets for OKRs, and Snowflake’s philosophy on metrics 

(39:22) The annual plan and how it’s shared 


Referenced:

  • CTO buy-in, measuring sentiment, and customer focus
  • Snowflake
  • Benoit Dageville - Snowflake Computing | LinkedIn
  • Thierry Cruanes - Snowflake Computing | LinkedIn
4 months ago
45 minutes

The biggest obstacles preventing GenAI adoption — and how to overcome them

In this episode, Abi Noda speaks with DX CTO Laura Tacho about the real obstacles holding back AI adoption in engineering teams. They discuss why technical challenges are rarely the blocker, and how fear, unclear expectations, and inflated hype can stall progress. Laura shares practical strategies for driving adoption, including how to model usage from the top down, build momentum through champions and training programs, and measure impact effectively—starting with establishing a baseline before introducing AI tools.


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• Website: https://lauratacho.com/


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: The full spectrum of AI adoption

(03:02) The hype of AI

(04:46) Some statistics around the current state of AI coding tool adoption

(07:27) The real barriers to AI adoption

(09:31) How to drive AI adoption 

(15:47) Measuring AI’s impact 

(19:49) More strategies for driving AI adoption 

(23:54) The methods companies are actually using to drive impact

(29:15) Questions from the chat 

(39:48) Wrapping up


Referenced:

  • DX Core 4 Productivity Framework
  • The AI adoption playbook: Lessons from Microsoft's internal strategy
  • Microsoft CEO says up to 30% of the company's code was written by AI | TechCrunch
  • Viral Shopify CEO Manifesto Says AI Now Mandatory For All Employees
  • DORA | Impact of Generative AI in Software Development
  • Guide to AI assisted engineering
  • Justin Reock - DX | LinkedIn
5 months ago
42 minutes

DORA’s latest research on AI impact

In this episode, Abi Noda speaks with Derek DeBellis, lead researcher at Google’s DORA team, about their latest report on generative AI’s impact on software productivity.

They dive into how the survey was built, what it reveals about developer time and “flow,” and the surprising gap between individual and team outcomes. Derek also shares practical advice for leaders on measuring AI impact and aligning metrics with organizational goals.


Where to find Derek DeBellis: 

• LinkedIn: https://www.linkedin.com/in/derekdebellis/


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: DORA’s new Impact of Gen AI report

(03:24) The methodology used to put together the surveys DORA used for the report 

(06:44) An example of how a single word can throw off a question 

(07:59) How DORA measures flow 

(10:38) The two ways time was measured in the recent survey

(14:30) An overview of experiential surveying 

(16:14) Why DORA asks about time 

(19:50) Why Derek calls survey results ‘observational data’ 

(21:49) Interesting findings from the report 

(24:17) DORA’s definition of productivity 

(26:22) Why a 2.1% increase in individual productivity is significant 

(30:00) The report’s findings on decreased team delivery throughput and stability 

(32:40) Tips for measuring AI’s impact on productivity 

(38:20) Wrap up: understanding the data 


Referenced:

  • DORA | Impact of Generative AI in Software Development
  • The science behind DORA
  • Yale Professor Divulges Strategies for a Happy Life 
  • Incredible! Listening to ‘When I’m 64’ makes you forget your age
  • Slow Productivity: The Lost Art of Accomplishment without Burnout
  • DORA, SPACE, and DevEx: Which framework should you use?
  • SPACE framework, PRs per engineer, AI research
5 months ago
40 minutes

Setting targets for developer productivity metrics

In this episode, Abi Noda is joined by Laura Tacho, CTO at DX, engineering leadership coach, and creator of the Core 4 framework. They explore how engineering organizations can avoid common pitfalls when adopting metrics frameworks like SPACE, DORA, and Core 4.

Laura shares a practical guide to getting started with Core 4—beginning with controllable input metrics that teams can actually influence. The conversation touches on Goodhart’s Law, why focusing too much on output metrics can lead to data distortion, and how leaders can build a culture of continuous improvement rooted in meaningful measurement.


Where to find Laura Tacho: 

• LinkedIn: https://www.linkedin.com/in/lauratacho/

• Website: https://lauratacho.com/


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: Improving systems, not distorting data

(02:20) Goal setting with the new Core 4 framework

(08:01) A quick primer on Goodhart’s law

(10:02) Input vs. output metrics—and why targeting outputs is problematic

(13:38) A health analogy demonstrating input vs. output

(17:03) A look at how the key input metrics in Core 4 drive output metrics 

(24:08) How to counteract gamification 

(28:24) How to get developer buy-in

(30:48) The number of metrics to focus on 

(32:44) Helping leadership and teams connect the dots to how input goals drive output

(35:20) Demonstrating business impact 

(38:10) Best practices for goal setting


Referenced:

  • DX Core 4 Productivity Framework
  • Engineering Enablement Podcast
  • DORA’s software delivery metrics: the four keys
  • The SPACE of Developer Productivity: There’s more to it than you think
  • DevEx: What Actually Drives Productivity
  • DORA, SPACE, and DevEx: Which framework should you use?
  • Goodhart's law 
  • Nicole Forsgren - Microsoft | LinkedIn
  • Campbell's law 
  • Introducing Core 4: The best way to measure and improve your product velocity
  • DX Core 4: Framework overview, key design principles, and practical applications
  • DX Core 4: 2024 benchmarks - by Abi Noda
6 months ago
43 minutes

The AI adoption playbook: Lessons from Microsoft's internal strategy

Brian Houck from Microsoft returns to discuss effective strategies for driving AI adoption among software development teams. Brian shares his insights into why the immense hype around AI often serves as a barrier rather than a facilitator for adoption, citing skepticism and inflated expectations among developers. He highlights the most effective approaches, including leadership advocacy, structured training, and cultivating local champions within teams to demonstrate practical use cases. 

Brian emphasizes the importance of honest communication about AI's capabilities, avoiding over-promises, and ensuring that teams clearly understand what AI tools are best suited for. Additionally, he discusses common pitfalls, such as placing excessive pressure on individuals through leaderboards and unrealistic mandates, and stresses the importance of framing AI as an assistant rather than a replacement for developer skills. Finally, Brian explores the role of data and metrics in adoption efforts, offering practical advice on how to measure usage effectively and sustainably.

Where to find Brian Houck: 

• LinkedIn: https://www.linkedin.com/in/brianhouck/ 

• Website: https://www.microsoft.com/en-us/research/people/bhouck/ 


Where to find Abi Noda:

• LinkedIn: https://www.linkedin.com/in/abinoda 


In this episode, we cover:

(00:00) Intro: Why AI hype can hinder adoption among teams

(01:47) Key strategies companies use to successfully implement AI

(04:47) Understanding why adopting AI tools is uniquely challenging

(07:09) How clear and consistent leadership communication boosts AI adoption

(10:46) The value of team leaders ("local champions") demonstrating practical AI use

(14:26) Practical advice for identifying and empowering team champions

(16:31) Common mistakes companies make when encouraging AI adoption

(19:21) Simple technical reminders and nudges that encourage AI use

(20:24) Effective ways to track and measure AI usage through dashboards

(23:18) Working with team leaders and infrastructure teams to promote AI tools

(24:20) Understanding when to shift from adoption efforts to sustained use

(25:59) Insights into the real-world productivity impact of AI

(27:52) Discussing how AI affects long-term code maintenance

(29:02) Updates on ongoing research linking sleep quality to productivity


Referenced:

  • DX Core 4 Productivity Framework
  • Engineering Enablement Podcast
  • DORA Metrics
  • Dropbox Engineering Blog
  • Etsy Engineering Blog
  • Pfizer Digital Innovation
  • Brown Bag Sessions – A Guide
  • IDE Integration and AI Tools
  • Developer Productivity Dashboard Examples
6 months ago
29 minutes

Gene Kim on developer experience and AI engineering

In this episode, we’re joined by author and researcher Gene Kim for a wide-ranging conversation on the evolution of DevOps, developer experience, and the systems thinking behind organizational performance. Gene shares insights from his latest work on socio-technical systems, the role of developer platforms, and how AI is reshaping engineering teams. We also explore the coordination challenges facing modern organizations, the limits of tooling, and the deeper principles that unite DevOps, lean, and platform engineering.


Mentions and links:

  • Phoenix Project
  • Decoding the DNA of the Toyota Production System
  • Wiring the Winning Organization
  • ETLS Vegas
  • Find Gene on LinkedIn

Discussion points:

  • (0:00) Introduction
  • (2:12) The evolving landscape of developer experience
  • (10:34) Option Value theory, and how GenAI helps developers
  • (13:45) The aim of developer experience work
  • (19:59) The significance of layer three changes
  • (23:23) Framing developer experience
  • (32:12) GenAI’s part in “the death of the stubborn developer”
  • (36:05) GenAI’s implications for the workforce
  • (38:05) Where Gene’s work is heading
7 months ago
38 minutes

Getting Airbnb’s Platform team to drive more impact: Reorganizing, defining strategy, and metrics

In this episode, Airbnb Developer Productivity leader Anna Sulkina shares the story of how her team transformed itself and became more impactful within the organization. She starts by describing how the org previously operated: teams were delivering, but lacked clarity and alignment with one another. Then, the conversation digs into the key changes they made, including reorganizing the team, clarifying team roles, defining strategy, and improving their measurement systems.

Mentions and links:

  • Follow Anna on LinkedIn
  • For a deeper look into how Airbnb’s engineers and data scientists build a world of belonging, check out The Airbnb Tech Blog

Discussion points:

  • (0:00) Intro
  • (1:40) Skills that make a great developer productivity leader
  • (4:36) Challenges in how the team operated previously
  • (10:49) Changing the platform org’s focus and structure
  • (16:04) Clarifying roles for EMs, PMs, and tech leads
  • (20:22) How Airbnb defined its infrastructure org’s strategy
  • (28:23) Improvements they’ve seen to developer experience satisfaction
  • (32:13) The evolution of Airbnb’s developer experience survey
8 months ago
32 minutes

You have developer productivity metrics. Now what?

Many teams struggle to use developer productivity data effectively because they don’t know how to turn it into decisions about what to do next. We know that data is here to help us improve, but how do you know where to look? And even then, what do you actually do to put the wheels of change in motion? Listen to this conversation with Abi Noda and Laura Tacho (CEO and CTO at DX) about data-driven management and how to take a structured, analytical approach to using data for improvement.

Mentions and links:

  • Measuring developer productivity with the DX Core 4
  • Laura’s developer productivity metrics course

Discussion points:

  • (0:00) Intro
  • (2:07) The challenge we’re seeing
  • (6:53) Overview on using data
  • (8:58) Use cases for data: engineering organizations
  • (15:57) Use cases for data: engineering systems teams
  • (21:38) Two types of metrics: diagnostic and improvement
  • (38:09) Summary
8 months ago
39 minutes

Leveraging sentiment data, driving org-wide action, and executive engagement

In this episode, David Betts, leader of Twilio’s developer platform team, shares how Twilio leverages developer sentiment data to drive platform engineering initiatives, optimize Kubernetes adoption, and demonstrate ROI for leadership. David details Twilio’s journey from traditional metrics to sentiment-driven insights, the innovative tools his teams have built to streamline CI/CD workflows, and the strategies they use to align platform investments with organizational goals.

Mentions and links:

  • Find David on LinkedIn
  • Measuring developer productivity with the DX Core 4
  • Ask Your Developer by Jeff Lawson, former CEO of Twilio

Discussion points:

  • (0:00) Introduction
  • (0:49) Twilio's developer platform team
  • (2:03) Twilio's approach to release engineering and CD
  • (4:10) How they use sentiment data and telemetry metrics
  • (7:27) Comparing sentiment data and telemetry metrics
  • (10:25) How to take action on sentiment data
  • (13:16) What resonates with execs
  • (15:44) Proving DX value: sentiment, efficiency, and ROI
  • (19:15) Balancing quarterly and real-time developer feedback
9 months ago
24 minutes

Rethinking developer experience at T-Mobile: DevEx vs devprod, exec buy-in, and developer self-service

Chris Chandler is a Senior Member of the Technical Staff for Developer Productivity at T-Mobile. Chris has led several major initiatives to improve developer experience, including their internal developer portal, Starter Kits (a patented developer platform that predates Backstage), and Workforce Transformation Bootcamps for onboarding developers faster.

Mentions and links:

  • Follow Chris on LinkedIn
  • Measuring developer productivity with the DX Core 4
  • Listen to Decoder with Nilay Patel.

Discussion points:

  • (0:47) From developer experience to developer productivity
  • (7:03) Getting executive buy-in for developer productivity initiatives
  • (13:54) What Chris’s team is responsible for
  • (17:02) How they’ve built relationships with other teams
  • (20:57) How they built and got funding for Dev Console and Starter Kits
  • (27:23) Homegrown solution vs Backstage
9 months ago
31 minutes

DX Core 4: 2024 benchmarks

In this episode, Abi and Laura dive into the 2024 DX Core 4 benchmarks, sharing insights across data from 500+ companies. They discuss what these benchmarks mean for engineering leaders, how to interpret key metrics like the Developer Experience Index, and offer advice on how to best use benchmarking data in your organization.

Mentions and links:

  • DX Core 4 benchmarks
  • Measuring developer productivity with the DX Core 4
  • Developer experience index (DXI)
  • Will Larson’s article on the Core 4 and power of benchmarking data

Discussion points:

  • (0:42) What benchmarks are for
  • (3:44) Overview of the DX Core 4 benchmarks
  • (6:07) PR throughput data 
  • (11:05) Key insights related to startups and mobile teams 
  • (14:54) Change fail rate data 
  • (19:42) How to best use benchmarking data
10 months ago
28 minutes
