In this episode of Request // Response, I sit down with Henry and Anirudh, co-founders of Smithery, the easiest place to build, deploy, and discover MCP servers.
We dive into how Henry and Anirudh went from building custom chatbots for specific use cases to creating infrastructure that makes MCP development accessible to any developer.
We explore the fascinating world of Model Context Protocol—from its potential to revolutionize how AI agents interact with external systems to the practical challenges of building reliable, production-ready MCP servers that enterprises actually trust.
Henry and Anirudh share their vision for making MCP as ubiquitous as REST APIs, and we discuss how the ecosystem is evolving from experimental side projects to critical business infrastructure.
If you're working with AI agents, building integrations, or are curious about the next wave of API-like protocols, this episode offers a ground-floor perspective on technology that could reshape how we think about AI-to-system communication.
Quotes from the Podcast
Why MCP has gained adoption
"MCP has won developer mindshare because it's really striving to be an open protocol. There were previous attempts to create AI app stores like the GPT store, but those approaches were more of a walled garden where you need permission from the company to be part of the ecosystem. MCP is different - anybody can implement it and your agent can work with it without needing permission from Anthropic. They're creating their own governance and making this more like HTTP or an open standard that multiple companies are contributing to. It's made for the developer community."
The value of specialized AI services and agent orchestration
"I think there is something to be said about specialized AI services. Like I was talking to someone at DeepMind who was of the opinion that building services itself is stupid because the models will get so good that you could just say 'build me Salesforce' and the model will do everything for you. 
But even as a human who is generally intelligent, I can set up my own software stack on EC2 and build my own Postgres database - I just choose not to because it's a giant headache. Building software has never been the hassle, it's been maintaining that software. What you're seeing is more specialized AI services, like V0 for front-end development versus using Cursor which is general purpose. I think it's agents all the way down. What we're trying to build towards is agents getting really good at handing off tasks and isolating tasks - so you can tell your agent 'build me Salesforce' with a $30 budget, and that agent should distribute the budget across front-end, back-end, and infrastructure, making intelligent decisions about which specialized services to use for each component."
MCP reputation and discovery
"Something interesting about MCP itself is the client doesn't actually know what servers it's connected to - it only has access to all the tools at once. If you have internet search and another tool called 'best really good amazing internet search', what does it choose between? There's no reputation score assigned, and it doesn't even know those tools could be from two different servers. What you could do with an auto-router is basically say 'this server has a bad reputation score' or 'this tool call didn't work on this server' so we're going to ignore that server altogether. There could be another layer to help with reputations."
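The auto-router Henry describes is not part of MCP today; as a thought experiment, a reputation-aware tool router might look something like this sketch (all names, scores, and scoring rules are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class ToolServer:
    """A connected MCP server and the tools it exposes (illustrative only)."""
    name: str
    tools: list[str]
    reputation: float = 1.0  # 1.0 = trusted, 0.0 = ignore entirely
    failures: int = 0

def record_failure(server: ToolServer, penalty: float = 0.2) -> None:
    # A failed tool call lowers the server's score.
    server.failures += 1
    server.reputation = max(0.0, server.reputation - penalty)

def route_tools(servers: list[ToolServer], threshold: float = 0.5) -> dict[str, str]:
    """Expose each tool name once, from the highest-reputation server above threshold."""
    chosen: dict[str, tuple[str, float]] = {}
    for s in servers:
        if s.reputation < threshold:
            continue  # "ignore that server altogether"
        for tool in s.tools:
            if tool not in chosen or s.reputation > chosen[tool][1]:
                chosen[tool] = (s.name, s.reputation)
    return {tool: name for tool, (name, _) in chosen.items()}

a = ToolServer("search-a", ["internet_search"], reputation=0.9)
b = ToolServer("search-b", ["internet_search"], reputation=0.4)
# b falls below the threshold, so only search-a's tool is exposed to the client
assert route_tools([a, b]) == {"internet_search": "search-a"}
```

The point of the sketch is the layering: the client still sees a flat list of tools, but a router in between can collapse duplicates and drop servers whose calls keep failing.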
In this episode of Request // Response, I sit down with Scott Dietzen, CEO of Augment Code and former CEO of Pure Storage.
We dive into why context selection has become more critical than prompt engineering, and how his team solved the fundamental challenge of giving AI agents just the right amount of codebase context to be effective without being overwhelmed.
If you're working with large codebases, building developer tools, or wondering how AI coding assistance scales beyond startup-sized projects, this episode is a must-listen.
Show Notes
[00:00:53] – From machine learning PhD to startup CEO: Scott's journey to Augment Code
[00:02:05] – Anti-vibe coding: tackling enterprise-scale codebases with millions of lines
[00:02:45] – Enterprise customers and graduating from Cursor to Augment
[00:04:38] – The context problem: why LLMs struggle with massive codebases
[00:05:39] – From prompt engineering to context engineering as the new bottleneck
[00:06:35] – MCP adoption and helping ISVs package documentation for AI
[00:07:22] – The risk of getting lazy with API design in an AI world
[00:08:35] – Metaprogramming with agents and the importance of code review
[00:09:22] – AI-generated testing improving coverage and unlocking legacy codebases
[00:10:27] – The $2.5 trillion software failure problem and the promise of AI
[00:11:22] – Backlog zero: the holy grail for developers
[00:12:02] – Lessons from distributed systems: simplicity and reliability in commercial software
[00:12:48] – Engineers as tech leads for agents: human judgment still essential
[00:14:02] – Augment's differentiators: context engine, security, and IDE compatibility
In this episode of Request // Response, I sat down with Tom Hacohen, founder and CEO of Svix, the Webhooks-as-a-Service platform.
Tom shares the origin story of Svix—which started as a side project to escape the pain of webhook maintenance—and how it’s grown into essential infrastructure for API-first companies.
We dive into why great developer experience means solving actual problems and how AI will reshape API design—from brittle CRUD endpoints to high-level abstractions agents can reason over.
If you're building APIs, AI infrastructure, or just tired of rewriting webhook engines, this episode is a clear-eyed look at where platform engineering is heading—and what it means to ship something developers actually love.
Show Notes
[01:18] – The founding story of Svix and the underestimated complexity of webhooks
[04:24] – Prioritizing developer experience over implementation details 
[05:42] – Early solutions: replacing homegrown webhook infrastructures  
[07:15] – Build vs. buy: the case for buying webhook infrastructure  
[09:03] – The cost of maintaining v1 and the pain of v2 and beyond  
[10:25] – Why companies rewrite their webhook systems multiple times 
[12:32] – AI’s limited current impact on build vs. buy decisions 
[14:24] – The human side of solution engineering and customer acceleration
[16:29] – Tool calling with LLMs and the rise of abstracted APIs
[18:19] – AI favoring flexible, NLP-friendly endpoints over brittle CRUD
[20:07] – Toward workflow-aware API responses for agents
[22:01] – Meta MCPs and hierarchical API discovery
[23:08] – Svix’s role as AI infrastructure and aligning with AI-oriented standards
[25:56] – What great developer experience really means
Additional Quotes from the Podcast
"No one cares about webhooks. They don't care about your SDKs either. They don't care about any of that. What they care about is either not having to build it or the value that their customers get." 
"You can build anything... The real question is: is it worth your time and effort? Is it worth re-learning? Because honestly, writing the code isn’t the hard part—learning all the complexities and edge cases is."
"People often get developer experience wrong—they reduce it to 'good APIs and SDKs' or 'nice docs.' But it’s more than that. It’s about the entire experience—from onboarding to support to observability. Like UI design isn’t just colors, DX isn’t just code. Good DX is making the right thing easy and the wrong thing hard."
In this episode of Request // Response, I sit down with Anuj Jhunjhunwala, who leads product and design at Merge. We talk about how Merge is transforming integration pain into product velocity with their unified API approach. Anuj shares how his team helps developers avoid the complexity of maintaining dozens of APIs, and we explore the rising strategic importance of integrations in AI-driven products. We also dive into what great developer experience really looks like and how AI is reshaping expectations around API design and usability.
Show Notes
[02:30] – The pain of building and maintaining custom integrations
[03:45] – Merge as “Plaid for B2B software” and the magic moment for devs
[07:30] – AI making integrations table stakes; the three types of data
[08:30] – Why proprietary data is the key to differentiation in AI
[09:45] – AI and the future of APIs: personalization and intuitive design
[11:30] – DX vs AX: API design for devs and for AI
[12:00] – How AI product patterns are changing API requirements
[13:00] – Richer queries, delta endpoints, and evolving API design
[14:00] – The rising importance of fine-grained permissions
[15:00] – HubSpot, search endpoints, and future-facing API choices
[16:00] – Delta endpoints explained and why they’re valuable for LLMs
[17:00] – Principles of great developer experience: predictability and frictionlessness
[19:00] – Where to learn more about Merge
Additional Quotes from the Podcast
API Integrations are Table Stakes for AI
"The biggest trend is that API integrations are just becoming table stakes. We talk to a lot of AI companies. We get a lot of interest from AI companies, and there's really, like, three types of data that are helpful when you're building an LLM, right? There's the public data that exists out there, that's scraped from the internet and is publicly accessible. There's synthetic data, which is, you know, produced by an algorithm or by an LLM and can help you validate and test out edge cases. And then there's proprietary data, right? That's the data that belongs to your customer. And the first two, the public and the synthetic data, are accessible to basically anybody. The proprietary data is data that's specific to you and makes you different, and it's the thing that no one else can get access to, because it literally just belongs to you or your customer. That, I think, is the magic of integrations: they pull in that third bucket, and they can make your product different. So you're leaving money on the table if you don't have an integration strategy, because you're not thinking about that third bucket, which actually makes you different."
In this episode of Request // Response, I sit down with Charlie Cheever, CEO of Expo and co-founder of Quora, to unpack the evolution of mobile app development and how developer experience is adapting in an AI-assisted world.
Charlie shares stories from scaling Quora's mobile presence, his frustrations with App Store complexity, and how Expo is aiming to make app development as smooth as deploying a website.
From React Native to GraphQL to vibe coding, Charlie breaks down the current gaps in frontend-backend integration and offers a wishlist for what a truly great developer experience could look like—particularly in a world where more non-developers are writing production code.
Whether you're building mobile apps, exploring streaming APIs, or thinking deeply about DX tooling, this is a must-listen.
Show Notes
[00:00:00] Introduction
[00:01:00] The Challenge of Mobile vs. Web Development
[00:02:00] Quora’s Mobile Journey
[00:03:00] Building Expo and React Native’s Rise
[00:04:00] Making React Native Work for Everyone
[00:05:00] Is GraphQL Still the Right Abstraction?
[00:06:00] Balancing State, Performance, and Cost
[00:07:00] Streaming APIs, SSE, and DX Gaps
[00:08:00] Wishlist for the Future of API Dev
[00:09:00] AI, Prompt-to-App, and Developer Onboarding
[00:10:00] Vibe Coding and System Architecture Risks
[00:11:00] Scaffolding and Guardrails for AI-Driven Dev
[00:12:00] A Hybrid Dev World
[00:13:00] Expo’s Strategy for the AI Era
[00:14:00] The Magic of Going Live
[00:15:00] Great Developer Experiences: Inspirations
[00:16:00] Undo and Developer Safety
Additional Quotes from the Podcast
How AI is Changing the Way Users Build Apps
"A huge number of the people sort of signing up for Expo accounts and stuff now are using AI prompt-to-app tools. So we're having to build a whole new set of products, or at least change some of our products, 'cause some of the terminology that we use is, you know, tailored to developers, and, like, developers know what this means or that means. And a lot of times these people are like, 'I am using Expo because I saw the name come up a bunch of times as I was, you know, prompting. But I don't really know what it is, and I don't know why I'm here. And what are you gonna help me do, and why do I need to?'
It was always like, "Hey, like there's always people who just have ideas. How do we make those come to life as quickly as possible?" And so like for a long time it just sort of felt like, well there's this whole hairy problem of doing the writing, the code of software development, you know, writing React code, writing backend code or whatever.
And like all of a sudden that's getting way, way easier and going way, way faster."
The Magic Moment That Defines Great DX
"I think that the biggest thing that you can give in sort of a DevEx experience is if you take somebody who, like, thinks they can't quite do something, thinks something's gonna be hard, and then all of a sudden it just ends up easier than they thought. And they're just sort of on a smooth path to something happening.
I remember watching videos of people competing, like, in the sort of mid-2000s, and I never even used Rails, but I just saw these videos where it's 'make a blog in 12 minutes with Ruby on Rails.' And then somebody else had put out a video that was like, make a blog in, you know, eight minutes, and then seven minutes, and then six minutes. Just sort of, like, competing to get that as smooth and streamlined and as easy as possible was, I think, pretty amazing."
I sat down with Sinan Eren, co-founder and CEO of Opnova. Opnova is building AI-powered automation for enterprises—specifically targeting companies that rely on legacy systems and lack modern APIs.
We talked about how automation can thrive even in systems that don't have APIs and discussed the challenges and opportunities of automating workflows in regulated, legacy environments. We explored how modern tools like RPA, LLMs, and new standards like MCP are changing the game.
From the rise of browser-based automation to the long-term vision of API-first ecosystems, we touched on the transitional tech that was bridging the gaps of today—and what the future might look like when APIs are truly everywhere.
Be the first to hear about new episodes:  
https://www.speakeasy.com/post/request-response-sinan-eren 
Show Notes
[00:00:00] Introduction
- Overview of discussion topics: automation in legacy systems, RPA, LLMs, MCP, and the evolution toward API-first ecosystems.
[00:00:42] What Opnova Is Solving
- Sinan’s background in cybersecurity and founding Opnova.
- Automating rework: repetitive, manual tasks in regulated industries like healthcare and finance.
[00:02:00] Bridging the API Gap
- Most enterprise systems lack modern APIs.
- How Opnova helps these companies automate despite missing APIs.
- The challenges of working beyond the Silicon Valley tech bubble.
[00:03:00] Leveraging RPA and LLMs for Automation
- How Opnova uses robotic process automation and large language models.
- Modeling user behavior to automate UI actions via screenshots and intent recognition.
[00:04:58] Standards Like MCP and Tool Calling
- MCP (Model Context Protocol) and its potential to become a new standard.
- Bridging the gap for underserved industries lacking API exposure.
- Sinan’s take on early adoption and long-tail enterprise needs.
[00:06:00] APIs as a Deal-Maker
- APIs enabling last-minute customer wins in previous startups.
- Command-line over UI: why customers sometimes prefer APIs to interfaces.
- Rapid feature delivery via API access.
[00:07:54] The API Tax Debate
- Comparing API access to the “SSO tax” of the past.
- Concerns about hiding APIs behind enterprise pricing tiers.
- Why APIs should be a baseline offering.
[00:09:00] APIs as the New Sitemaps
- API discoverability as a critical factor in tool ecosystems.
- Drawing parallels between SEO-era sitemaps and today’s OpenAPI specs.
- The risk of exclusion from LLM-powered interfaces.
[00:11:58] Browser Automation as a Transitional Layer
- Why browser-based agents are a temporary solution.
- The long-term goal: native APIs everywhere.
- Transitional tooling as a necessary bridge.
[00:13:58] Tool Discovery and Registries
- The need for robust API registries to support tool discovery.
- From proof-of-concept to production: bridging the enterprise automation gap.
- The challenge of finding the right tools at the right time.
[00:15:28] Closing Thoughts and Opnova's Vision
- Browser orchestration vs. API-driven workflows.
- Why APIs are the true endgame.
More Quotes From The Discussion
The API Tax Revolution
"We have this curse in our space, it's changing now—it's called SSO tax, single sign-on tax. Why? You will have like the cheap tier, free tier, and then you'll have the enterprise tier, if you want single sign-on, if you want to tie your Okta, your EntraID into the SaaS, right? You need to buy the enterprise tier so that you can have the benefit of SSO. But I'm noticing APIs are now put in place of SSO. So SSO tax—now I worry that it's becoming API tax."
APIs as the New Sitemaps for the Agentic Era
"I feel like not having API actually is similar now. It's going to exclude you from a rapidly emerging ecosystem of tools and tool use where the discovery problem is now distributed. And so, like, what sitemaps did for websites was it said, 'Hey, I'm a website. I'm going to broadcast. I do this thing.' And then therefore any scraper ecosystem could pick it up. I think the same thing's happening with APIs."
Browser Automation: The Necessary Bridge to an API-First Future
"All this browser use models, right? Like computer use, browser use models. They are an intermediary kind of a solution, a transitional solution, right? Because we're waiting for the APIs to be exposed. So nobody really genuinely loves the idea of an agent orchestrating a Chrome browser. Really, it's just a temporary point in time that we have to do it because like you said, like, sitemaps had to be invented for better SEO."
Referenced
Production by Shapeshift | https://shapeshift.so
For inquiries about guesting on Request // Response, email samantha.wen@speakeasy.com.
Ken Rose is the CTO and co-founder of OpsLevel.
Ken shares the founding journey of OpsLevel and lessons from his time at PagerDuty and Shopify.
We debate GraphQL vs REST, API metrics that matter, and how LLMs and agentic workflows could reshape developer productivity.
Listen On
Apple Podcasts | Spotify
Subscribe to Request // Response
If you enjoyed this podcast, you can be the first to hear about new episodes by signing up at https://speakeasy.com/post/request-response-ken-rose
Show Notes
[00:01:08] What OpsLevel does and why internal developer portals matter
[00:01:51] Ken’s journey from PagerDuty to Shopify and to starting OpsLevel
[00:04:03] Developer empathy, platform engineering, and founding OpsLevel
[00:05:02] OpsLevel’s API-first approach and extensibility focus
[00:06:26] Using GraphQL at scale—Shopify's journey and lessons
[00:08:30] Managing GraphQL performance and versioning
[00:10:12] GraphQL vs REST – developer expectations and real-world tradeoffs
[00:11:27] Key metrics to track in API platforms
[00:12:50] Why not every feature should have an API—and how to decide
[00:13:48] Advice for API teams launching today
[00:14:44] API consistency and avoiding Conway’s Law
[00:15:50] Agentic APIs, LLMs, and the challenge of semantic understanding
[00:18:48] Why LLM-driven API interaction isn’t quite magic—yet
[00:20:00] Internal APIs as the next frontier for LLM productivity
[00:21:43] 5x–10x improvements in DX via LLMs + internal API visibility
[00:23:00] API consolidation, discoverability, and LLM-powered developer tools
[00:23:54] What great developer experience (DevEx) actually looks like
More Quotes From The Discussion
Challenges of Running GraphQL
"I can tell you, as OpsLevel, running a GraphQL API, you know, all the usual things: making sure that, like, you don't have n+1 queries, ensuring you don't get per-resource rate limiting like you do with REST... You have to kind of be more intentional about things like query complexity. Those are challenges you end up having to solve.
Versioning is another big one. We are far enough along in our journey; we haven't done a major version bump yet. I know a few years ago, after I left Shopify, they switched their GraphQL schema to be versioned, and now they have much more program management around it: every six months they have a new major version bump.
They require clients to migrate to the latest version, but that's effort, and those are calories that you have to spend on that kind of API program management."
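The query-complexity budgeting Ken mentions is usually a static cost estimate computed before a query runs. A minimal sketch over a heavily simplified, hypothetical query representation, where nested list fields multiply cost by their page size:

```python
# Hypothetical per-field base costs; unknown fields default to 1.
FIELD_COST = {"services": 1, "deploys": 1, "checks": 1}

def complexity(selection: dict, multiplier: int = 1) -> int:
    """Estimate query cost.

    `selection` is a simplified query AST: {field: (page_size, child_selection_or_None)}.
    Each level of nesting multiplies child costs by the parent's page size,
    which is how one innocent-looking query can fan out into thousands of reads.
    """
    total = 0
    for name, (page_size, children) in selection.items():
        total += FIELD_COST.get(name, 1) * multiplier
        if children:
            total += complexity(children, multiplier * page_size)
    return total

# 50 services, each with 10 deploys: 1 (services) + 50 * 1 (deploys per service) = 51
query = {"services": (50, {"deploys": (10, None)})}
assert complexity(query) == 51
```

A server would reject any query whose estimate exceeds a budget, instead of (or alongside) REST-style per-resource rate limits.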
DevEx of REST vs GraphQL
"We have a customer that has been with us for years, but our champion there hates GraphQL. Like, he really hates it. And the first time he told me that, I actually thought he was, you know, joking or being sarcastic.
No. He was legitimately serious, because from his perspective, he has a certain workflow he's trying to accomplish, like: "Guys, I just need a list of services. Why can't I just, like, make a REST call and fetch a list of services, and that's it? Why do I have to do all this stuff and build up a query and pass in this particular print?"
And I get that frustration for developers: you know, REST is sort of easier to start with. It's easier just to create a curl request and be done with it, right? It's easier to pipe the output of a REST call, which is generally just nice JSON, into whatever you want, versus "No, you actually have to define the schema, and things change."
I think GraphQL solves a certain set of problems, again, around over-fetching and having a variety of different clients. But there is this attractive part of REST, which is "I just made a single API call to a single endpoint to get the one thing I needed, and that was it." And if your use case is that, then the complexity of GraphQL can really be overwhelming."
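The contrast Ken describes comes down to request shape. A minimal side-by-side illustration, with hypothetical endpoint and field names:

```python
# REST: one GET to a predictable path; the whole response is the payload.
# Easy to curl, easy to pipe.
rest_request = {
    "method": "GET",
    "path": "/api/v1/services",
    "params": {"page_size": 50},
}

# GraphQL: one POST to a single endpoint; the caller must author a query
# document naming every field it wants back, plus variables.
graphql_request = {
    "method": "POST",
    "path": "/graphql",
    "body": {
        "query": """
        query ListServices($first: Int!) {
          services(first: $first) {
            nodes { id name owner }
          }
        }
        """,
        "variables": {"first": 50},
    },
}

# Same question ("give me my services"), very different amounts of ceremony.
assert rest_request["method"] == "GET"
assert graphql_request["path"] == "/graphql"
```

The GraphQL shape pays off when many clients need different slices of the same data; the REST shape wins when the use case is exactly one call for one list.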
API Consistency and Conway's Law 
"I do think consistency is an important thing, especially when you're dealing with a large company that has disparate parts working on different aspects of the API. You don't want Conway's Law to appear in your API—you know, where you can see the organizational structure reflected in how the API is shipped.
So making sure that an API platform team or someone is providing guidance on how your organization thinks about the shape of APIs is crucial. Here's how you should structure requests. Here's how you should name parameters. Here's what response formats should look like. Here's consistent context for returns and responses.
Here's how to implement pagination. It's all about ensuring consistency because the most frustrating thing as an API client is when you hit one endpoint and it works a certain way, then you hit another endpoint and wonder, 'Why is this structured differently? Why does this feel different?' That can be tremendously difficult. So having some guardrails or mechanisms to ensure consistency across an API surface area is really valuable."
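One way to read Ken's guidance is as a shared response envelope that every list endpoint reuses, so pagination and structure feel identical across the API surface. A minimal sketch, with a hypothetical cursor-based shape:

```python
from typing import Any

def envelope(items: list[dict[str, Any]], cursor_field: str = "id",
             page_size: int = 2) -> dict[str, Any]:
    """Wrap any list endpoint's results in one shared shape, so naming,
    pagination, and structure are consistent across the whole API."""
    page = items[:page_size]
    has_more = len(items) > page_size
    return {
        "data": page,
        "pagination": {
            "next_cursor": page[-1][cursor_field] if has_more and page else None,
            "has_more": has_more,
        },
    }

services = [{"id": "svc-1"}, {"id": "svc-2"}, {"id": "svc-3"}]
resp = envelope(services)
assert resp["pagination"] == {"next_cursor": "svc-2", "has_more": True}
```

A platform team publishing one helper (or schema) like this is a lightweight guardrail against the Conway's Law drift Ken warns about: every team's endpoint paginates the same way because they all share the same envelope.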
Referenced
- OpsLevel (https://opslevel.com/) 
- Shopify Storefront API (https://shopify.dev/docs/api/storefront)
- GraphQL (https://graphql.org/)
- REST (https://restfulapi.net/)
- Conway's Law (https://en.wikipedia.org/wiki/Conway%27s_law)
- RED Metrics (https://grafana.com/blog/2018/08/02/the-red-method-how-to-instrument-your-services/)
- Anthropic MCP (https://www.anthropic.com/news/model-context-protocol)
- Agents.js (https://huggingface.co/blog/agents-js)
Production by Shapeshift | https://shapeshift.so
For inquiries about guesting on Request // Response, email samantha.wen@speakeasy.com.
Robert Ross (@BobbyTables) is the CEO of FireHydrant.
We discuss the journey of building FireHydrant, the evolution of API design, and the impact of gRPC and REST on developer experience.
We also talked about the role of LLMs in API design, the shift towards data consumption trends in enterprises, and how great developer experience is measured by a simple litmus test.
Listen On
Apple Podcasts | Spotify
Subscribe to Request // Response
If you enjoyed this podcast, you can be the first to hear about new episodes by signing up at https://speakeasy.com/post/request-response-robert-ross
Show Notes
[00:00:00] Introduction
- Overview of discussion topics: building FireHydrant, gRPC, API design trends, and LLMs in APIs.
[00:00:42] The Story Behind FireHydrant
- Robert’s background in on-call engineering and why he built FireHydrant.
- The problem of incidents and automation gaps in on-call engineering.
[00:02:16] APIs at FireHydrant
- FireHydrant’s API-first architecture from day one.
- Moving away from Rails controllers to a JavaScript frontend with API calls.
- Today, over 350 public API endpoints power FireHydrant’s frontend and customer integrations.
[00:03:50] Why gRPC?
- Initial adoption of gRPC for contract-based APIs.
- Evolution to a REST-based JSON API but with lessons from protocol buffers.
- Would Robert choose gRPC today? His thoughts on API design best practices.
[00:06:40] Design-First API Development
- The advantages of Protocol Buffers over OpenAPI for API design.
- How API-first development improves collaboration and review.
- Challenges with OpenAPI’s verbosity vs. Protobuf’s simplicity.
[00:08:23] Enterprise API Consumption Trends
- Shift from data-push models to API-first data pulls.
- Companies scraping the API every 5 minutes vs. traditional data lake ingestion.
- FireHydrant’s most popular API endpoint: Get Incidents API.
[00:10:11] Evolving Data Exposure in APIs
- Considering webhooks and real-time data streams for API consumers.
- Internal FireHydrant Pub/Sub architecture.
- Future vision: "Firehose for FireHydrant" (real-time streaming API).
[00:12:02] Measuring API Success (KPIs & Metrics)
- Time to first byte (TTFB) as a key metric.
- API reliability: retry rates, latency tracking, and avoiding thundering herd issues.
- The challenge of maintaining six years of stable API structure.
[00:17:12] API Ergonomics & Developer Experience
- Why Jira’s API is one of the worst to integrate with.
- The importance of API usability for long-term adoption.
- Companies with great API design (e.g., Linear).
[00:18:14] LLMs and API Design
- How LLMs help in API design validation.
- Are LLMs changing API consumption patterns?
- Rethinking API naming conventions for AI agents.
[00:22:02] Future of API Ergonomics for AI & Agents
- Will there be separate APIs for humans vs. AI agents?
- How agentic systems influence API structures.
- The need for context-rich naming conventions in APIs.
[00:24:02] What Defines Great Developer Experience?
- The true test of developer experience: Can you be productive on an airplane?
- Importance of great error messages and intuitive API design.
- The shift towards zero-docs, self-explanatory APIs.
More Quotes From The Discussion
Enterprise Data Teams are Now More Receptive to APIs
"I think more and more people are willing to pull data from your API. There used to be kind of a requirement many years ago of, like, 'you need to push data to me, into my data lake.' And the problem with that is, sure, we can push data all day long, but it may not be in the format you want.
It may not be frequent enough. Like, you might have stale data if we're doing a nightly push to our customers' data lakes.
So a lot of our customers are scraping our API every five minutes, in some cases, to get their incident data. And what our customers tell us is that they have data analyst teams, data analytics engineers, whose job is to write scripts that pull the data into their internal data lake from all of their vendors. And I don't know if it's a new trend or the new normal, I'm not quite sure, but it's definitely something I've noticed: enterprises are spending more time building the teams to get the data in the exact format, at the exact time that they want it, as opposed to having the requirement of 'push the data to me.'"
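The polling pattern Robert describes is gentler on both sides when the API supports an incremental filter, so each pull fetches only what changed. A sketch of a watermark-based pull, assuming a hypothetical `updated_after` parameter on the incidents endpoint:

```python
def incremental_pull(fetch, last_seen: str) -> tuple[list[dict], str]:
    """Pull only records updated since the stored watermark, then advance it.
    `fetch` stands in for a real API client call (hypothetical signature)."""
    records = fetch(updated_after=last_seen)
    if records:
        # ISO-8601 UTC timestamps compare correctly as strings.
        last_seen = max(r["updated_at"] for r in records)
    return records, last_seen

# Fake client for illustration: three incidents, two newer than the watermark.
def fake_fetch(updated_after: str) -> list[dict]:
    incidents = [
        {"id": 1, "updated_at": "2024-01-01T00:00:00Z"},
        {"id": 2, "updated_at": "2024-01-02T00:00:00Z"},
        {"id": 3, "updated_at": "2024-01-03T00:00:00Z"},
    ]
    return [i for i in incidents if i["updated_at"] > updated_after]

records, watermark = incremental_pull(fake_fetch, "2024-01-01T00:00:00Z")
# records contains ids 2 and 3; watermark advances to "2024-01-03T00:00:00Z"
```

Run on a five-minute schedule, this is essentially the delta-endpoint idea: every vendor supporting a filter like this turns "scrape everything" into a cheap incremental sync.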
On LLMs and API Design 
“I don't think that LLMs are going to fundamentally change how we interact with APIs. At least not in the short term.
I think that they'll help us build better clients and I think they'll help us build better APIs, but the moment we start using them, I mean, that's just going to be code calling code. I just don't think that LLMs are going to get too integrated at that point.”
Referenced
- FireHydrant the end-to-end incident management platform (https://firehydrant.com) 
- Linear API docs (https://linear.app/docs/api) 
- Materialize's RBAC system (https://materialize.com/docs/manage/access-control/rbac)
- Speakeasy's Terraform generator (https://docs.speakeasyapi.dev/docs/using-speakeasy/client-sdks-generation/terraform)
Production by Shapeshift | https://shapeshift.so
For inquiries about guesting on Request // Response, email samantha.wen@speakeasy.com.
On the first episode of Request // Response, I speak with John Kodumal, co-founder and former CTO of LaunchDarkly.
We discussed how LaunchDarkly used feature flags to separate deployment from release, offering fine-grained control for safer rollouts and experimentation.
LaunchDarkly was an early adopter of server-sent events, pioneering streaming APIs well before LLMs made them commonplace.
Tune in to hear our thoughts on the changing demands and expectations for APIs and developer experience in light of the agentic era of APIs.
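Server-sent events, the streaming technology discussed in the episode, are just a line-oriented text protocol over a long-lived HTTP response. A minimal parser sketch, handling only the `event:` and `data:` fields, with a made-up flag-update payload:

```python
def parse_sse(stream: str) -> list[dict]:
    """Parse a raw text/event-stream body into events.
    Minimal sketch: ignores `id:`, `retry:`, comments, and multi-line data joining."""
    events, current = [], {}
    for line in stream.splitlines():
        if not line:  # a blank line dispatches the buffered event
            if current:
                events.append(current)
                current = {}
        elif line.startswith("event:"):
            current["event"] = line[len("event:"):].strip()
        elif line.startswith("data:"):
            current["data"] = current.get("data", "") + line[len("data:"):].strip()
    if current:
        events.append(current)
    return events

# Hypothetical payload in the spirit of a feature-flag update stream.
raw = 'event: put\ndata: {"flag": "new-ui", "on": true}\n\n'
events = parse_sse(raw)
assert events == [{"event": "put", "data": '{"flag": "new-ui", "on": true}'}]
```

The appeal for feature flags (and later for LLM token streams) is the same: one cheap, proxy-friendly HTTP connection that the server can push updates down at any time.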
Show Notes:
[00:00:00] Introduction
[00:00:28] What is LaunchDarkly?
[00:02:18] Pre-LaunchDarkly Era
[00:05:26] LaunchDarkly’s API and SDK Focus
[00:10:37] Technical Deep Dive: SSE and Feature Flags
[00:14:23] The Evolution of API Expectations
[00:19:26] Traversability and Developer Experience
[00:22:40] What Defines Great Developer Experience?
[00:26:23] Closing Thoughts
Subscribe to Request // Response
If you enjoyed this podcast, you can be the first to hear about new episodes by signing up at 
https://speakeasy.com/post/request-response-john-kodumal
Production by Shapeshift | https://shapeshift.so
For inquiries about guesting on Request // Response, email samantha.wen@speakeasy.com.