OpenAI's Dev Day dropped some major announcements this month, but is AgentKit really revolutionary or just another "me too!" moment? The AI, Actually crew shares their reactions to the latest OpenAI releases, and digs into how to successfully implement AI agents in the real world.
In this episode, Jim Johnson steps in as host alongside Mike Finley, with special guests Nicole Kosky (who leads AnswerRocket's AI Business Transformation Practice) and Reilly Carroll (Senior AI Solutions Consultant). Together, they tackle the practical, nitty-gritty challenges of bringing agents to life for enterprise clients—from gathering requirements that users don't know they have, to managing the surprising differences between what stakeholders say they need versus what they actually ask for once they're hands-on with an agent.
Topics covered:
Follow the Gang:
Jim Johnson, AnswerRocket, Managing Partner - https://www.linkedin.com/in/jim-johnson-bb82451/
Mike Finley, AnswerRocket, CTO - https://www.linkedin.com/in/mikefinley/
Reilly Carroll, AnswerRocket, Senior AI Solutions Consultant - https://www.linkedin.com/in/reilly-carroll/
Nicole Kosky, AnswerRocket, Senior Director of Services - https://www.linkedin.com/in/nicole-kosky-5b9a3b6/
Chapters:
00:00 Introduction and Guest Introductions
01:30 OpenAI Dev Day Announcements
05:25 Understanding AI Agents
09:05 Practical Implementation of AI Agents
11:11 Challenges in Client Engagement
14:17 Agent Development and User Experience
16:55 The Challenge of Capturing Agent Requirements
19:52 The Importance of Agentic Operations
22:40 Navigating the Future of AI Agents
29:09 Final Thoughts and Advice
Keywords: AI agents, OpenAI Dev Day, agent development, enterprise AI implementation, agentic operations, software development lifecycle, AI business transformation, agent context, LLM applications, practical AI strategy
Tired of AI agents that forget context mid-conversation or drift subtly off course in production? You're not alone. In this episode, the AI, Actually crew unpacks six critical engineering principles for building reliable AI agents—principles that separate proof-of-concepts from production-ready systems.
Pete, Mike, Andy, and Stew break down insights from AI expert Nate B. Jones, translating technical concepts into business-focused guidance. They explore why AI memory isn't just about storage, how to bound uncertainty without killing creativity, and why monitoring AI systems requires a completely different approach than traditional software.
This episode covers:
This episode of AI, Actually centers on a video by @nate.b.jones about the six principles of AI agents. The full video can be watched here: I've Built Over 100 AI Agents: Only 1% of Builders Know These 6 Principles
Follow the Gang:
Chapters:
00:00 Introduction to AI Agents and Engineering Principles
01:34 Introducing Nate B. Jones' AI Engineering Principles
03:03 Stateful Intelligence
10:16 Bounded Uncertainty
19:55 Intelligent Failure Detection
20:51 Evaluating LLM Responses
22:16 Monitoring Quality and Performance
23:53 Active Maintenance of LLM Systems
26:18 Understanding Subtle Failures
26:55 Capability-Based Routing
30:22 Aligning Models with Business Processes
33:41 Nuanced Health State Monitoring
37:36 Continuous Input Validation
41:36 Closing Thoughts
Keywords: AI agents, agentic AI, AI engineering, AI memory, stateful intelligence, AI monitoring, capability-based routing, AI evaluation, production AI, enterprise AI, AI agent development, LLM engineering, AI testing, AI agent failures, AI system monitoring
Sequoia says AI is a $10 trillion opportunity. But how do you actually capture it? In this episode, the AI, Actually crew tackles the gap between AI's promise and its practical deployment in the enterprise. From bold predictions about agent automation to Palantir's forward deployed engineer model, we explore what it really takes to move beyond ChatGPT licenses to actual business transformation.
The discussion gets real about the current state of AI agents—including why they'd rather fake your data than admit they're stuck—and examines the critical role of specialization in making AI work for specific business processes. Whether you're trying to onboard suppliers across multiple systems or wondering why your coding agent just hardcoded the test answers, this episode provides the unfiltered truth about where enterprise AI stands today.
Key topics covered:
Follow the Gang:
Chapters:
00:00 Introduction
01:14 Sequoia's $10 Trillion AI Thesis
08:45 The Specialization Problem: From General Purpose to Actually Useful
16:56 Forward Deployed Engineers: Digging into the Palantir Model
22:47 The Intelligence Revolution vs. The Information Revolution
24:02 Prototyping and User Engagement
26:07 The Role of Business Analysts in AI Deployment
29:24 The Year of the Agent Reality Check: Replit and Autonomous Coding
35:09 Mock Data and Unit Test Cheats: Watch Out for These AI Coding Traps
#AIActually #EnterpriseAIAdoption #ForwardDeployedEngineers #PalantirFDEModel #AISpecialization #BusinessProcessAutomation #ReplitAgent3 #AIScaffolding #EnterpriseIntegration #AIROI
The AI Actually crew tackles the pressing concerns keeping enterprise leaders up at night: shadow AI infiltrating organizations, the crucial distinction between machine learning and LLMs, and why context engineering matters more than prompt engineering. Jim Johnson takes the moderator chair, joined by regular Mike Finley and special guests Andy Sweet (Advanced Models Practice Lead) and Shanti Greene (Head of Data Science and AI Innovation).
Topics covered:
Follow the Gang:
Jim Johnson, AnswerRocket, Managing Partner - https://www.linkedin.com/in/jim-johnson-bb82451/
Mike Finley, AnswerRocket, CTO - https://www.linkedin.com/in/mikefinley/
Andy Sweet, AnswerRocket, VP Enterprise AI Solutions - https://www.linkedin.com/in/andrewdsweet/
Shanti Greene, AnswerRocket, Head of Data Science and AI Innovation - https://www.linkedin.com/in/shantigreene/
Chapters:
00:00 Introduction to AI Actually and the Team
01:59 Kimi Model Release: Should We Care?
06:59 Shadow AI Definition and Enterprise Impact
12:08 Leveraging Machine Learning and LLMs Together
26:00 Prompt Engineering vs. Context Engineering
28:13 Using LLMs to Write Prompts
29:56 Memory and Agent Ops
33:05 AI Literacy and Context Engineering
34:27 Stability and Model Changes
37:28 The Harvard MBA Analogy for AI Agents
43:01 Local Open Source AI Models: Pros & Cons
49:23 The Real Costs of Enterprise AI
Keywords: shadow AI, machine learning vs LLMs, context engineering, prompt engineering, enterprise AI, local models, agent operations, AI costs, Kimi model, AI governance
The MIT study claiming 95% of AI pilots fail has everyone talking—but what's really behind these failures? In this episode, the AI, Actually crew tackles the hard truths about why enterprises struggle with AI implementations and what separates the toys from the tools. The conversation kicks off with AnswerRocket CEO Alon Goren sharing his journey from pre-PC era computing to building AI solutions that actually work.
The gang dives deep into the current state of AI agents in the workforce. Are they the new interns who need constant training, or are they about to trigger the largest labor disruption of our lifetime? From Klarna's famous (and failed) attempt to replace support staff to the reality of AI's impact on software development, the crew debates whether this time really is different.
Key topics covered:
• Why business problems, not technology, should drive AI adoption
• The real reason companies are freezing hiring (hint: it's not what you think)
• Meta's NBA-level salaries for AI talent and what it means for everyone else
• The "I wish" framework for identifying viable AI use cases • Why the best AI implementations start narrow and expand gradually
Chapters:
00:00 Introduction: Meet Alon Goren
06:19 The AI Pilot Dilemma
11:57 Navigating AI Use Cases
21:08 AI Agents: The Largest Disruption of Our Lifetime?
32:05 How to Build Your AI Career
35:43 The AI Talent War: Recruitment and Retention Strategies
41:40 Assessing AI Depth of Expertise in Interviews
Is your enterprise AI pilot part of the 95% that's failing? MIT's latest research just confirmed what many suspected: almost all enterprise AI initiatives are floundering. In this episode, we dig into why companies are hemorrhaging money on AI that never delivers real value, and what the successful 5% are doing differently.
Forget vendor promises and get ready for some uncomfortable truths about why your text-to-SQL dreams might be nightmares waiting to happen.
In this episode, we cover:
Follow the Gang:
Chapters:
00:00 MIT Study: 95% of AI Pilots are Failing
02:07 The 5% That Succeed: Cost Reduction vs. Revenue Lift
03:03 How Internal Bureaucracy Killed a Working AI Pilot
03:55 The Jello Problem: Why LLMs Don't Fit Traditional IT
07:40 Personal Productivity vs. Enterprise Scale
11:23 The Complexity of AI Integration
14:05 Treat AI Like A New Employee
16:16 The Stochastic Nature of AI Models
19:48 Risks of AI in SQL Generation
27:22 Making AI Deterministic
29:42 Understanding AI Hallucinations
31:11 What is an Agent, Really?
33:48 The Spectrum of Agent Complexity
38:42 Agents in the Wild: Suno, Lovable, and Deep Research
42:27 Computer Use and the Future of RPA
46:47 MCP Servers and Tool Use
Keywords: enterprise AI failure, MIT study, AI pilots, LLM implementation, AI agents, stochastic models, SQL generation, computer use models, AI hallucination, enterprise transformation
Is your company's AI strategy stuck in the sandbox? You're not alone. Despite the endless hype, many large companies find their AI projects stalled at the experimental stage. In this episode, we get real about why organizations are struggling and what you can actually do about it.
Forget the hype and join us for a candid discussion on the real-world challenges and opportunities of enterprise AI. We cut through the noise to give you a practical playbook for moving forward.
In this episode, we tackle:
Follow the Gang:
Pete Reilly, AnswerRocket, COO
Mike Finley, AnswerRocket, CTO
Stew Chisam, StellarIQ, Operating Partner
Jim Johnson, AnswerRocket, Managing Partner
Chapters:
00:00 Introduction to Vibe Coding
11:57 The Future of Coding with AI
20:36 Where Enterprises Struggle with AI
23:27 Navigating AI Security Concerns
26:07 Demonstrating The Value of AI
33:07 Getting AI Initiatives Back on Track
40:32 First Impressions on GPT-5
46:17 Comparing User Experiences with AI Models
Keywords: vibe coding, AI in enterprise, GPT-5, coding tools, AI challenges, productivity, software development, technology trends, coding practices, enterprise solutions