
The AI Actually crew tackles the pressing concerns keeping enterprise leaders up at night: shadow AI infiltrating organizations, the crucial distinction between machine learning and LLMs, and why context engineering matters more than prompt engineering. Jim Johnson takes the moderator chair, joined by regular panelist Mike Finley and special guests Andy Sweet (VP Enterprise AI Solutions) and Shanti Greene (Head of Data Science and AI Innovation).
Topics covered: shadow AI in the enterprise, combining machine learning with LLMs, prompt engineering vs. context engineering, memory and agent ops, local open-source models, the Kimi model release, and the real costs of enterprise AI.
Follow the Gang
Jim Johnson, AnswerRocket, Managing Partner - https://www.linkedin.com/in/jim-johnson-bb82451/
Mike Finley, AnswerRocket, CTO - https://www.linkedin.com/in/mikefinley/
Andy Sweet, AnswerRocket, VP Enterprise AI Solutions - https://www.linkedin.com/in/andrewdsweet/
Shanti Greene, AnswerRocket, Head of Data Science and AI Innovation - https://www.linkedin.com/in/shantigreene/
Chapters:
00:00 Introduction to AI Actually and the Team
01:59 Kimi Model Release: Should We Care?
06:59 Shadow AI Definition and Enterprise Impact
12:08 Leveraging Machine Learning and LLMs Together
26:00 Prompt Engineering vs. Context Engineering
28:13 Using LLMs to Write Prompts
29:56 Memory and Agent Ops
33:05 AI Literacy and Context Engineering
34:27 Stability and Model Changes
37:28 The Harvard MBA Analogy for AI Agents
43:01 Local Open Source AI Models: Pros & Cons
49:23 The Real Costs of Enterprise AI
Keywords: shadow AI, machine learning vs LLMs, context engineering, prompt engineering, enterprise AI, local models, agent operations, AI costs, Kimi model, AI governance