Cover art: https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/f9/0b/47/f90b4706-53e2-c3f5-b013-432850b86a63/mza_4088872900365630701.jpg/600x600bb.jpg
The AI Fundamentalists
Dr. Andrew Clark & Sid Mangalik
35 episodes
3 weeks ago
Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.
Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a 90% accurate model per step becomes only about 65% accurate over four steps (see the sketch below)
• ...
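A minimal sketch of the compounding-error arithmetic above, assuming each step succeeds independently with the same per-step accuracy (the 90%/four-step figures are the ones cited in the show notes):

```python
# End-to-end accuracy of a multi-step agentic pipeline, assuming each
# step succeeds independently with the same per-step accuracy.
per_step_accuracy = 0.90
steps = 4

end_to_end = per_step_accuracy ** steps
print(f"{end_to_end:.1%}")  # 65.6% -- roughly the 65% cited above
```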
Categories: Technology, Business, News, Tech News
All content for The AI Fundamentalists is the property of Dr. Andrew Clark & Sid Mangalik and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/35)
Mechanism design: Building smarter AI agents from the fundamentals, Part 1
What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design. This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handl...
3 weeks ago
37 minutes
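As a hedged illustration of what mechanism design can buy you (this example is ours, not from the episode): a second-price (Vickrey) auction is the textbook mechanism in which truthful bidding is every agent's dominant strategy, the kind of predictable, governable behavior described above.

```python
# Second-price (Vickrey) auction: the highest bidder wins but pays the
# second-highest bid, which makes truthful bidding a dominant strategy.
def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else bids[winner]
    return winner, price

winner, price = vickrey_auction({"agent_a": 10.0, "agent_b": 8.0, "agent_c": 5.0})
print(winner, price)  # agent_a 8.0 -- the winner pays the runner-up's bid
```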

Principles, agents, and the chain of accountability in AI systems
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between the principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.
Show highlights
• Agents should be understood through the lens of the principal-agent relatio...
1 month ago
46 minutes

Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2
Part 2 of this series could easily have been renamed "AI for science: The expert's guide to practical machine learning." We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply supervised machine learning techniques from the previous episode to their research.
Introduction to supervised ML for science (0:00)
Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of "Supervised Machine Learning for Science: How to Stop Worryi...
2 months ago
41 minutes

Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why we are excited to have Christoph Molnar return to the podcast with Timo Freiesleben. They are co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love your Black Box." We will talk about the perceived problems with automation in certain sciences and find out how sci...
2 months ago
27 minutes

The future of AI: Exploring modeling paradigms
Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.
More AI agent disruptors (0:56)
Proxy from London start-up Convergence AI
Another hit to OpenAI, this product is available for free, unli...
3 months ago
33 minutes

Agentic AI: Here we go again
Agentic AI is the latest foray into big-bet promises for businesses and society at large. While promising autonomy and efficiency, AI agents raise fundamental questions about their accuracy, governance, and the potential pitfalls of over-reliance on automation. Does this story sound vaguely familiar? Hold that thought. This discussion about the over-under of certain promises is for you.
Show Notes
The economics of LLMs and DeepSeek R1 (00:00:03)
Reviewing recent developments in AI techn...
4 months ago
30 minutes

Contextual integrity and differential privacy: Theory vs. application with Sebastian Benthall
What if privacy could be as dynamic and socially aware as the communities it aims to protect? Sebastian Benthall, a senior research fellow at NYU's Information Law Institute, shows us just how complex privacy really is. He draws on Helen Nissenbaum's work on contextual integrity and on concepts from differential privacy to explain that complexity. Our talk explains how privacy is not just about protecting data but also about following social rules in different situations, from healthcare to edu...
5 months ago
32 minutes

Model documentation: Beyond model cards and system cards in AI governance
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in fields like finance to their adaptation in Silicon Valley. Our discussion also highlights the important role of early modelers and statisticians in advocating for a complete approach that includes the e...
7 months ago
27 minutes

New paths in AI: Rethinking LLMs and model risk strategies
Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year: what has changed and what hasn't in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.
Intro and news: ...
8 months ago
39 minutes

Complex systems: What data science can learn from astrophysics with Rachel Losacco
Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems.
Prologue: Why it's important to bring stats back [00:00:03]
Announcement from the Ame...
9 months ago
41 minutes

Preparing AI for the unexpected: Lessons from recent IT incidents
Can your AI models survive a big disaster? While a recent major IT incident with CrowdStrike wasn't AI-related, the magnitude and reaction reminded us that no system, no matter how proven, is immune to failure. AI modeling systems are no different. Neglecting the best practices of building models can lead to unrecoverable failures. Discover how the three-tiered framework of robustness, resiliency, and anti-fragility can guide your approach to creating AI infrastructures that not only perform re...
9 months ago
34 minutes

Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making. S...
10 months ago
41 minutes

Data lineage and AI: Ensuring quality and compliance with Matt Barlin
Ready to uncover the secrets of modern systems engineering and the future of AI? Join us for an enlightening conversation with Matt Barlin, the Chief Science Officer of Valence. Matt's extensive background in systems engineering and data lineage sets the stage for a fascinating discussion. He sheds light on the historical evolution of the field, the critical role of documentation, and the early detection of defects in complex systems. This episode promises to expand your understanding of mode...
11 months ago
28 minutes

Differential privacy: Balancing data privacy and utility in AI
Explore the basics of differential privacy and its critical role in protecting individual anonymity. The hosts explain the latest guidelines and best practices for applying differential privacy to the data used by AI models. Learn how this method also ensures that personal data remains confidential, even when datasets are analyzed or hacked.
Show Notes
Intro and AI news (00:00)
Google AI search tells users to glue pizza and eat rocks
Gary Marcus on break? (Maybe an X-only break)...
1 year ago
28 minutes
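A minimal sketch of one standard differential privacy technique, the Laplace mechanism (the query, epsilon, and count here are illustrative assumptions, not taken from the episode):

```python
# Laplace mechanism: a counting query has sensitivity 1 (adding or
# removing one person changes the count by at most 1), so adding
# Laplace(sensitivity/epsilon) noise yields epsilon-differential privacy.
import numpy as np

rng = np.random.default_rng()

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

print(private_count(true_count=412, epsilon=0.5))  # noisy answer; smaller epsilon, more noise
```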

Responsible AI: Does it help or hurt innovation? With Anthony Habayeb
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes
Prologu...
1 year ago
45 minutes

Baseline modeling and its critical role in AI and business performance
Baseline modeling is a necessary part of model validation. In our expert opinion, it should be required before model deployment. There are many baseline modeling types, and in this episode we discuss their use cases, strengths, and weaknesses. We're sure you'll appreciate a fresh take on how to improve your modeling practices.
Show notes
Introductions and news: why reporting and visibility is a good thing for AI (0:03)
Spoiler alert: Providing visibility to AI bias audits does NOT mean ...
1 year ago
36 minutes
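As a hedged sketch of why baselines matter (the tiny dataset and model predictions below are made up for illustration): before trusting a complex model, check that it actually beats a trivial baseline such as always predicting the majority class.

```python
# Compare a candidate model against a majority-class baseline; a model
# that can't clearly beat the baseline isn't ready for deployment.
from collections import Counter

def majority_class_baseline(y_train: list[int]) -> int:
    return Counter(y_train).most_common(1)[0][0]

def accuracy(y_true: list[int], y_pred: list[int]) -> float:
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_train = [0, 0, 0, 1, 0, 1, 0, 0]
y_test = [0, 1, 0, 0]
model_preds = [0, 1, 1, 0]  # hypothetical model output

baseline = majority_class_baseline(y_train)
print(accuracy(y_test, [baseline] * len(y_test)))  # 0.75
print(accuracy(y_test, model_preds))               # 0.75 -- no better than the baseline
```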

Information theory and the complexities of AI model monitoring
In this episode, we explore information theory and the not-so-obvious shortcomings of its popular metrics for model monitoring, and where non-parametric statistical methods can serve as the better option.
Introduction and latest news (0:03)
Gary Marcus has written an article questioning the hype around generative AI, suggesting it may not be as transformative as previously thought. This is in contrast to announcements out of the NVIDIA conference during the same week.
Information theory and it...
1 year ago
21 minutes
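A hedged sketch of the non-parametric alternative mentioned above (the data and threshold are illustrative): a two-sample Kolmogorov-Smirnov test compares training and production samples directly, with none of the binning or density-estimation choices that divergence metrics require.

```python
# Drift check on a model input: the two-sample Kolmogorov-Smirnov test
# works directly on raw samples, no binning or density estimation needed.
import random

from scipy.stats import ks_2samp

random.seed(0)
reference = [random.gauss(0.0, 1.0) for _ in range(1000)]  # training-time feature values
live = [random.gauss(0.5, 1.0) for _ in range(1000)]        # shifted production values

stat, p_value = ks_2samp(reference, live)
if p_value < 0.01:
    print(f"drift detected (KS statistic = {stat:.3f})")
```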

The importance of anomaly detection in AI
In this episode, the hosts focus on the basics of anomaly detection in machine learning and AI systems, including its importance and how it is implemented. They also touch on the topic of large language models, the (in)accuracy of data scraping, and the importance of high-quality data when employing various detection methods. You'll even gain some techniques you can use right away to improve your training data and your models.
Intro and discussion (0:03)
Questions about Information Theory f...
1 year ago
35 minutes
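In the spirit of this episode, a minimal sketch of one detection technique you could apply to training data (the readings and the two-sigma threshold are illustrative assumptions):

```python
# Flag anomalies with a simple z-score rule: values more than two
# standard deviations from the mean are treated as suspect.
import statistics

def zscore_anomalies(values: list[float], threshold: float = 2.0) -> list[float]:
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

readings = [9.8, 10.1, 10.0, 9.9, 10.2, 9.7, 10.0, 55.0]  # one bad sensor value
print(zscore_anomalies(readings))  # [55.0]
```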

What is consciousness, and does AI have it?
We're taking a slight detour from modeling best practices to explore questions about AI and consciousness. With special guest Michael Herman, co-founder of Monitaur and TestDriven.io, the team discusses different philosophical perspectives on consciousness and how these apply to AI. They also discuss the potential dangers of AI in its current state and why starting fresh instead of iterating can make all the difference in achieving characteristics of AI that might resemble consciousness...
1 year ago
32 minutes

Upskilling for AI: Roles, organizations, and new mindsets
Data scientists, researchers, engineers, marketers, and risk leaders find themselves at a crossroads: expand their skills or risk obsolescence. The hosts discuss how a growth mindset and "the fundamentals" of AI can help. Our episode shines a light on this vital shift, equipping listeners with strategies to elevate their skills and integrate multidisciplinary knowledge. We share stories from the trenches on how each role contributes to robust AI solutions that adhere to ethical standards, and how...
1 year ago
41 minutes
