The AI Fundamentalists
Dr. Andrew Clark & Sid Mangalik
40 episodes
1 week ago
In the first episode of our series on metaphysics, Michael Herman joins us from Episode #14 on “What is consciousness?” to discuss reality. More specifically, the question of objects in reality. The team explores Plato’s forms, Aristotle’s realism, emergence, and embodiment to determine whether AI models can approximate from what humans uniquely experience. • Defining objects via properties, perception, and persistence • Banana and circle examples for identity and ideals • Plato versus Aristo...
Technology, Business, News, Tech News
All content for The AI Fundamentalists is the property of Dr. Andrew Clark & Sid Mangalik and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/40)
The AI Fundamentalists
Metaphysics and modern AI: What is reality?
In the first episode of our series on metaphysics, Michael Herman joins us from Episode #14 on “What is consciousness?” to discuss reality. More specifically, the question of objects in reality. The team explores Plato’s forms, Aristotle’s realism, emergence, and embodiment to determine whether AI models can approximate from what humans uniquely experience. • Defining objects via properties, perception, and persistence • Banana and circle examples for identity and ideals • Plato versus Aristo...
1 week ago
38 minutes

The AI Fundamentalists
Metaphysics and modern AI: What is thinking? - Series Intro
This episode is the intro to a special project by The AI Fundamentalists’ hosts and friends. We hope you're ready for a metaphysics mini‑series to explore what thinking and reasoning really mean and how those definitions should shape AI research. Join us for thought-provoking discussions as we tackle basic questions: What is metaphysics and its relevance to AI? What constitutes reality? What defines thinking? How do we understand time? And perhaps most importantly, should AI systems att...
4 weeks ago
16 minutes

The AI Fundamentalists
AI in practice: Guardrails and security for LLMs
In this episode, we talk about practical guardrails for LLMs with data scientist Nicholas Brathwaite. We focus on how to stop PII leaks, retrieve data, and evaluate safety with real limits. We weigh managed solutions like AWS Bedrock against open-source approaches and discuss when to skip LLMs altogether. • Why guardrails matter for PII, secrets, and access control • Where to place controls across prompt, training, and output • Prompt injection, jailbreaks, and adversarial handling • RAG des...
1 month ago
35 minutes
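The guardrail ideas above are general practice rather than a specific recipe from the episode. As a rough illustration of one output-side control, the following Python sketch (hypothetical patterns and function names) redacts a few common PII formats from an LLM response before it is returned:

import re

# Hypothetical output guardrail: redact a few common PII patterns (email,
# US-style SSN, phone number) from model output before returning it.
# Real deployments layer many more controls (input filtering, access checks,
# retrieval restrictions, human review).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(redact_pii("Contact Jane at jane.doe@example.com or 555-867-5309."))
# -> Contact Jane at [REDACTED_EMAIL] or [REDACTED_PHONE].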

The AI Fundamentalists
AI in practice: LLMs, psychology research, and mental health
We’re excited to have Adi Ganesan, a PhD researcher at Stony Brook University, Penn University, and Vanderbilt, on the show. We’ll talk about how large language models (LLMs) are being tested and used in psychology, citing examples from mental health research. Fun fact: Adi was Sid's research partner during his Ph.D. program. Discussion highlights: • Language models struggle with certain aspects of therapy, including being over-eager to solve problems rather than building understanding • Current mode...
2 months ago
42 minutes

The AI Fundamentalists
LLM scaling: Is GPT-5 near the end of exponential growth?
The release of OpenAI GPT-5 marks a significant turning point in AI development, but maybe not the one most enthusiasts had envisioned. The latest version seems to reveal the natural ceiling of current language model capabilities with incremental rather than revolutionary improvements over GPT-4. Sid and Andrew call back to some of the model-building basics that have led to this point to give their assessment of the early days of the GPT-5 release. • AI's version of Moore's Law is slow...
2 months ago
22 minutes

The AI Fundamentalists
AI governance: Building smarter AI agents from the fundamentals, part 4
Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes. Show notes: • Agentic AI systems require governance at every step: perception, reasoning, action, and learning • Error rates compound dramatically in multi-step processes - a 90% accurate model per step becomes only 65% accurate over four steps •...
3 months ago
37 minutes
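The compounding-error point in the show notes is simple arithmetic: if each step of an agentic pipeline succeeds independently with probability p, an n-step chain succeeds with probability p raised to the n. A minimal Python sketch with illustrative numbers:

# If each step succeeds independently with probability p, an n-step chain
# succeeds with probability p**n.
def chain_accuracy(per_step_accuracy: float, n_steps: int) -> float:
    return per_step_accuracy ** n_steps

print(chain_accuracy(0.90, 4))   # ~0.656, i.e. roughly 65% over four steps
print(chain_accuracy(0.99, 20))  # ~0.818, even strong steps erode over long chains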

The AI Fundamentalists
Linear programming: Building smarter AI agents from the fundamentals, part 3
We continue with our series about building agentic AI systems from the ground up and for desired accuracy. In this episode, we explore linear programming and optimization methods that enable reliable decision-making within constraints. Show notes: • Linear programming allows us to solve problems with multiple constraints, like finding optimal flights that meet budget requirements • The Lagrange multiplier method helps find optimal solutions within constraints by reformulating utility f...
3 months ago
29 minutes
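As a rough companion to the flight-budget theme in the show notes, here is a minimal linear program solved with SciPy. The numbers, variables, and constraints are made up for illustration and are not the episode's actual example:

import numpy as np
from scipy.optimize import linprog

# Toy problem: pick a mix of two flight options to minimize total travel time
# while keeping cost under a $500 budget and covering at least one full
# itinerary. linprog solves: minimize c @ x subject to A_ub @ x <= b_ub.
c = np.array([6.0, 9.0])          # travel time (hours) per option
A_ub = np.array([
    [300.0, 200.0],               # cost per option; total must stay under budget
    [-1.0, -1.0],                 # -(x1 + x2) <= -1, i.e. x1 + x2 >= 1
])
b_ub = np.array([500.0, -1.0])

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)], method="highs")
print(result.x, result.fun)       # optimal mix and its total travel time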

The AI Fundamentalists
Utility functions: Building smarter AI agents from the fundamentals, part 2
The hosts look at utility functions as the mathematical basis for building AI systems. They use the example of a travel agent that doesn’t get tired and can be scaled indefinitely to meet increasing customer demand. They also discuss the difference between this structured, economics-based approach and the problems of using large language models for multi-step tasks. This episode is part 2 of our series about building smarter AI agents from the fundamentals. Listen to Part 1 about mechanism ...
4 months ago
41 minutes
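To make the utility-function framing concrete, here is an illustrative Python sketch of a weighted utility over candidate itineraries, with made-up options and weights rather than anything taken from the episode:

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    price: float      # dollars
    duration: float   # hours
    comfort: float    # subjective score in [0, 1]

def utility(o: Option, w_price=0.5, w_time=0.3, w_comfort=0.2) -> float:
    # Higher is better: penalize price and duration, reward comfort.
    return -w_price * (o.price / 100) - w_time * o.duration + w_comfort * o.comfort

options = [
    Option("red-eye", price=180, duration=6.0, comfort=0.3),
    Option("nonstop", price=320, duration=3.5, comfort=0.8),
]
best = max(options, key=utility)
print(best.name, round(utility(best), 2))   # the agent picks the highest-utility option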

The AI Fundamentalists
Mechanism design: Building smarter AI agents from the fundamentals, Part 1
What if we've been approaching AI agents all wrong? While the tech world obsesses over larger language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design. This episode kicks off an exciting series where we're building AI agents "the hard way"—using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handl...
5 months ago
37 minutes

The AI Fundamentalists
Principles, agents, and the chain of accountability in AI systems
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities. Show highlights • Agents should be understood through the lens of the principal-agent relatio...
6 months ago
46 minutes

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2
Part 2 of this series could have easily been renamed “AI for science: The expert’s guide to practical machine learning.” We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply the supervised machine learning techniques from the previous episode to their research. Introduction to supervised ML for science (0:00) Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of “Supervised Machine Learning for Science: How to Stop Worryi...
7 months ago
41 minutes

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why we are excited to have Christoph Molnar return to the podcast with Timo Freiesleben. They are co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love your Black Box." We will talk about the perceived problems with automation in certain sciences and find out how sci...
7 months ago
27 minutes

The AI Fundamentalists
The future of AI: Exploring modeling paradigms
Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today. More AI agent disruptors (0:56) • Proxy from London start-up Convergence AI • Another hit to OpenAI, this product is available for free, unli...
8 months ago
33 minutes

The AI Fundamentalists
Agentic AI: Here we go again
Agentic AI is the latest foray into big-bet promises for businesses and society at large. While promising autonomy and efficiency, AI agents raise fundamental questions about their accuracy, governance, and the potential pitfalls of over-reliance on automation. Does this story sound vaguely familiar? Hold that thought. This discussion about the over-under of certain promises is for you. Show Notes The economics of LLMs and DeepSeek R1 (00:00:03) Reviewing recent developments in AI techn...
9 months ago
30 minutes

The AI Fundamentalists
Contextual integrity and differential privacy: Theory vs. application with Sebastian Benthall
What if privacy could be as dynamic and socially aware as the communities it aims to protect? Sebastian Benthall, a senior research fellow from NYU’s Information Law Institute, shows us how privacy is complex. He uses Helen Nissenbaum’s work with contextual integrity and concepts in differential privacy to explain the complexity of privacy. Our talk explains how privacy is not just about protecting data but also about following social rules in different situations, from healthcare to edu...
10 months ago
32 minutes
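For readers new to the differential privacy side of this conversation, the classic building block is the Laplace mechanism: add noise scaled to a query's sensitivity divided by the privacy budget epsilon before releasing a statistic. A minimal Python sketch with illustrative parameters, not drawn from the episode:

import numpy as np

rng = np.random.default_rng()

def laplace_count(true_count: int, sensitivity: float = 1.0, epsilon: float = 0.5) -> float:
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    # Smaller epsilon means more noise and a stronger privacy guarantee.
    return true_count + rng.laplace(0.0, sensitivity / epsilon)

print(laplace_count(42))   # a noisy version of the true count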

The AI Fundamentalists
Model documentation: Beyond model cards and system cards in AI governance
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in fields like finance to their adaptation in Silicon Valley. Our discussion also highlights the important role of early modelers and statisticians in advocating for a complete approach that includes the e...
12 months ago
27 minutes

The AI Fundamentalists
New paths in AI: Rethinking LLMs and model risk strategies
Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn’t changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion. Intro and news:...
1 year ago
39 minutes

The AI Fundamentalists
Complex systems: What data science can learn from astrophysics with Rachel Losacco
Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems. Prologue: Why it's important to bring stats back [00:00:03] • Announcement from the Ame...
1 year ago
41 minutes

The AI Fundamentalists
Preparing AI for the unexpected: Lessons from recent IT incidents
Can your AI models survive a big disaster? While a recent major IT incident with CrowdStrike wasn't AI-related, the magnitude and the reaction reminded us that no system, no matter how proven, is immune to failure. AI modeling systems are no different. Neglecting the best practices of building models can lead to unrecoverable failures. Discover how the three-tiered framework of robustness, resiliency, and anti-fragility can guide your approach to creating AI infrastructures that not only perform re...
1 year ago
34 minutes

The AI Fundamentalists
Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making. S...
1 year ago
41 minutes
