In this episode, we welcome David Wolpert, a Professor at the Santa Fe Institute renowned for his groundbreaking work across multiple disciplines—from physics and computer science to game theory and complexity.
* Note: If you enjoy our podcast conversations, please join us for the Artificiality Summit on October 23-25 in Bend, Oregon for many more in-person conversations like these! Learn more about the Summit at www.artificiality.world/summit.
We reached out to David to explore the mathematics of meaning—a concept that's becoming crucial as we live more deeply with artificial intelligences. If machines can hold their own mathematical understanding of meaning, how does that reshape our interactions, our shared reality, and even what it means to be human?
David takes us on a journey through his paper "Semantic Information, Autonomous Agency and Non-Equilibrium Statistical Physics," co-authored with Artemy Kolchinsky. While the paper is mathematically rigorous, our conversation explores these complex ideas in accessible terms.
At the core of our discussion is a novel framework for understanding meaning itself—not just as a philosophical concept, but as something that can be mathematically formalized. David explains how we can move beyond Claude Shannon's syntactic information theory (which focuses on the transmission of bits) to a deeper understanding of semantic information (what those bits actually mean to an agent).
Drawing from Judea Pearl's work on causality, Schrödinger's insights on life, and stochastic thermodynamics, David presents a unified framework where meaning emerges naturally from an agent's drive to persist into the future. This approach provides a mathematical basis for understanding what makes certain information meaningful to living systems—from humans to single cells.
Our conversation ventures into:
We finish by talking about David's current work on a potentially concerning horizon: how distributed AI systems interacting through smart contracts could create scenarios beyond our mathematical ability to predict—a "distributed singularity" that might emerge in as little as five years. We wrote about this work here.
For anyone interested in artificial intelligence, complexity science, or the fundamental nature of meaning itself, this conversation offers rich insights from one of today's most innovative interdisciplinary thinkers.
About David Wolpert:
David Wolpert is a Professor at the Santa Fe Institute and one of the modern era's true polymaths. He received his PhD in physics from UC Santa Barbara but has made seminal contributions across numerous fields. His research spans machine learning (where he formulated the "No Free Lunch" theorems), statistical physics, game theory, distributed intelligence, and the foundations of inference and computation. Before joining SFI, Wolpert held positions at NASA and Stanford. His work consistently bridges disciplinary boundaries to address fundamental questions about complex systems, computation, and the nature of intelligence.
Thanks again to Jonathan Coulton for our music.
In this remarkable conversation, Michael Levin (Tufts University) and Blaise Agüera y Arcas (Google) examine what happens when biology and computation collide at their foundations. Their recent papers—arriving simultaneously yet from distinct intellectual traditions—illuminate how simple rules generate complex behaviors that challenge our understanding of life, intelligence, and agency.
Michael’s "Self-Sorting Algorithm" reveals how minimal computational models demonstrate unexpected problem-solving abilities resembling basal intelligence—where just six lines of deterministic code exhibit dynamic adaptability we typically associate with living systems. Meanwhile, Blaise's "Computational Life" investigates how self-replicating programs emerge spontaneously from random interactions in digital environments, evolving complexity without explicit design or guidance.
Their parallel explorations suggest a common thread: information processing underlies both biological and computational systems, forming an endless cycle where information → computation → agency → intelligence → information. This cyclical relationship transcends the traditional boundaries between natural and artificial systems.
The conversation unfolds around several interwoven questions:
- How does genuine agency emerge from simple rule-following components?
- Why might intelligence be more fundamental than life itself?
- How do we recognize cognition in systems that operate unlike human intelligence?
- What constitutes the difference between patterns and the physical substrates expressing them?
- How might symbiosis between humans and synthetic intelligence reshape both?
Perhaps most striking is their shared insight that we may already be surrounded by forms of intelligence we're fundamentally blind to—our inherent biases limiting our ability to recognize cognition that doesn't mirror our own. As Michael notes, "We have a lot of mind blindness based on our evolutionary firmware."
The timing of their complementary work isn't mere coincidence but reflects a cultural inflection point where our understanding of intelligence is expanding beyond anthropocentric models. Their dialogue offers a conceptual framework for navigating a future where the boundaries between biological and synthetic intelligence continue to dissolve, not as opposing forces but as variations on a universal principle of information processing across different substrates.
For anyone interested in the philosophical and practical implications of emergent intelligence—whether in cells, code, or consciousness—this conversation provides intellectual tools for understanding the transformed relationship between humans and technology that lies ahead.
Links:
------
Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.
Thanks again to Jonathan Coulton for our music.
In this episode, we welcome Maggie Jackson, whose latest book, Uncertain, has become essential reading for navigating today’s complex world. Known for her groundbreaking work on attention and distraction, Maggie now turns her focus to uncertainty—not as a problem to be solved, but as a skill to be cultivated.
Note: Uncertain won an Artificiality Book Award in 2024—check out our review here: https://www.artificiality.world/artificiality-book-awards-2024/
In the interview, we explore the neuroscience of uncertainty, the cultural biases that make us crave certainty, and why our discomfort with the unknown may be holding us back. Maggie unpacks the two core types of uncertainty—what we can’t know and what we don’t yet know—and explains why understanding this distinction is crucial for thinking well in the digital age.
Our conversation also explores the implications of AI—as technology increasingly mediates our reality, how do we remain critical thinkers? How do we resist the illusion of certainty in a world of algorithmically generated answers?
Maggie’s insights challenge us to reframe uncertainty—not as fear, but as an opportunity for discovery, adaptability, and even creativity. If you’ve ever felt overwhelmed by ambiguity or pressured to always have the “right” answer, this episode offers a refreshing perspective on why being uncertain might be one of our greatest human strengths.
Links:
Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.
Thanks again to Jonathan Coulton for our music.
In this episode, we talk with Greg Epstein—humanist chaplain at Harvard and MIT, bestselling author, and a leading voice on the intersection of technology, ethics, and belief systems. Greg’s latest book, Tech Agnostic, offers a provocative argument: Silicon Valley isn’t just a powerful industry—it has become the dominant religion of our time.
We explore the deep parallels between big tech and organized religion, from sacred texts and prophets to digital congregations and AI-driven eschatology. The conversation also covers digital Puritanism, the "unwitting worshipers" at tech's altars, and the theological implications of AI doomerism.
But this isn’t just a critique—it’s a call for a Reformation. Greg lays out a path toward a more humane and ethical future for technology, one that resists unchecked power and prioritizes human values over digital dogma.
Join us for a deep, thought-provoking conversation on faith, fear, and the future of being human in an age where technology doesn’t just shape our world—it defines what we believe in.
Do you enjoy our conversations like this one? Then subscribe on your favorite platform, subscribe to our emails (free) at Artificiality.world, and check out the Artificiality Summit—our mind-expanding retreat in Bend, Oregon at Artificiality.world/summit.
Thanks again to Jonathan Coulton for our music.
In this episode, we sit down with the ever-innovative Chris Messina—creator of the hashtag, top product hunter on Product Hunt, and trusted advisor to startups navigating product development and market strategy.
Recording from Ciel Media’s new studio in Berkeley, we explore the evolving landscape of generative AI and the widening gap between its immense potential and real-world usability. Chris introduces a compelling framework, distinguishing AI as a *tool* versus a *medium*, which helps explain the stark divide in how different users engage with these technologies.
Our conversation examines key challenges: How do we build trust in AI? Why is transparency in computational reasoning critical? And how might community collaboration shape the next generation of AI products?
Drawing from his deep experience in social media and emerging tech, Chris offers striking parallels between early internet adoption and today’s AI revolution, suggesting that meaningful integration will require both time and a generational shift in thinking.
What makes this discussion particularly valuable is Chris’s vision for the future of AI interaction—where technology moves beyond query-response models to become a truly collaborative medium, transforming how we create, problem-solve, and communicate.
Links:
Chris: https://chrismessina.me
Ciel Media: https://cielcreativespace.com
D. Graham Burnett will tell you his day job is as a professor of the history of science at Princeton University. He is also co-founder of the Strother School of Radical Attention and has been associated with the Friends of Attention since 2018. But none of those positions adequately describes Graham.
His bio says that he “works at the intersection of historical inquiry and artistic practice.” He writes, he performs, he makes things. He describes himself as an attention activist. Perhaps most importantly for us, Graham helps you see the world differently—and more clearly.
Graham has powerful views on the effect of technology on our attention. We often riff on his idea that technology has fracked our attention into little commoditizable bits. His work has strongly influenced our concern about what might happen if the same extractive practices of the attention economy are applied to the future AI-powered intimacy economy.
We were thrilled to have Graham on the pod for a wide-ranging conversation about attention, intimacy, and much more.
Links:
https://dgrahamburnett.net
https://www.schoolofattention.org
https://www.friendsofattention.net
---
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email: https://www.artificiality.world
Thanks to Jonathan Coulton for our music.
At the Artificiality Summit 2024, Michael Levin, distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute, gave a lecture about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. This work has led to new approaches to regenerative medicine, cancer, and bioengineering, but also to new ways to understand evolution and embodied minds. He sketched out a space of possibilities—freedom of embodiment—which facilitates imagining a hopeful future of "synthbiosis", in which AI is just one of a wide range of new bodies and minds.
Bio: Michael Levin, Distinguished Professor in the Biology department and Vannevar Bush Chair, serves as director of the Tufts Center for Regenerative and Developmental Biology. Recent honors include the Scientist of Vision award and the Distinguished Scholar Award. His group's focus is on understanding the biophysical mechanisms that implement decision-making during complex pattern regulation, and harnessing endogenous bioelectric dynamics toward rational control of growth and form. The lab's current main directions are:
- Understanding how somatic cells form bioelectrical networks for storing and recalling pattern memories that guide morphogenesis;
- Creating next-generation AI tools for helping scientists understand top-down control of pattern regulation (a new bioinformatics of shape); and
- Using these insights to enable new capabilities in regenerative medicine and engineering.
www.artificiality.world/summit
Our opening keynote from the Imagining Summit held in October 2024 in Bend, Oregon. Join us for the next Artificiality Summit on October 23-25, 2025! Read about the 2024 Summit here: https://www.artificiality.world/the-imagining-summit-we-imagined-and-hoped-and-we-cant-wait-for-next-year-2/ And join us for the 2025 Summit here: https://www.artificiality.world/summit/
First:
- Apologies for the audio! We had a production error…
What’s new:
- DeepSeek has created breakthroughs in both how AI systems are trained (making training much more affordable) and how they run in real-world use (making them faster and more efficient)
Details
- FP8 Training: Working With Less Precise Numbers
- Traditional AI training requires extremely precise numbers
- DeepSeek found you can use less precise numbers (like rounding $10.857643 to $10.86)
- Cut memory and computation needs significantly with minimal impact
- Like teaching someone math using rounded numbers instead of carrying every decimal place (a rounding sketch follows this list)
- Learning from Other AIs (Distillation)
- Traditional approach: AI learns everything from scratch by studying massive amounts of data
- DeepSeek's approach: Use existing AI models as teachers
- Like having experienced programmers mentor new developers
- Trial & Error Learning (for their R1 model)
- Started with some basic "tutoring" from advanced models
- Then let it practice solving problems on its own
- When it found good solutions, these were fed back into training
- Led to "Aha moments" where R1 discovered better ways to solve problems
- Finally, polished its ability to explain its thinking clearly to humans
- Smart Team Management (Mixture of Experts)
- Instead of one massive system that does everything, built a team of specialists
- Like running a software company with:
- 256 specialists who focus on different areas
- 1 generalist who helps with everything
- Smart project manager who assigns work efficiently
- For each task, only need 8 specialists plus the generalist
- More efficient than having everyone work on everything (a routing sketch follows this list)
- Efficient Memory Management (Multi-head Latent Attention)
- Traditional AI is like keeping complete transcripts of every conversation
- DeepSeek's approach is like taking smart meeting minutes
- Captures key information in compressed format
- Similar to how JPEG compresses images
- Looking Ahead (Multi-Token Prediction)
- Traditional AI reads one word at a time
- DeepSeek looks ahead and predicts two words at once
- Like a skilled reader who can read ahead while maintaining comprehension
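To make the FP8 "less precise numbers" idea above concrete, here is a minimal Python sketch of fake quantization: values are snapped to a coarse grid, much as FP8 keeps far fewer significant digits than full precision. The function name and the fixed scale are illustrative assumptions, not DeepSeek's actual training code.

```python
import numpy as np

def fake_quantize(x, scale=0.01):
    """Round values to a coarse grid and back, mimicking low-precision storage."""
    return np.round(x / scale) * scale

weights = np.array([10.857643, -0.031219, 2.500004])
print(fake_quantize(weights))  # [10.86 -0.03  2.5 ] -- close enough for most purposes
```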
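The Mixture of Experts item above can also be sketched in a few lines. In this hedged toy version, a router scores 256 "specialists," only the 8 best-matching ones plus one always-on "generalist" process the token, and their outputs are blended. Random matrices stand in for the expert networks; DeepSeek's real gating and load balancing are far more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
d, num_experts, top_k = 16, 256, 8

experts = [rng.standard_normal((d, d)) for _ in range(num_experts)]  # the specialists
shared_expert = rng.standard_normal((d, d))                          # the generalist
router = rng.standard_normal((d, num_experts))                       # scores the specialists

def moe_forward(x):
    scores = x @ router                       # how relevant each specialist looks
    top = np.argsort(scores)[-top_k:]         # keep only the best-matching few
    gate = np.exp(scores[top])
    gate /= gate.sum()                        # normalize their contributions
    out = x @ shared_expert                   # the generalist always participates
    for w, i in zip(gate, top):
        out += w * (x @ experts[i])           # add the chosen specialists' work
    return out

print(moe_forward(rng.standard_normal(d)).shape)  # (16,): one token, only 9 of 257 experts used
```

Per token, most of the specialists sit idle, which is where the efficiency comes from.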
Why This Matters
- Cost Revolution: A training cost of $5.6M (vs. hundreds of millions) suggests a future where AI development isn't limited to tech giants.
- Working Around Constraints: Shows how limitations can drive innovation—DeepSeek achieved state-of-the-art results without access to the most powerful chips (at least that’s the best conclusion at the moment).
What’s Interesting
- Efficiency vs Power: Challenges the assumption that advancing AI requires ever-increasing computing power; sometimes smarter engineering beats brute force.
- Self-Teaching AI: R1's ability to develop reasoning capabilities through pure reinforcement learning suggests AIs can discover problem-solving methods on their own.
- AI Teaching AI: The success of distillation shows how knowledge can be transferred between AI models, potentially leading to compounding improvements over time.
- IP for Free: If DeepSeek can be such a fast follower through distillation, what's the advantage for OpenAI, Google, or another company in releasing a novel model?
We’re excited to welcome writers and directors Hans Block and Moritz Riesewieck to the podcast. Their debut film, ‘The Cleaners,’ about the shadow industry of digital censorship, premiered at the Sundance Film Festival in 2018 and has since won numerous international awards and been screened at more than 70 international film festivals.
We invited Hans and Moritz to the podcast to talk about their latest film, Eternal You, which examines the story of people who live on as digital replicants—and the people who keep them living on. We found the film to be quite powerful. At times inspiring and at others disturbing and distressing. Can a generative ghost help people through their grief or trap them in it? Is falling for a digital replica healthy or harmful? Are the companies creating these technologies benefitting their users or extracting from them?
Eternal You is a powerful and important film. We highly recommend taking the time to watch it—and allowing for time to digest and consider. Hans and Moritz have done a brilliant job exploring a challenging and delicate topic with kindness and care.
Bravo.
------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
Briefing: How AI Affects Critical Thinking and Cognitive Offloading
What This Paper Highlights
- The study explores the growing reliance on AI tools and its effects on critical thinking, specifically through cognitive offloading—delegating mental tasks to AI systems.
- Key finding: Frequent AI tool use is strongly associated with reduced critical thinking abilities, especially among younger users, as they increasingly rely on AI for decision-making and problem-solving.
- Cognitive offloading acts as a mediating factor, reducing opportunities for deep, reflective thinking.
Why This Is Important
- Shaping Minds: Critical thinking is central to decision-making, problem-solving, and navigating misinformation. If AI reliance erodes these skills, it has profound implications for education, work, and citizenship.
- Generational Divide: Younger users show higher dependence on AI, suggesting that future generations may grow less capable of independent thought unless deliberate interventions are made.
- Education and Policy: There’s an urgent need for strategies to balance AI integration with fostering cognitive skills, ensuring users remain active participants rather than passive consumers.
What’s Curious and Interesting
- Cognitive Shortcuts: Participants increasingly trust AI to make decisions, yet this trust fosters "cognitive laziness," with many users skipping steps like verifying or analyzing information.
- AI’s Double-Edged Sword: While AI improves efficiency and provides tailored solutions, it also reduces engagement in activities that develop critical thinking, like analyzing arguments or synthesizing diverse viewpoints.
- Education as a Buffer: People with higher educational attainment are better at critically engaging with AI outputs, suggesting that education plays a key role in mitigating these risks.
What This Tells Us About the Future
- Critical Thinking at Risk: AI tools will only grow more pervasive. Without proactive efforts to maintain cognitive engagement, critical thinking could erode further, leaving society more vulnerable to misinformation and manipulation.
- Educational Reforms Needed: Active learning strategies and media literacy are essential to counterbalance AI’s convenience, teaching people how to engage critically even when AI offers "easy answers."
- Shifting Cognitive Norms: As AI takes over more routine tasks, we may need to redefine what skills are critical for thriving in an AI-driven world, focusing more on judgment, creativity, and ethical reasoning.
AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking by Michael Gerlich
https://www.mdpi.com/2075-4698/15/1/6
We’re excited to welcome Craig Wheeler to the podcast. Craig is an astrophysicist and Professor at the University of Texas at Austin. Over his career, he has made significant contributions to our understanding of supernovae, black holes, and the nature of the universe itself.
Craig’s new book, The Path to Singularity: How Technology Will Challenge the Future of Humanity, offers an exploration of how exponential technological change could upend life as we know it. Drawing on his background as an astrophysicist, Craig examines how humanity’s current trajectory is shaped by forces like AI, robotics, neuroscience, and space exploration—all of which are advancing at speeds that may outpace our ability to adapt.
The book is an extension of a course Craig taught at UT Austin, where he challenged students to project humanity’s future over the next 100, 1,000, and even 100,000 years. His students explored ideas about AI, consciousness, and human evolution, ultimately shaping the themes that inspired the book. We found it fascinating that, as he says in the interview, the majority of the futures his students projected were not positive for humanity.
We wonder: Who wants to live in a dystopian future? And, for those of us who don’t: What can we do about it? This led to our interest in talking with Craig.
We hope you enjoy our conversation with Craig Wheeler.
---------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
Science Briefing: What AI Agents Tell Us About the Future of Human Experience
What These Papers Highlight
- AI agents are improving but far from capable of replacing human tasks. Even the best models fail at simple things humans find intuitive, like handling social interactions or navigating pop-ups.
- One paper benchmarks agent performance in workplace-like tasks, showing just 24% success on even simple tasks. The other argues that agents alone aren’t enough—we need a broader system to make them useful.
Why This Matters
- Human Compatibility: Agents don’t just need to complete tasks—they need to work in ways that humans trust and find relatable.
- New Ecosystems: Instead of relying on better agents alone, we might need personalized digital “Sims” that act as go-betweens, understanding us and adapting to our preferences.
- Humor in Failure: From renaming a coworker to "solve" a problem to endlessly struggling with pop-ups, these failures highlight how far AI still is from grasping human context.
What’s Interesting
- Humans vs. Machines: AI performs better on coding than on “easier” tasks like scheduling or teamwork. Why? It’s great at structure, bad at messiness.
- Sims as a Bridge: The idea of digital versions of ourselves (Sims) managing agents for us could change how we relate to technology, making it feel less like a tool and more like a collaborator.
- Impact on Trust: The future of agents will hinge on whether they can align with human values, privacy, and quirks—not just perform better technically.
What’s Next for Agents
- Can agents learn to navigate our complexity, like social norms or context-sensitive decisions?
- Will ecosystems with Sims and Assistants make AI feel more human—and less robotic?
- How will trust and personalization shape whether people actually adopt these systems?
Product Briefing: Always-On AI Wearables
What’s New
- New AI wearables launched at CES 2025 that continuously listen. From earbuds (HumanPods) to wristbands (Bee Pioneer) to stick-it-to-your-head pods (Omi), these cheap hardware devices are attempting to be your always-listening assistants.
Why This Matters
- From Wake Words to Always-On: These devices listen passively—no activation required—requiring the user to opt out by muting rather than opting in.
- Privacy? Pfft: Not only are these devices small enough to hide and record without anyone knowing, but the Omi only turns on a light when it is not recording.
- Razor-Razorblade Model: With hardware prices below $100, these devices are priced to allow for easy experimentation—the value is in the software subscription.
What’s Interesting
- Mind-reading?: Omi claims to detect brain signals, allowing users to think their commands instead of speaking.
- It’s About Apps: The app store is back as a business model. But are these startups ready for the challenge?
- Memory Prosthetics: These devices record, transcribe, and summarize everything—generating to-do lists and more.
The Human Experience
- AI as a Second Self?: These devices don’t just assist; they remember, organize, and anticipate—how will that reshape how we interact with and recall our own experiences?
- Can We Still Forget?: If everything in our lives is logged and searchable, do we lose the ability to let go?
- Context Collapse: AI may summarize what it hears, but can it understand the complexity of human relationships, emotions, and social cues?
We’re excited to welcome Doyne Farmer to the podcast. Doyne is a pioneering complexity scientist and a leading thinker on economic systems, technological change, and the future of society. Doyne is a Professor of Complex Systems at the University of Oxford, an external professor at the Santa Fe Institute, and Chief Scientist at Macrocosm.
Doyne’s work spans an extraordinary range of topics, from agent-based modeling of financial markets to exploring how innovation shapes the long-term trajectory of human progress. At the heart of Doyne’s thinking is a focus on prediction—not in the narrow sense of forecasting next week’s market trends, but in understanding the deep, generative forces that shape the evolution of technology and society.
His new book, Making Sense of Chaos: A Better Economics for a Better World, is a reflection on the limitations of traditional economics and a call to embrace the tools of complexity science. In it, Doyne argues that today’s economic models often fall short because they assume simplicity where there is none. What’s especially compelling about Doyne’s perspective is how he uses complexity science to challenge conventional economic assumptions. While traditional economics often treats markets as rational and efficient, Doyne reveals the messy, adaptive, and unpredictable nature of real-world economies. His ideas offer a powerful framework for rethinking how we approach systemic risk, innovation policy, and the role of AI-driven technologies in shaping our future.
We believe Doyne’s ideas are essential for anyone trying to understand the uncertainties we face today. He doesn’t just highlight the complexity—he shows how to navigate it. By tracking the hidden currents that drive change, he helps us see the bigger picture of where we might be headed.
We hope you enjoy our conversation with Doyne Farmer.
------------------------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
We're excited to welcome Jamie Boyle to the podcast. Jamie is a law professor and author of the thought-provoking book The Line: AI and the Future of Personhood.
In The Line, Jamie challenges our assumptions about personhood and humanity, arguing that these boundaries are more fluid than traditionally believed. He explores diverse contexts like animal rights, corporate personhood, and AI development to illustrate how debates around personhood permeate philosophy, law, art, and morality.
Jamie uses fascinating examples from science fiction, legal history, and philosophy to illustrate the challenges we face in defining the rights and moral status of artificial entities. He argues that grappling with these questions may lead to a profound re-examination of human identity and consciousness.
What's particularly compelling about Jamie’s approach is how he frames this as a journey of moral expansion, drawing parallels to how we've expanded our circle of empathy in the past. He also offers surprising insights into legal history, revealing how corporate personhood emerged more by accident than design—a cautionary tale as we consider AI rights.
We believe this book is both ahead of its time and right on time. It sharpens our understanding of difficult concepts—namely, that the boundaries between organic and synthetic are blurring, creating profound existential challenges we need to prepare for now.
To quote Jamie from The Line: "Grappling with the question of synthetic others may bring about a reexamination of the nature of human identity and consciousness. I want to stress the potential magnitude of that reexamination. This process may offer challenges to our self conception unparalleled since secular philosophers declared that we would have to learn to live with a god shaped hole at the center of the universe."
Let's dive into our conversation with Jamie Boyle.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds
We're excited to welcome to the podcast Shannon Vallor, professor of ethics and technology at the University of Edinburgh, and the author of The AI Mirror.
In her book, Shannon invites us to rethink AI—not as a futuristic force propelling us forward, but as a reflection of our past, capturing both our human triumphs and flaws in ways that shape our present reality.
In The AI Mirror, Shannon uses the powerful metaphor of a mirror to illustrate the nature of AI. She argues that AI doesn’t represent a new intelligence; rather, it reflects human cognition in all its complexity, limitations, and distortions. Like a mirror, AI is backward-looking, constrained by the data we’ve already provided it. It amplifies our biases and misunderstandings, giving us back a shallow, albeit impressive, reflection of our intelligence.
We think this is one of the best books on AI for a general audience that has been published this year. Shannon’s mirror metaphor does more than just critique AI—it reassures. By casting AI as a reflection rather than an independent force, she validates a crucial distinction: AI may be an impressive tool, but it’s still just that—a mirror of our past. Humanity, Shannon suggests, remains something separate, capable of innovation and growth beyond the confines of what these systems can reflect. This insight offers a refreshing confidence amidst the usual AI anxieties: the real power, and responsibility, remains with us.
Let’s dive into our conversation with Shannon Vallor.
-----------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds
We're excited to welcome to the podcast Matt Beane, Assistant Professor at UC Santa Barbara and the author of the book "The Skill Code: How to Save Human Ability in an Age of Intelligent Machines."
Matt’s research investigates how AI is changing the traditional apprenticeship model, creating a tension between short-term performance gains and long-term skill development. His work has particularly focused on the relationship between junior and senior surgeons in the operating theater. As he told us, "In robotic surgery, I was seeing that the way technology was being handled in the operating room was assassinating this relationship." He observed that junior surgeons now often just set up the robot and watch the senior surgeon operate for hours, epitomizing a broader trend where AI and advanced technologies are reshaping how we transfer skills from experts to novices.
In "The Skill Code," Matt argues that three key elements are essential for developing expertise: challenge, complexity, and connection. He points out that real learning often involves discomfort, saying, "Everyone intuitively knows when you really learned something in your life. It was not exactly a pleasant experience, right?"
Matt's research shows that while AI can significantly boost productivity, it may be undermining critical aspects of skill development. He warns that the traditional model of "See one, do one, teach one" is becoming "See one, and if-you're-lucky do one, and not-on-your-life teach one." In our conversation, we explore these insights and discuss how we might preserve human ability in an age of intelligent machines.
Let’s dive into our conversation with Matt Beane on the future of human skill in an AI-driven world.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds
We're excited to welcome to the podcast Emily M. Bender, professor of computational linguistics at the University of Washington.
As our listeners know, we enjoy tapping expertise in fields adjacent to the intersection of humans and AI. We find Emily’s expertise in linguistics to be particularly important when understanding the capabilities and limitations of large language models—and that’s why we were eager to talk with her.
Emily is perhaps best known in the AI community for coining the term "stochastic parrots" to describe these models, highlighting their ability to mimic human language without true understanding. In her paper "On the Dangers of Stochastic Parrots," Emily and her co-authors raised crucial questions about the environmental, financial, and social costs of developing ever-larger language models. Emily has been a vocal critic of AI hype and her work has been pivotal in sparking critical discussions about the direction of AI research and development.
In this conversation, we explore the issues of current AI systems with a particular focus on Emily’s view as a computational linguist. We also discuss Emily's recent research on the challenges of using AI in search engines and information retrieval systems, and her description of large language models as synthetic text extruding machines.
Let's dive into our conversation with Emily Bender.
----------------------
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds
We're excited to welcome to the podcast John Havens, a multifaceted thinker at the intersection of technology, ethics, and sustainability. John's journey has taken him from professional acting to becoming a thought leader in AI ethics and human wellbeing.
In his 2016 book, "Heartificial Intelligence: Embracing Our Humanity to Maximize Machines," John presents a thought-provoking examination of humanity's relationship with AI. He introduces the concept of "codifying our values" - our crucial need as a species to define and understand our own ethics before we entrust machines to make decisions for us.
Through a mix of fictional vignettes and real-world examples, the book illuminates the fundamental interplay between human values and machine intelligence, arguing that while AI can measure and improve wellbeing, it cannot automate it. John advocates for greater investment in understanding our own values and ethics to better navigate our relationship with increasingly sophisticated AI systems.
In this conversation, we dive into the key ideas from "Heartificial Intelligence" and their profound implications for the future of both human and artificial intelligence. We explore questions like:
Let's dive into our conversation with John Havens.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds
We’re excited to welcome to the podcast Leslie Valiant, a pioneering computer scientist and Turing Award winner renowned for his groundbreaking work in machine learning and computational learning theory. In his seminal 1984 paper, Leslie introduced the concept of Probably Approximately Correct or PAC learning, kick-starting a new era of research into what machines can learn.
Now, in his latest book, The Importance of Being Educable: A New Theory of Human Uniqueness, Leslie builds upon his previous work to present a thought-provoking examination of what truly sets human intelligence apart. He introduces the concept of "educability" - our unparalleled ability as a species to absorb, apply, and share knowledge.
Through an interplay of abstract learning algorithms and relatable examples, the book illuminates the fundamental differences between human and machine learning, arguing that while learning is computable, today's AI is still a far cry from human-level educability. Leslie advocates for greater investment in the science of learning and education to better understand and cultivate our species' unique intellectual gifts.
In this conversation, we dive deep into the key ideas from The Importance of Being Educable and their profound implications for the future of both human and artificial intelligence. We explore questions like:
Let’s dive into our conversation with Leslie Valiant.
If you enjoy our podcasts, please subscribe and leave a positive rating or comment. Sharing your positive feedback helps us reach more people and connect them with the world’s great minds.
Subscribe to get Artificiality delivered to your email
Learn about our book Make Better Decisions and buy it on Amazon
Thanks to Jonathan Coulton for our music
#artificiality #ai #artificialintelligence #generativeai #airesearch #complexity #intimacyeconomy #spaciality #consciousness #knowledge #mindforourminds