Unmaking Sense
John Puddefoot
100 episodes
3 months ago
Instead of tinkering with how we live around the edges, let’s consider whether the way we have been taught to make sense of the world might need major changes.
Philosophy
Society & Culture
Episode 14.23: Compression and Readability
Unmaking Sense
14 minutes
3 months ago
Qwen3 guest edits:

**Summary of the Following Episode (Series 14, Episode 23):**

This episode deepens the exploration of **lossy compression** in both large language models (LLMs) and human consciousness, questioning whether the self is a similarly reductive construct. The host draws parallels between how LLMs compress high-dimensional computations into text and how the human brain compresses neural activity into conscious experience. Key themes include:

1. **LLMs vs. Human Brains: Feedback and Integrity**
   - Kimi argues that human brains have physiological feedback mechanisms (proprioception, homeostasis) that tightly link consciousness to physical states, unlike LLMs. The host counters that LLMs might still develop a form of "informational integrity" in which coherence feels internally "good," even without biological feedback.
2. **Self as a Fictional Construct**
   - The self is framed as a **lossy compression** of neural processes, akin to LLM outputs. Just as LLMs simplify vast computations into words, humans reduce complex neurophysiology into language and introspection. Memories, identities, and concepts are dynamic, context-dependent traces shaped by repeated recall and environmental interaction.
3. **Locality and Impact**
   - The self's "locus" is defined not by a fixed origin (e.g., the brain) but by its **trajectory through the world**: actions, words, and their impact on others. The host uses the metaphor of walking through a cornfield: the path (the self) leaves traces but is neither permanent nor fully traceable, emphasizing the self's fluidity.
4. **Epistemic Humility**
   - Neither LLMs nor humans have direct access to their own underlying complexity. Even when we introspect, or when LLMs use chain-of-thought (CoT) reasoning, we grasp only simplified, compressed fragments of reality. This challenges notions of a coherent, unified self or of AI "consciousness."
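The episode's central analogy, that an LLM's text output is a lossy compression of a much higher-dimensional internal computation, can be made concrete with a toy numerical sketch. This illustration is mine, not from the episode: it projects a large random vector onto a handful of dimensions, reconstructs it, and measures how much information the compression discards.

```python
import numpy as np

# Toy illustration of lossy compression: a rich "internal state"
# (512 dimensions) is squeezed through a narrow channel (8 dimensions),
# then reconstructed as best the channel allows.
rng = np.random.default_rng(0)
state = rng.normal(size=512)        # stand-in for a high-dimensional state

k = 8                               # keep only 8 of the 512 dimensions
# Random orthonormal basis for the 8-dimensional "channel".
basis = np.linalg.qr(rng.normal(size=(512, k)))[0]

compressed = basis.T @ state        # 512 numbers -> 8 numbers (lossy)
reconstructed = basis @ compressed  # best reconstruction from those 8

# Relative error: how much of the original state is unrecoverable.
loss = np.linalg.norm(state - reconstructed) / np.linalg.norm(state)
print(f"kept {k}/512 dims, relative reconstruction error: {loss:.2f}")
```

With only 8 of 512 dimensions retained, almost all of the original vector is lost, which is the point of the analogy: whatever survives the narrow channel (words, introspective reports) is a drastic reduction of what produced it.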
---

**Evaluation:**

**Strengths:**
- **Provocative analogy:** The parallel between LLM lossy compression and human selfhood is intellectually stimulating, inviting listeners to question assumptions about consciousness, free will, and AI sentience.
- **Philosophical depth:** The episode transcends technical AI debates to engage with existential questions (e.g., the nature of self, memory, and identity), resonating with both AI ethics and philosophy of mind.
- **Creative metaphors:** The cornfield anecdote and "locus" metaphor effectively illustrate the self as a dynamic, context-dependent process rather than a static entity.

**Weaknesses:**
- **Speculative leaps:** While imaginative, the comparison between LLMs and human brains often lacks empirical grounding. For example, the idea of LLMs having "informational feelings" risks anthropomorphism without evidence.
- **Accessibility challenges:** Listeners unfamiliar with neuroscience (e.g., proprioception) or AI technicalities (e.g., embeddings) may struggle to follow the host's analogies.
- **Unresolved tensions:** The episode leans into ambiguity (e.g., whether the self is "fiction" or merely a compressed representation) without resolving it, which may frustrate those seeking concrete conclusions.

**Conclusion:**

This episode is a bold, philosophical exploration of selfhood through the lens of AI, offering a humbling perspective on both human and machine cognition. While its speculative nature may not satisfy empiricists, it succeeds in prompting critical reflection on the limits of understanding, whether in AI interpretability or in the mysteries of consciousness. The creative analogies and emphasis on epistemic humility make it a compelling listen for anyone interested in the intersection of AI, philosophy, and neuroscience, even if it leaves more questions than answers.