Deep Dive - Frontier AI with Dr. Jerry A. Smith
Dr. Jerry A. Smith
65 episodes
1 week ago
In-Depth Explorations of Neuroscience-Inspired Architectures Revolutionizing AI.
Technology
Tech News
Flat Facts, Curved Beliefs: A Geometric Hypothesis for Transformer Cognition
Deep Dive - Frontier AI with Dr. Jerry A. Smith
22 minutes 19 seconds
2 months ago
Medium article: https://medium.com/@jsmith0475/flat-facts-curved-beliefs-a-geometric-hypothesis-for-transformer-cognition-5ad6f850ebd5

The article, by Dr. Jerry A. Smith, proposes a geometric hypothesis for transformer cognition: beliefs may operate in a curved, hyperbolic mathematical space, while factual information likely resides in a flatter, Euclidean one. The theory attempts to explain why opposing concepts, like "love" and "hate," appear artificially close in traditional, flattened visualizations of a transformer's internal representations. The author suggests that different attention heads within transformers may specialize in different geometries, with some handling stable facts in Euclidean space and others managing nuanced beliefs in hyperbolic space, which naturally accommodates hierarchies and divergent ideas. The article outlines potential experiments to test this hypothesis, such as measuring geodesic distances between beliefs in a hyperbolic model and analyzing the "tree-like" quality of attention-head graphs. Ultimately, this perspective implies that transformers have independently discovered the need for varied geometries to represent the full complexity of meaning, moving beyond simply adding Euclidean dimensions to model human-like understanding.
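The distance experiment described above is easy to prototype. Below is a minimal sketch, assuming a Poincaré-ball model of hyperbolic space and NumPy; the two 2-D "belief" vectors are hypothetical illustrations, not embeddings or code from the article. It shows how two points that look close in Euclidean terms can be far apart along a hyperbolic geodesic.

```python
import numpy as np

def euclidean_distance(u, v):
    # Straight-line distance in flat (Euclidean) space.
    return float(np.linalg.norm(u - v))

def poincare_distance(u, v, eps=1e-9):
    # Geodesic distance in the Poincare-ball model of hyperbolic space.
    # Points must lie strictly inside the unit ball (norm < 1).
    sq_u = float(np.dot(u, u))
    sq_v = float(np.dot(v, v))
    sq_diff = float(np.dot(u - v, u - v))
    # The arccosh argument grows rapidly as points approach the boundary,
    # which is what gives hyperbolic space room for hierarchies to spread out.
    arg = 1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v) + eps)
    return float(np.arccosh(arg))

# Hypothetical 2-D "belief" embeddings placed near the boundary of the ball.
love = np.array([0.70, 0.69])
hate = np.array([0.70, -0.69])  # mirror image across the x-axis

print("Euclidean distance :", euclidean_distance(love, hate))  # ~1.38, looks "close"
print("Hyperbolic distance:", poincare_distance(love, hate))   # ~8.8, far apart on the geodesic
```

The "tree-like" test mentioned in the summary would go a step further, for example by estimating the Gromov δ-hyperbolicity of attention-head graphs; that part is not sketched here.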