Artificially Unintelligent
52 episodes
1 week ago
Eavesdrop on chats between Nicolay, William, and their savvy friends about the latest in AI: new architectures, developments, and tools. It's like chilling with your techie friends at the bar, downing a few beers.
Technology
E36 Untangling the Decision Making Process of Neural Networks - A Paper Deep Dive of Zoom In: An Introduction to Circuits
26 minutes 31 seconds
2 years ago

Mechanistic interpretability refers to understanding a model by looking at how its internal components function and interact with each other. It's about breaking down the model into its smallest functional parts and explaining how these parts come together to produce the model's outputs.

Neural networks are complex, which makes it hard to make broad, factual statements about their behavior. However, focusing on small, specific parts of a network, known as "circuits", might offer a way to investigate them rigorously. These circuits can be edited and analyzed in a falsifiable manner, making them a potential foundation for a rigorous science of interpretability.
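The circuit-editing idea above can be sketched with a toy experiment: ablate one hidden unit in a tiny network and measure how the output shifts. This is an illustrative simplification, not the methodology of the Zoom In paper; the network, weights, and input here are made up for the sketch.

```python
import numpy as np

# Toy 2-layer network: 3 inputs -> 4 hidden units -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))

def forward(x, w1, w2):
    h = np.maximum(0, w1 @ x)  # ReLU hidden activations
    return w2 @ h

x = np.array([1.0, -0.5, 0.3])
baseline = forward(x, W1, W2)

# "Edit the circuit": ablate hidden unit 2 by zeroing its incoming weights,
# then re-run the forward pass. A large output change is evidence that the
# unit participates in the computation for this input; a negligible change
# falsifies that hypothesis.
W1_ablated = W1.copy()
W1_ablated[2, :] = 0.0
ablated = forward(x, W1_ablated, W2)

effect = np.abs(baseline - ablated)
print("output change per logit:", effect)
```

The same ablate-and-compare loop scales up to real models, where the edited components are individual neurons or weights hypothesized to form a circuit.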

Zoom In approaches neural networks from a biological perspective, examining features and circuits to untangle their behavior.

Want to hear more from us? Follow us on the socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn