Podcast artwork: https://is1-ssl.mzstatic.com/image/thumb/Podcasts211/v4/1f/36/7f/1f367f09-e676-41a7-526b-b6240cc74868/mza_11866948750715891878.png/600x600bb.jpg
Training Data
Sequoia Capital
69 episodes
1 week ago
Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society. The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.
Technology
Business
Episode artwork: https://megaphone.imgix.net/podcasts/cf8b451e-5b67-11f0-8e44-2339eea2f29f/image/cdfabfe9a60345b5248204b077f9693e.png?ixlib=rails-4.3.1&max-w=3000&max-h=3000&fit=crop&auto=format,compress
Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability
Training Data
47 minutes
3 months ago
Eric Ho is building Goodfire to solve one of AI's most critical challenges: understanding what's actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders, successful model editing demonstrations and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society.

Hosted by Sonya Huang and Roelof Botha, Sequoia Capital

Mentioned in this episode:

Mech interp: Mechanistic interpretability, list of important papers here
Phineas Gage: 19th century railway engineer who lost most of his brain's left frontal lobe in an accident. Became a famous case study in neuroscience.
Human Genome Project: Effort from 1990-2003 to generate the first sequence of the human genome, which accelerated the study of human biology
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper from OpenAI in 2020
Superposition: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons)
Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research
Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity
Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek's reasoning model R1
Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs
Interpreting Evo 2: Arc Institute's Next-Generation Genomic Foundation Model (see episode with Arc co-founder Patrick Hsu)
Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM's visual output in real time (paper here)
Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning
Feature steering: The ability to change the style of LLM output by up- or down-weighting features (e.g. talking like a pirate vs factual information about the Andromeda Galaxy)
Weight-based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features
The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei
On the Biology of a Large Language Model: Goodfire collaboration with Anthropic
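For context on the sparse autoencoder approach referenced above (see Towards Monosemanticity), here is a minimal illustrative sketch in PyTorch. It is not Goodfire's or Anthropic's actual implementation: it only shows the basic idea of training an overcomplete autoencoder with an L1 sparsity penalty to reconstruct a model's hidden activations, so that individual learned features tend to be more interpretable than raw neurons. The layer sizes, the l1_coeff value and the random stand-in activations are assumptions made purely for illustration.

# Minimal sketch (illustrative only): a sparse autoencoder (SAE) of the kind
# described in "Towards Monosemanticity", trained to reconstruct model
# activations through an overcomplete, sparsity-penalized hidden layer.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Overcomplete dictionary: d_features >> d_model, so features can
        # "un-mix" concepts stored in superposition across neurons.
        self.encoder = nn.Linear(d_model, d_features)
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse feature activations
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(acts, features, reconstruction, l1_coeff=1e-3):
    # Reconstruction error plus an L1 penalty that pushes most features to zero.
    mse = (reconstruction - acts).pow(2).mean()
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

if __name__ == "__main__":
    # Toy usage: real SAEs are trained on activations captured from an LLM;
    # random data stands in here only to show the training step.
    d_model, d_features = 512, 4096        # illustrative sizes, not from the episode
    sae = SparseAutoencoder(d_model, d_features)
    opt = torch.optim.Adam(sae.parameters(), lr=1e-4)
    acts = torch.randn(1024, d_model)      # stand-in for captured activations
    for _ in range(10):
        features, recon = sae(acts)
        loss = sae_loss(acts, features, recon)
        opt.zero_grad()
        loss.backward()
        opt.step()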