How AI Is Built
Nicolay Gerold
63 episodes
6 days ago
Real engineers. Real deployments. Zero hype. We interview the top engineers who actually put AI in production. Learn what the best engineers have figured out through years of experience. Hosted by Nicolay Gerold, CEO of Aisbach and CTO at Proxdeal and Multiply Content.
Technology
#054 Building Frankenstein Models with Model Merging and the Future of AI
How AI Is Built
1 hour 6 minutes 55 seconds
3 months ago

Nicolay here. Most AI conversations focus on training bigger models with more compute. This one explores the counterintuitive world where averaging the weights of different models yields better performance than expensive post-training.

Today I have the chance to talk to Maxime Labonne, who's a researcher at Liquid AI and the architect of some of the most popular open source models on Hugging Face.

He went from researching neural networks for cybersecurity to building "Frankenstein models" through techniques that shouldn't work but consistently do.

Key Insight: Model Merging as a Free Lunch

The core breakthrough is deceptively simple: take two fine-tuned models, average their weights layer by layer, and you often get better performance than either individual model. Maxime initially started writing an article to explain why this couldn't work, but his own experiments convinced him otherwise.

The magic lies in knowledge compression and regularization. When you train a model multiple times on similar data, each run creates slightly different weight configurations due to training noise. Averaging these weights creates a smoother optimization path that avoids local minima. You can literally run model merging on a CPU - no GPUs required.
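
To make "average their weights layer by layer" concrete, here is a minimal sketch of a linear merge of two checkpoints that share the same architecture (the checkpoint names and the 50/50 weighting are placeholders, not from the episode):

import torch
from transformers import AutoModelForCausalLM

# Two fine-tuned checkpoints of the same base architecture (hypothetical names).
model_a = AutoModelForCausalLM.from_pretrained("org/finetune-a")
model_b = AutoModelForCausalLM.from_pretrained("org/finetune-b")

state_a, state_b = model_a.state_dict(), model_b.state_dict()
merged = {}
with torch.no_grad():
    for name, tensor_a in state_a.items():
        # Average each parameter tensor; this runs fine on a CPU.
        merged[name] = 0.5 * tensor_a + 0.5 * state_b[name]

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")

MergeKit (linked under Tools below) automates this kind of merge and implements more sophisticated methods on top of it.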

In the podcast, we also touch on:

  • Obliteration: removing safety refusal mechanisms without retraining
  • Why synthetic data now comprises 90%+ of fine-tuning datasets
  • The evaluation crisis: why automated benchmarks miss real-world performance
  • Chain of thought compression techniques for reasoning models

💡 Core Concepts

  • Model Merging: Averaging weights across layers from multiple fine-tuned models to create improved performance without additional training
  • Obliteration: Training-free method to remove refusal directions from models by computing activation differences (a toy sketch of the arithmetic follows this list)
  • Linear Merging: The least opinionated merging technique that simply averages weights with optional scaling factors
  • Refusal Direction: The activation pattern that indicates when a model will output a safety refusal
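
Here is the toy sketch of the obliteration arithmetic referenced above, assuming activations have already been collected for refusal-triggering and for benign prompts (the tensors and dimensions below are random placeholders):

import torch

hidden_dim = 4096
harmful_acts = torch.randn(64, hidden_dim)   # activations on refusal-triggering prompts (placeholder)
harmless_acts = torch.randn(64, hidden_dim)  # activations on benign prompts (placeholder)

# Refusal direction = difference of mean activations, normalized to unit length.
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

def ablate(activation: torch.Tensor) -> torch.Tensor:
    # Remove the component of the activation along the refusal direction.
    return activation - (activation @ refusal_dir).unsqueeze(-1) * refusal_dir

In practice the activations would be captured from the model's residual stream, for example with TransformerLens hooks (linked under Tools below).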

📶 Connect with Maxime:

  • X / Twitter: https://x.com/maximelabonne
  • LinkedIn: https://www.linkedin.com/in/maxime-labonne/
  • Company: https://www.liquid.ai/

📶 Connect with Nicolay:

  • LinkedIn: https://www.linkedin.com/in/nicolay-gerold/
  • X / Twitter: https://x.com/nicolaygerold
  • Website: https://www.nicolaygerold.com/

⏱ Important Moments

  • Model Merging Discovery Process: [00:30] Maxime explains how he started writing an article to debunk model merging
  • Two Main Merging Use Cases: [11:04] Clear distinction between merging checkpoints versus combining different task-specific capabilities
  • Linear Merging as Best Practice: [21:00] Why simple weight averaging consistently outperforms more complex techniques
  • Layer Importance Hierarchy: [21:18] First and last layers have the most influence on model behavior
  • Obliteration Technique Explained: [36:07] How to compute and subtract refusal directions from model activations
  • Synthetic Data Dominance: [50:00] Modern fine-tuning uses 90%+ synthetic data

🛠 Tools & Tech Mentioned

  • MergeKit: https://github.com/cg123/mergekit
  • Transformer Lens: https://github.com/TransformerLensOrg/TransformerLens
  • Hugging Face Transformers: https://github.com/huggingface/transformers
  • PyTorch: https://pytorch.org/

📚 Recommended Resources

  • Maxime's Model Merging Articles: https://huggingface.co/blog/merge
  • Model Soups Paper: https://arxiv.org/abs/2203.05482
  • Will Brown's Rubric Engineering: https://x.com/willccbb/status/1883611121577517092

