LlamaCast
Shahriar Shariati
49 episodes
4 months ago
Daily podcast about the published articles in the LLM field.
Technology, News, Tech News, Science, Mathematics
Scaling Laws for Precision
LlamaCast
18 minutes
11 months ago
⚖️ Scaling Laws for Precision

This research paper investigates how numerical precision during training and inference affects the performance of large language models. The authors model precision's effect on the effective parameter count and propose scaling laws that predict the performance degradation caused by low-precision training and by post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision can be compute-optimal. Their unified scaling law accounts for both training-time and post-training effects and predicts loss across varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit may be suboptimal.

📎 Link to paper
🌐 Read their Tweet
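
To give a rough sense of the shape such a precision-aware scaling law can take, here is a minimal Python sketch. It assumes a Chinchilla-style loss with an effective-parameter-count term for training precision and a separate post-training quantization penalty; the functional form and every constant here are illustrative assumptions, not the paper's fitted values.

# Illustrative sketch (not the paper's fitted law): a Chinchilla-style loss
# where training precision discounts the "effective" parameter count, and
# post-training quantization adds a penalty that grows with the amount of
# training data. All constants are made-up placeholders.
import math

def effective_params(n_params: float, train_bits: float, gamma: float = 4.0) -> float:
    """Effective parameter count: lower training precision discounts capacity."""
    return n_params * (1.0 - math.exp(-train_bits / gamma))

def train_loss(n_params: float, n_tokens: float, train_bits: float) -> float:
    """Chinchilla-style loss L = A / N_eff^alpha + B / D^beta + E (placeholder constants)."""
    A, B, E, alpha, beta = 400.0, 1800.0, 1.7, 0.34, 0.28
    n_eff = effective_params(n_params, train_bits)
    return A / n_eff**alpha + B / n_tokens**beta + E

def post_quant_penalty(n_params: float, n_tokens: float, post_bits: float) -> float:
    """Penalty from post-training quantization: overtrained models (large D
    relative to N) are hit harder, and the penalty decays with more bits."""
    C, gd, gn, gp = 0.05, 0.5, 0.5, 1.0
    return C * (n_tokens**gd / n_params**gn) * math.exp(-post_bits / gp)

# Example: a 1B-parameter model trained on 100B tokens in 16-bit vs 8-bit,
# then quantized to 4-bit weights for inference.
for train_bits in (16, 8):
    loss = train_loss(1e9, 1e11, train_bits) + post_quant_penalty(1e9, 1e11, 4)
    print(f"train_bits={train_bits:>2}  predicted loss ~ {loss:.3f}")

In this toy form, lowering training precision shrinks effective capacity, while the quantization penalty grows with the data-to-parameter ratio, mirroring the finding that overtrained models degrade more when quantized after training.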