Austrian Artificial Intelligence Podcast
Manuel Pasieka
Guest interviews discussing the possibilities and potential of AI in Austria. Questions or suggestions? Write to austrianaipodcast@pm.me
# 57. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 2/2

Duration: 46 minutes 38 seconds

Hello and welcome back to the AAIP!


This is the second part of my interview with Eldar Kurtic about his research on how to optimize inference of deep neural networks.


In the first part of the interview, we focused on sparsity and how high unstructured sparsity can be achieved without losing model accuracy on CPUs and, in part, on GPUs.


In this second part of the interview, we focus on quantization. Quantization reduces model size by representing the model in lower-precision numeric formats while retaining model performance. For example, a model that has been trained in a standard 32-bit floating-point representation can be converted, during post-training quantization, to a representation that uses only 8 bits, reducing the model size to one quarter.
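
As a rough illustration of the idea (a minimal sketch, not the specific method discussed in the episode), symmetric "absmax" post-training quantization of a weight tensor to int8 can be written in a few lines of NumPy:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric absmax quantization: map the largest weight magnitude to 127."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)  # a stand-in fp32 weight matrix
q, scale = quantize_int8(w)
print(w.nbytes / q.nbytes)  # 4.0 -> int8 weights take one quarter of the space
```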


We discuss how current quantization methods can be applied to quantize model weights down to 4 bits while retaining most of the model's performance, and why doing the same with the model's activations is much trickier.
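
Weight-only 4-bit schemes typically keep one scale per small group of weights rather than per tensor, which limits how far a single outlier can stretch the quantization range. A naive round-to-nearest sketch of that idea (production methods are more sophisticated than this, e.g. minimizing layer-wise reconstruction error):

```python
import numpy as np

def quantize_int4_groupwise(w: np.ndarray, group_size: int = 128):
    """Round-to-nearest 4-bit quantization with one fp scale per group of weights."""
    flat = w.reshape(-1, group_size)  # assumes the weight count divides evenly
    scale = np.abs(flat).max(axis=1, keepdims=True) / 7.0  # symmetric int4 range [-7, 7]
    q = np.clip(np.round(flat / scale), -7, 7).astype(np.int8)  # 4-bit values, stored in int8 here
    return q, scale

def dequantize_int4(q: np.ndarray, scale: np.ndarray, shape) -> np.ndarray:
    return (q.astype(np.float32) * scale).reshape(shape)

w = np.random.randn(4096, 4096).astype(np.float32)
q, s = quantize_int4_groupwise(w)
w_hat = dequantize_int4(q, s, w.shape)
print(np.abs(w - w_hat).max())  # per-group error is bounded by half a quantization step
```

Activations resist this treatment because they are produced at runtime and can contain large outliers, so their value range is not known ahead of time the way it is for fixed weights.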


Eldar explains how current GPU architectures create two different types of bottlenecks: memory-bound and compute-bound scenarios. In memory-bound situations, most of the inference time is spent transferring model weights from memory rather than on arithmetic. It is exactly in these situations that quantization has its biggest impact, because reducing the model's size directly accelerates inference.
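
A back-of-the-envelope calculation shows why single-token decoding is usually memory bound (the hardware numbers below are illustrative assumptions, roughly in the range of a current data-center GPU):

```python
# Single-token decode step at batch size 1 for a hypothetical 7B-parameter model.
# Assumed hardware: ~2 TB/s memory bandwidth, ~300 TFLOP/s of fp16 compute.
params = 7e9
flops = 2 * params            # a matrix-vector product does ~2 FLOPs per weight
bandwidth = 2e12              # bytes/s (assumption)
compute = 300e12              # FLOP/s (assumption)

t_compute = flops / compute
for fmt, bytes_per_weight in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    t_mem = params * bytes_per_weight / bandwidth
    print(f"{fmt}: weight transfer {t_mem*1e3:.2f} ms vs compute {t_compute*1e3:.3f} ms")
```

Under these assumptions, weight transfer dominates compute by orders of magnitude, so halving or quartering the weight bytes translates almost directly into faster decoding. In compute-bound regimes, such as large-batch serving, quantization helps much less.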


Enjoy.


## AAIP Community

Join our Discord server to ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU


### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/
