Austrian Artificial Intelligence Podcast
Manuel Pasieka
72 episodes
1 day ago
Guest Interviews, discussing the possibilities and potential of AI in Austria. Question or Suggestions, write to austrianaipodcast@pm.me
Technology
56. Eldar Kurtic - Efficient Inference through sparsity and quantization - Part 1/2
51 minutes 59 seconds
1 year ago

Hello and welcome back to the AAIP


If you are an active machine learning engineer, or simply interested in Large Language Models, you have surely seen the discussions around quantized models and the many new frameworks that have appeared recently, achieving astonishing LLM inference performance on consumer devices.


If you are curious how modern Large Language Models, with their billions of parameters, can run on a simple laptop or even an embedded device, then this episode is for you.


Today I am talking to Eldar Kurtic, a researcher in the Alistarh group at ISTA (Institute of Science and Technology Austria) in Lower Austria and a senior research engineer at the American startup Neural Magic.


Eldar's research focuses on optimizing the inference of deep neural networks. On the show he explains in depth how sparsity and quantization work, and how they can be applied to accelerate the inference of large models such as LLMs on devices with limited resources.


Because of the length of the interview, I decided to split it into two parts.


This one, the first part, focuses on sparsity, which reduces model size and enables faster inference by reducing the amount of memory and compute needed to store and run models.
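To make the idea concrete, here is a minimal sketch of unstructured magnitude pruning, one common way to introduce sparsity: the smallest-magnitude weights are set to zero until a target fraction of the matrix is empty. This is an illustrative toy in NumPy, not Eldar's or Neural Magic's actual method.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude entries so that roughly
    `sparsity` fraction of the weights become zero (unstructured)."""
    k = int(weights.size * sparsity)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
pruned = magnitude_prune(w, sparsity=0.95)
# roughly 95% of entries are now zero; the matrix can be stored
# in a sparse format and, with suitable kernels, run faster
```

Note that the zeros only save memory and compute if the runtime actually stores the matrix in a sparse format and uses sparse kernels, which is exactly the difficulty discussed later in the episode.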

The second part focuses on quantization as a means of finding lower-precision numeric representations of models that require less memory to store and process, while retaining accuracy.
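As a quick illustration of what quantization means in practice, here is a sketch of symmetric per-tensor int8 quantization, assuming the simplest possible scheme (one float scale for the whole tensor); real systems typically use per-channel or per-group scales.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: map float weights to
    int8 values in [-127, 127] plus a single float scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128)).astype(np.float32)
q, s = quantize_int8(w)
# int8 storage is 4x smaller than float32; the rounding error
# per weight is bounded by half the scale factor
err = np.abs(w - dequantize(q, s)).max()
```

The 4x memory saving is immediate; the accuracy question the episode covers is how far precision can be lowered before the accumulated rounding error hurts model quality.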


In this first part on sparsity, Eldar explains fundamental concepts such as structured and unstructured sparsity: how and why they work, and why performant inference with unstructured sparsity is currently achievable mainly on CPUs and far less so on GPUs.
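The distinction between the two kinds of sparsity can be shown in a few lines. In this hedged sketch, unstructured sparsity zeroes individual weights anywhere in the matrix, while structured sparsity removes whole rows (e.g. entire neurons), which shrinks the dense matrix and is easy for any hardware to exploit.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))

# Unstructured: zero individual weights wherever they are small.
# The irregular pattern is hard to exploit on GPUs, easier on CPUs
# with dedicated sparse kernels.
unstructured = np.where(np.abs(w) < np.quantile(np.abs(w), 0.5), 0.0, w)

# Structured: drop entire rows with the smallest norms. The result
# is a smaller *dense* matrix, so standard kernels get faster directly.
row_norms = np.linalg.norm(w, axis=1)
structured = w[row_norms >= np.median(row_norms)]

print(unstructured.shape, structured.shape)  # (8, 8) (4, 8)
```

Unstructured pruning typically preserves accuracy at much higher sparsity levels, but as the shapes above show, only the structured variant changes the matrix dimensions that dense hardware kernels care about.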


We discuss how to reach remarkable levels of up to 95% unstructured sparsity while retaining model accuracy, but also why it is difficult to leverage this, in quotes, "reduction in model size" to actually accelerate model inference.


Enjoy.


## AAIP Community

Join our Discord server and ask guests directly or discuss related topics with the community.

https://discord.gg/5Pj446VKNU


### References

Eldar Kurtic: https://www.linkedin.com/in/eldar-kurti%C4%87-77963b160/

Neural Magic: https://neuralmagic.com/

IST Austria Alistarh Group: https://ist.ac.at/en/research/alistarh-group/
