Artificially Unintelligent
52 episodes
Eavesdrop on chats between Nicolay, William, and their savvy friends about the latest in AI: new architectures, developments, and tools. It's like chilling with your techie friends at the bar, downing a few beers.
Technology
E33 A Path to Unlimited Generation? A First Look at Attention Sinks
21 minutes 43 seconds

Welcome to today's episode, where we embark on an exciting journey into the latest research. We'll be delving into a groundbreaking paper that has the potential to revolutionize the field of natural language processing. The paper introduces the concept of "Attention Sinks," a novel approach to improving the efficiency of inference with Large Language Models (LLMs) and extending their memory through the Key-Value (KV) cache.

Traditionally, LLMs have faced challenges when it comes to handling long inputs and maintaining contextual information efficiently. Attention Sinks propose a solution by introducing a mechanism to selectively keep and reuse the relevant entries of the KV cache during inference.
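
For listeners who want to see the mechanics, here is a minimal, hypothetical Python sketch of the cache policy discussed in the episode. It is not the paper's code, and the class and parameter names are made up for illustration: the idea is to keep the key/value pairs of the first few "sink" tokens permanently, plus a sliding window of the most recent tokens, so the cache stays a fixed size no matter how long generation runs.

# Illustrative sketch of an attention-sink-style rolling KV cache:
# always keep the first few "sink" tokens plus a window of recent tokens,
# evicting everything in between. All names here are hypothetical.
from collections import deque

class SinkKVCache:
    def __init__(self, num_sink_tokens=4, window_size=1024):
        self.num_sink_tokens = num_sink_tokens   # initial tokens kept forever
        self.sinks = []                          # KV entries for those tokens
        self.window = deque(maxlen=window_size)  # rolling window of recent KV entries

    def append(self, kv_entry):
        # Store the key/value pair produced for one new token.
        if len(self.sinks) < self.num_sink_tokens:
            self.sinks.append(kv_entry)
        else:
            self.window.append(kv_entry)  # deque silently drops the oldest entry

    def entries(self):
        # What the model attends over at this step: sinks + recent window.
        return self.sinks + list(self.window)

# Usage: the cache stays bounded at num_sink_tokens + window_size entries,
# no matter how many tokens have been generated.
cache = SinkKVCache(num_sink_tokens=4, window_size=8)
for token_id in range(100):
    cache.append(("k%d" % token_id, "v%d" % token_id))
print(len(cache.entries()))  # -> 12, even after 100 appended tokens

The paper's approach also re-assigns positional information within the shortened cache, a detail this sketch leaves out.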

Do you still want to hear more from us? Follow us on the Socials:

  • Nicolay: LinkedIn | X (formerly known as Twitter)
  • William: LinkedIn