Podcast artwork: https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/56/e8/4a/56e84a0e-66ba-f1fb-1a11-7a58f949ec83/mza_17573251377408707633.jpg/600x600bb.jpg
Annotote TLDR
Anthony Bardaro · 17 episodes · 2 days ago
This is an AI-generated exploration about whatever subjects pique my interest at any given time. Episodes usually straddle tech, media, finance, business, entrepreneurship, history, philosophy, and psychology – for which I feed the machine research and analysis from various sources to produce this synthesis. If business and writing are the outlets of my intellectual curiosity, then consumption and reading are the inputs. As my audio professor and scrapbook, this show falls somewhere in between those ends, so I might as well share it with you!
Category: Technology
All content for Annotote TLDR is the property of Anthony Bardaro and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episode artwork: https://d3t3ozftmdmh3i.cloudfront.net/staging/podcast_uploaded_episode/42253674/42253674-1729547132793-376fc9f69aa33.jpg
AI accuracy: New models and progress on hallucinations
Annotote TLDR · 14 minutes 17 seconds · 1 year ago
What's the latest with artificial intelligence and models' accuracy? Three objectives:

- Compare and contrast traditional autoregressive LLMs (Gemini/ChatGPT/Claude/LLaMA) vs non-autoregressive AI (NotebookLM) vs chain-of-thought reasoning models (Strawberry/o1), including the benefits, detriments, and tradeoffs of each (a toy sketch of this decoding difference follows the source list below)
- Error rates for NotebookLM vs traditional LLMs vs CoT reasoning models, focusing on the accuracy benefits of the smaller corpus of curated source materials that users feed NotebookLM when prompting (vs an LLM trained on, and inferencing from, the entire web)
- Which model (or combination of models) will be the future standard?

See also: 🧵 https://x.com/AnthPB/status/1848186962856865904

This podcast is AI-generated with NotebookLM, using the following sources, research, and analysis:

- An Investigation of Language Model Interpretability via Sentence Editing (OSU, Stevens, 2021.04)
- Are Auto-Regressive Large Language Models Here to Stay? (Medium, Bettencourt, 2023.12.28)
- Attention Is All You Need (Google Brain, Vaswani/Shazeer/Parmar/Uszkoreit/Jones/Gomez/Kaiser/Polosukhin, 2017.06.12)
- BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension (Facebook AI, Lewis/Liu/Goyal/Ghazvininejad/Mohamed/Levy/Stoyanov/Zettlemoyer, 2019.10.29)
- Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs (Zhang/Du/Pang/Liu/Gao/Lin, 2024.06.13)
- Contra LeCun on "Autoregressive LLMs are doomed" (LessWrong, rotatingpaguro, 2023.04.10)
- Do large language models need sensory grounding for meaning and understanding? (LeCun, 2023.03.24)
- Experimenting with Power Divergences for Language Modeling (Labeau/Cohen, 2019)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer (Raffel/Shazeer/Roberts/Lee/Narang/Matena/Zhou/Li/Liu, 2023.09.19)
- Improving Non-Autoregressive Translation Models Without Distillation (Huang/Perez/Volkovs, 2022.01.28)
- Non-Autoregressive Neural Machine Translation (Gu/Bradbury/Xiong/Li/Socher, 2017.11.27)
- On the Learning of Non-Autoregressive Transformers (Huang/Tao/Zhou/Li/Huang, 2022.06.13)
- Towards Better Chain-of-Thought Prompting Strategies: A Survey (Yu/He/Wu/Dai/Chen, 2023.10.08)
- XLNet: Generalized Autoregressive Pretraining for Language Understanding (Yang/Dai/Yang/Carbonell/Salakhutdinov/Le, 2020.01.22)

Not investment advice; do your own due diligence!

Tags: tech, technology, machine learning, ML
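For context on the first objective, here is a minimal, hypothetical Python sketch of the decoding difference between autoregressive and non-autoregressive models. It is not from the episode or its sources; the toy_score "model" and all other names are invented for illustration. An autoregressive decoder emits one token at a time, each conditioned on its own previous output (so early errors can compound), while a non-autoregressive decoder fills all positions in parallel from the source alone.

```python
# Toy illustration only: contrast the decoding order of an autoregressive
# decoder with a non-autoregressive one. "toy_score" is a deterministic
# stand-in for a real model, which would return a probability distribution.

VOCAB = ["the", "model", "guessed", "an", "answer", "<eos>"]

def toy_score(source: str, prefix: list[str], position: int) -> str:
    """Hypothetical stand-in for a trained model's next-token choice."""
    return VOCAB[(len(source) + len(prefix) + position) % len(VOCAB)]

def autoregressive_decode(source: str, max_len: int = 6) -> list[str]:
    # One token per step; each choice conditions on everything generated so
    # far, which is also how early mistakes can snowball into hallucinations.
    out: list[str] = []
    for _ in range(max_len):
        token = toy_score(source, out, position=len(out))
        if token == "<eos>":
            break
        out.append(token)
    return out

def non_autoregressive_decode(source: str, length: int = 5) -> list[str]:
    # All positions are predicted in parallel, conditioned only on the source;
    # faster per sequence, but positions cannot see one another's choices.
    return [toy_score(source, [], position=i) for i in range(length)]

if __name__ == "__main__":
    prompt = "What's the latest on hallucinations?"
    print("autoregressive:    ", autoregressive_decode(prompt))
    print("non-autoregressive:", non_autoregressive_decode(prompt))
```

Chain-of-thought reasoning models such as o1 still decode autoregressively; what sets them apart is generating intermediate reasoning tokens before the final answer, while NotebookLM's accuracy edge, as the second objective notes, comes from conditioning on a small, user-curated corpus rather than the open web.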