
Meta's upcoming Llama 3.1 models could outperform the current state-of-the-art closed-source LLM, OpenAI's GPT-4o.
OpenAI is planning to develop its own AI chips to optimize performance and potentially supercharge its progress towards AGI.
Apple's SlowFast-LLaVA is a new training-free video large language model that uses a two-stream design: a slow pathway keeps a few frames at full spatial detail while a fast pathway covers many frames with aggressively pooled tokens, capturing both detailed spatial semantics and long-range temporal context without exceeding the token budget of commonly used LLMs.
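For intuition, here is a minimal Python sketch of that two-stream idea: the slow path samples few frames at full resolution, the fast path samples many frames and average-pools their visual tokens. The frame counts, pooling factor, and tensor shapes are illustrative assumptions, not the paper's exact configuration.

import numpy as np

def pool_tokens(frame_tokens: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool a (num_tokens, dim) square grid of visual tokens by `factor`."""
    n, d = frame_tokens.shape
    side = int(np.sqrt(n))                      # assume a square patch grid
    grid = frame_tokens.reshape(side, side, d)
    pooled = grid.reshape(side // factor, factor,
                          side // factor, factor, d).mean(axis=(1, 3))
    return pooled.reshape(-1, d)

def slowfast_tokens(video: np.ndarray, n_slow=8, n_fast=64, fast_pool=4):
    """video: (num_frames, num_tokens, dim) of per-frame visual features."""
    t = video.shape[0]
    slow_idx = np.linspace(0, t - 1, n_slow).astype(int)   # sparse, detailed
    fast_idx = np.linspace(0, t - 1, n_fast).astype(int)   # dense, coarse
    slow = [video[i] for i in slow_idx]                    # full resolution
    fast = [pool_tokens(video[i], fast_pool) for i in fast_idx]
    return np.concatenate(slow + fast, axis=0)             # fed to the LLM

video = np.random.randn(256, 24 * 24, 1024)  # 256 frames, 576 tokens each
tokens = slowfast_tokens(video)
print(tokens.shape)  # 6912 tokens instead of 256 * 576 = 147456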
Google's Conditioned Language Policy (CLP) is a general framework that builds on techniques from multi-task training and parameter-efficient finetuning to produce steerable models that can trade off multiple conflicting objectives at inference time.
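And a toy sketch of the CLP idea: sample a weighting z over the reward objectives during finetuning, condition the policy on z, and maximize the z-weighted reward, so z can be set explicitly at inference time to steer the trade-off. The tiny policy, toy rewards, and REINFORCE-style update below are illustrative assumptions, not Google's implementation.

import torch
import torch.nn as nn

VOCAB, DIM, N_REWARDS = 100, 32, 2

class ConditionedPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.cond = nn.Linear(N_REWARDS, DIM)   # inject reward weights z
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, z):
        h = self.embed(tokens).mean(dim=1) + self.cond(z)
        return torch.log_softmax(self.head(h), dim=-1)

def rewards(action):  # two toy, conflicting objectives
    return torch.stack([(action % 2 == 0).float(),   # "helpfulness"
                        (action < 50).float()], -1)  # "brevity"

policy = ConditionedPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
prompt = torch.randint(0, VOCAB, (16, 8))            # batch of prompts
for _ in range(200):
    # sample a random weighting over objectives and condition on it
    z = torch.distributions.Dirichlet(torch.ones(N_REWARDS)).sample((16,))
    logp = policy(prompt, z)
    action = torch.multinomial(logp.exp(), 1).squeeze(1)
    r = (rewards(action) * z).sum(-1)                # z-weighted reward
    loss = -(r.detach() * logp[torch.arange(16), action]).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference, set z explicitly to steer behavior,
# e.g. put all weight on the first objective:
z_eval = torch.tensor([[1.0, 0.0]])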
Contact: sergi@earkind.com
Timestamps:
00:34 Introduction
01:28 Llama 405B Performance Leaked
03:01 OpenAI Wants Its Own AI Chips
04:25 Towards more cooperative AI safety strategies
06:01 Fake sponsor
07:35 SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models
09:17 AssistantBench: Can Web Agents Solve Realistic and Time-Consuming Tasks?
10:56 Conditioned Language Policy: A General Framework for Steerable Multi-Objective Finetuning
12:46 Outro