muckrAIkers
Jacob Haimes and Igor Krawczuk
18 episodes
3 weeks ago
Join us as we dig a tiny bit deeper into the hype surrounding "AI" press releases, research papers, and more. Each episode, we'll highlight ongoing research and investigations, providing some much needed contextualization, constructive critique, and even a smidge of occasional good will teasing to the conversation, trying to find the meaning under all of this muck.
Technology, Science, Mathematics
OpenAI's o1 System Card, Literally Migraine Inducing
1 hour 16 minutes
10 months ago

Model cards were introduced to increase transparency and understanding of LLMs, but the idea has been perverted into a marketing gimmick, epitomized by OpenAI's o1 system card. To demonstrate the adversarial stance we believe is necessary to draw meaning from these press-releases-in-disguise, we conduct a close read of the system card. Be warned: there's a lot of muck in this one.

Note: All figures/tables discussed in the podcast can be found on the podcast website at https://kairos.fm/muckraikers/e009/


  • (00:00) - Recorded 2024.12.08
  • (00:54) - Actual intro
  • (03:00) - System cards vs. academic papers
  • (05:36) - Starting off sus
  • (08:28) - o1.continued
  • (12:23) - Rant #1: figure 1
  • (18:27) - A diamond in the rough
  • (19:41) - Hiding copyright violations
  • (21:29) - Rant #2: Jacob on "hallucinations"
  • (25:55) - More ranting and "hallucination" rate comparison
  • (31:54) - Fairness, bias, and bad science comms
  • (35:41) - System, dev, and user prompt jailbreaking
  • (39:28) - Chain-of-thought and Rao-Blackwellization
  • (44:43) - "Red-teaming"
  • (49:00) - Apollo's bit
  • (51:28) - METR's bit
  • (59:51) - Pass@???
  • (01:04:45) - SWE Verified
  • (01:05:44) - Appendix bias metrics
  • (01:10:17) - The muck and the meaning


Links
  • o1 system card
  • OpenAI press release collection - 12 Days of OpenAI


Additional o1 Coverage

  • NIST + AISI report - US AISI and UK AISI Joint Pre-Deployment Test
  • Apollo Research's paper - Frontier Models are Capable of In-context Scheming
  • VentureBeat article - OpenAI launches full o1 model with image uploads and analysis, debuts ChatGPT Pro
  • The Atlantic article - The GPT Era Is Already Ending


On Data Labelers

  • 60 Minutes article + video - Labelers training AI say they're overworked, underpaid and exploited by big American tech companies
  • Reflections article - The hidden health dangers of data labeling in AI development
  • Privacy International article - Humans in the AI loop: the data labelers behind some of the most powerful LLMs' training datasets


Chain-of-Thought Papers Cited

  • Paper - Measuring Faithfulness in Chain-of-Thought Reasoning
  • Paper - Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting
  • Paper - On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
  • Paper - Faithfulness vs. Plausibility: On the (Un)Reliability of Explanations from Large Language Models


Other Mentioned/Relevant Sources

  • Andy Jones blogpost - Rao-Blackwellization
  • Paper - Training on the Test Task Confounds Evaluation and Emergence
  • Paper - Best-of-N Jailbreaking
  • Research landing page - SWE Bench
  • Code Competition - Konwinski Prize
  • Lakera game - Gandalf
  • Kate Crawford's Atlas of AI
  • BlueDot Impact's course - Intro to Transformative AI


Unrelated Developments

  • Cruz's letter to Merrick Garland
  • AWS News Blog article - Introducing Amazon Nova foundation models: Frontier intelligence and industry leading price performance
  • BleepingComputer article - Ultralytics AI model hijacked to infect thousands with cryptominer
  • The Register article - Microsoft teases Copilot Vision, the AI sidekick that judges your tabs
  • Fox Business article - OpenAI CEO Sam Altman looking forward to working with Trump admin, says US must build best AI infrastructure