In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety, the emotional impact of learning about x-risk, what it's like to be female in a male-dominated community and more! Follow Frances on Twitter | Subscribe to her Substack | Apply for EAG London!
#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
Consistently Candid
1 hour 46 minutes
8 months ago
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more! Follow Nathan on Twitter | Listen to The Cognitive Revolution | My Twitter & Substack