Consistently Candid
Sarah Hastings-Woodhouse
19 episodes
2 months ago
Technology, Society & Culture, Philosophy
Episodes (19/19)
#20 Frances Lorenz on the emotional side of AI x-risk, being a woman in a male-dominated online space & more
In this episode, I chatted with Frances Lorenz, events associate at the Centre for Effective Altruism. We covered our respective paths into AI safety, the emotional impact of learning about x-risk, what it's like to be female in a male-dominated community and more! Follow Frances on Twitter. Subscribe to her Substack. Apply for EAG London!
6 months ago
51 minutes

#19 Gabe Alfour on why AI alignment is hard, what it would mean to solve it & what ordinary people can do about existential risk
Gabe Alfour is a co-founder of Conjecture and an advisor to Control AI, both organisations working to reduce risks from advanced AI. We discussed why AI poses an existential risk to humanity, what makes this problem very hard to solve, why Gabe believes we need to prevent the development of superintelligence for at least the next two decades, and more. Follow Gabe on Twitter. Read The Compendium and A Narrow Path.
7 months ago
1 hour 36 minutes

#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistlestop tour of the most important developments we've seen over the last year! We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more! Follow Nathan on Twitter. Listen to The Cognitive Revolution. My Twitter & Substack
8 months ago
1 hour 46 minutes

#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we having fun yet?' and 'could we be having more fun?'. It tries to answer some of the philosophical quandaries we might encounter when envisioning a post-AGI utopia. In this episode, I discussed Fun Theory with Noah Topper, who loyal listeners will remember from episode 7, in which we tackled EY's equally interesting but less fun essay, A List of ...
1 year ago
1 hour 25 minutes

#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience. In this episode, we compared our experiences of encountering AI safety arguments for the first time and the psychological experience of being aware of x-risk, as well as what messaging strategies the AI safety community should be using to engage more people. Listen & subscribe to the For Humanity Podcast on YouTube and follow John on Twitter!
1 year ago
52 minutes

#15 Should we be engaging in civil disobedience to protest AGI development?
StopAI are a non-profit aiming to achieve a permanent ban on the development of AGI through peaceful protest. In this episode, I chatted with three of the founders of StopAI – Remmelt Ellen, Sam Kirchner and Guido Reichstadter. We talked about what protest tactics StopAI have been using, and why they want a stop (and not just a pause!) in the development of AGI. Follow Sam, Remmelt and Guido on Twitter. My Twitter
1 year ago
1 hour 18 minutes

#14 Buck Shlegeris on AI control
Buck Shlegeris is the CEO of Redwood Research, a non-profit working to reduce risks from powerful AI. We discussed Redwood's research into AI control, why we shouldn't feel confident that witnessing an AI escape attempt would persuade labs to undeploy dangerous models, lessons from the vetoing of SB1047, the importance of lab security and more. Posts discussed: The case for ensuring that powerful AIs are controlled; Would catching your AIs trying to escape convince AI developers to slow dow...
1 year ago
50 minutes

#13 Aaron Bergman and Max Alexander debate the Very Repugnant Conclusion
In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even larger population, where most people have extremely high quality lives, but a significant minority endure extreme suffering." Listen to the end to hea...
1 year ago
1 hour 53 minutes

#12 Deger Turan on all things forecasting
Deger Turan is the CEO of forecasting platform Metaculus and president of the AI Objectives Institute. In this episode, we discuss how forecasting can be used to help humanity coordinate around reducing existential risks, Deger's advice for aspiring forecasters, the future of using AI for forecasting and more! Enter Metaculus's Q3 AI Forecasting Benchmark Tournament. Get in touch with Deger: deger@metaculus.com
1 year ago
54 minutes

#11 Katja Grace on the AI Impacts survey, the case for slowing down AI & arguments for and against x-risk
Katja Grace is the co-founder of AI Impacts, a non-profit focused on answering key questions about the future trajectory of AI development, which is best known for conducting the world's largest survey of machine learning researchers. We talked about the most interesting results from the survey, Katja's views on whether we should slow down AI progress, the best arguments for and against existential risk from AI, parsing the online AI safety debate and more! Follow Katja on Twitter. Katja'...
1 year ago
1 hour 16 minutes

#10 Nathan Labenz on the current AI state-of-the-art, the Red Team in Public project, reasons for hope on AI x-risk & more
Nathan Labenz is the founder of AI content-generation platform Waymark and host of The Cognitive Revolution Podcast, who now works full-time on tracking and analysing developments in AI. We chatted about where we currently stand with state-of-the-art AI capabilities, whether we should be advocating for a pause on scaling frontier models, Nathan's Red Team in Public project, and some reasons not to be a hardcore doomer! Follow Nathan on Twitter. Listen to The Cognitive Revolution
1 year ago
1 hour 54 minutes

#9 Sneha Revanur on founding Encode Justice, California's SB-1047, and youth advocacy for safe AI development
Sneha Revanur is the founder of Encode Justice, an international, youth-led network campaigning for the responsible development of AI, which was among the sponsors of California's proposed AI bill SB-1047. We chatted about why Sneha founded Encode Justice, the importance of youth advocacy in AI safety, and what the movement can learn from climate activism. We also dug into the details of SB-1047 and answered some common criticisms of the bill! Follow Sneha on Twitter: https://twitter.co...
1 year ago
49 minutes

#8 Nathan Young on forecasting, AI risk & regulation, and how not to lose your mind on Twitter
Nathan Young is a forecaster, software developer and tentative AI optimist. In this episode, we discussed how Nathan approaches forecasting, why his p(doom) is 2-9%, whether we should pause AGI research, and more! Follow Nathan on Twitter: Nathan 🔍 (@NathanpmYoung) / X (twitter.com). Nathan's Substack: Predictive Text | Nathan Young | Substack. My Twitter: sarah ⏸️ (@littIeramblings) / X (twitter.com)
1 year ago
1 hour 28 minutes

#7 Noah Topper helps me understand Eliezer Yudkowsky
A while back, my self-confessed inability to fully comprehend the writings of Eliezer Yudkowsky elicited the sympathy of the author himself. In an attempt to more completely understand why AI is going to kill us all, I enlisted the help of Noah Topper, recent Computer Science Master's graduate and long-time EY fan, to help me break down A List of Lethalities (which, for anyone unfamiliar, is a fun list of 43 reasons why we're all totally screwed). Follow Noah on Twitter: Noah Topper 🔍⏸️ (@Noah...
1 year ago
1 hour 28 minutes

#6 Holly Elmore on pausing AI, protesting, warning shots & more
Holly Elmore is an AI pause advocate and Executive Director of PauseAI US. We chatted about the case for pausing AI, her experience of organising protests against frontier AGI research, the danger of relying on warning shots, the prospect of techno-utopia, possible risks of pausing and more! Follow Holly on Twitter: Holly ⏸️ Elmore (@ilex_ulmus) / X (twitter.com). Official PauseAI US Twitter account: PauseAI US ⏸️ (@pauseaius) / X (twitter.com). My Twitter: sarah ⏸️ (@littIeramblings) / X (twitter...
1 year ago
1 hour 48 minutes

#5 Joep Meindertsma on founding PauseAI and strategies for communicating AI risk
In this episode, I talked with Joep Meindertsma, founder of PauseAI, about how he discovered AI safety, the emotional experience of internalising existential risks, strategies for communicating AI risk, his assessment of recent AI policy developments and more! Find out more about PauseAI at www.pauseai.info
1 year ago
46 minutes

#4 Émile P. Torres and I discuss where we agree and disagree on AI safety
Émile P. Torres is a philosopher and historian known for their research on the history and ethical implications of human extinction. They are also an outspoken critic of Effective Altruism, longtermism and the AI safety movement. In this episode, we chatted about why Émile opposes both the 'doomer' and accelerationist factions, and identified some of our agreements and disagreements about AI safety.
1 year ago
1 hour 47 minutes

#3 Darren McKee on explaining AI risk to the public & navigating the AI safety debate
Darren McKee is an author, speaker and policy advisor who has recently penned a beginner-friendly introduction to AI safety titled Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. We chatted about the best arguments for worrying about AI, responses to common objections, how to navigate the online AI safety space as a non-expert, and more. Buy Darren's book on Amazon: https://www.amazon.co.uk/Uncontrollable-Threat-Artificial-Superintelligence-World-eboo...
1 year ago
51 minutes

#1 Aaron Bergman and Max Alexander argue about moral realism while I smile and nod
In this inaugural episode of Consistently Candid, Aaron Bergman and Max Alexander each try to convince me of their position on moral realism, and I settle the issue once and for all. Featuring occasional interjections from the sat-nav in the Uber Aaron was taking at the time. My Twitter: https://twitter.com/littIeramblings Max's Twitter: https://twitter.com/absurdlymax Aaron's Twitter: https://twitter.com/AaronBergman18
1 year ago
1 hour 8 minutes
