AI Safety Fundamentals
BlueDot Impact
173 episodes
Latest episode: 1 month ago
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
Technology, Society & Culture, Philosophy, News, Politics
All content for AI Safety Fundamentals is the property of BlueDot Impact and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (showing 20 of 173)
AI and Leviathan: Part I
By Samuel Hammond
Source: https://www.secondbest.ca/p/ai-and-leviathan-part-i
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
14 minutes

d/acc: One Year Later
By Vitalik Buterin
Ethereum founder Vitalik Buterin describes how democratic, defensive and decentralised technologies could distribute AI's power across society rather than concentrating it, offering a middle path between unchecked technical acceleration and authoritarian control.
Source: https://vitalik.eth.limo/general/2025/01/05/dacc2.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
43 minutes

AI Emergency Preparedness: Examining the Federal Government's Ability to Detect and Respond to AI-Related National Security Threats
By Akash Wasil et al.
This paper uses scenario planning to show how governments could prepare for AI emergencies. The authors examine three plausible disasters: losing control of AI, AI model theft, and bioweapon creation. They then expose gaps in current preparedness systems and propose specific government reforms, including embedding auditors inside AI companies and creating emergency response units.
Source: https://arxiv.org/pdf/2407.17347
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
9 minutes

A Playbook for Securing AI Model Weights
By Sella Nevo et al.
In this report, RAND researchers identify real-world attack methods that malicious actors could use to steal AI model weights. They propose a five-level security framework that AI companies could implement to defend against different threats, from amateur hackers to nation-state operations.
Source: https://www.rand.org/pubs/research_briefs/RBA2849-1.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
19 minutes

The Project: Situational Awareness
By Leopold Aschenbrenner
A former OpenAI researcher argues that private AI companies cannot safely develop superintelligence due to security vulnerabilities and competitive pressures that override safety. He argues that a government-led 'AGI Project' is inevitable and necessary to prevent adversaries from stealing AI systems and to avoid losing human control over the technology.
Source: https://situational-awareness.ai/the-project/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
32 minutes

Resilience and Adaptation to Advanced AI
By Jamie Bernardi
Jamie Bernardi argues that we can't rely solely on model safeguards to ensure AI safety. Instead, he proposes "AI resilience": building society's capacity to detect misuse, defend against harmful AI applications, and reduce the damage caused when dangerous AI capabilities spread beyond a government's or company's control.
Source: https://airesilience.substack.com/p/resilience-and-adaptation-to-advanced?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
13 minutes

Introduction to AI Control
By Sarah Hastings-Woodhouse
AI Control is a research agenda that aims to prevent misaligned AI systems from causing harm. It is different from AI alignment, which aims to ensure that systems act in the best interests of their users. Put simply, aligned AIs do not want to harm humans, whereas controlled AIs can't harm humans, even if they want to.
Source: https://bluedot.org/blog/ai-control
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
10 minutes

Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI Offer a Safer Path?
By Yoshua Bengio et al.
This paper argues that building generalist AI agents poses catastrophic risks, from misuse by bad actors to a potential loss of human control. As an alternative, the authors propose “Scientist AI,” a non-agentic system designed to explain the world through theory generation and question-answering rather than acting in it. They suggest this path could accelerate scientific progress, including in AI safety, while avoiding the dangers of agency-driven AI.
Source: https://...
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
21 minutes

The Intelligence Curse
By Luke Drago and Rudolf Laine
This section explores how the arrival of AGI could trigger an “intelligence curse,” where automation of all work removes incentives for states and companies to care about ordinary people. It frames the trillion-dollar race toward AGI as not just an economic shift, but a transformation in power dynamics and human relevance.
Source: https://intelligence-curse.ai/?utm_source=bluedot-impact
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
2 hours 19 minutes

AI and the Evolution of Biological National Security Risks
By Bill Drexel and Caleb Withers
This report considers how rapid AI advancements could reshape biosecurity risks, from bioterrorism to engineered superviruses, and assesses which interventions are needed today. It situates these risks in the history of American biosecurity and offers recommendations for policymakers to curb catastrophic threats.
Source: https://www.cnas.org/publications/reports/ai-and-the-evolution-of-biological-national-security-risks
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
16 minutes

AI Is Reviving Fears Around Bioterrorism. What’s the Real Risk?
By Kyle Hiebert
The global spread of large language models is heightening concerns that extremists could leverage AI to develop or deploy biological weapons. While some studies suggest chatbots only marginally improve bioterror capabilities compared to internet searches, other assessments show rapid year-on-year gains in AI systems’ ability to advise on acquiring and formulating deadly agents. Policymakers now face an urgent question: how real and imminent is the threat of AI-enabled bioterrorism?
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
8 minutes

The Intelligence Curse (Sections 1-3)
By Luke Drago and Rudolf Laine
This piece explores key arguments from sections 3 and 4 of The Intelligence Curse, continuing the authors’ analysis of how increasing intelligence can create paradoxical disadvantages, tradeoffs, and coordination challenges.
Source: https://intelligence-curse.ai/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
1 month ago
44 minutes

The Most Important Time in History Is Now
By Tomas Pueyo
This blog post traces AI's rapid leap from high-school-level to PhD-level intelligence in just two years, examines whether physical bottlenecks like computing power can slow this acceleration, and argues that recent efficiency breakthroughs suggest we're approaching an intelligence explosion.
Source: https://unchartedterritories.tomaspueyo.com/p/the-most-important-time-in-history-agi-asi
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
38 minutes

Why Do People Disagree About When Powerful AI Will Arrive?
By Sarah Hastings-Woodhouse
Most experts agree that AGI is possible. They also agree that it will have transformative consequences. There is less consensus about what these consequences will be. Some believe AGI will usher in an age of radical abundance. Others believe it will likely lead to human extinction. One thing we can be sure of is that a post-AGI world would look very different to the one we live in today. So, is AGI just around the corner? Or are there still hard problems in front of us?
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
22 minutes

Governance of Superintelligence
By Sam Altman, Greg Brockman, Ilya Sutskever
OpenAI's leadership outlines how humanity might govern superintelligence, proposing international oversight with inspection powers similar to nuclear regulation. They argue the AI systems arriving this decade will be "more powerful than any technology yet created" and that their control cannot be left to individual companies alone.
Source: https://openai.com/index/governance-of-superintelligence/
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
5 minutes

Scaling: The State of Play in AI
By Ethan Mollick
This post explains the "scaling laws" that drive rapid AI progress: when you make AI models bigger and train them with more computing power, they get smarter at most tasks. The piece also introduces a second scaling law, where AI performance improves by spending more time "thinking" before responding. (A rough numerical sketch of the first relationship follows this entry.)
Source: https://www.oneusefulthing.org/p/scaling-the-state-of-play-in-ai
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
24 minutes
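
For a feel of what a scaling law says, here is a minimal Python sketch. The power-law form is the standard shape such results take; the constants are illustrative assumptions for this sketch, not figures from the episode:

# Illustrative compute scaling law: loss falls as a power law in training
# compute, L(C) = a * C**(-alpha). The constants a and alpha are made-up
# values for illustration, not numbers from the episode or any paper.

def loss(compute_flops: float, a: float = 1e3, alpha: float = 0.05) -> float:
    """Predicted training loss for a given compute budget (in FLOPs)."""
    return a * compute_flops ** -alpha

# Each 10x increase in compute buys a constant multiplicative loss reduction.
for exponent in range(20, 26):
    c = 10.0 ** exponent
    print(f"compute = 1e{exponent} FLOPs -> predicted loss = {loss(c):.3f}")

The takeaway is the shape, not the numbers: returns to compute are steady but diminishing on a log scale, which is why each capability jump needs a multiplicative, not additive, increase in resources.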

Measuring AI Ability to Complete Long Tasks
By Thomas Kwa et al.
We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently increasing exponentially over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under a decade, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks. (The extrapolation arithmetic is sketched after this entry.)
Source: https://metr.org/blog/2025-03-19-measuring-ai-abi...
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
15 minutes
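
To make the extrapolation concrete, here is a minimal sketch of the doubling arithmetic. Only the roughly 7-month doubling time comes from the summary above; the starting task length is an assumed placeholder, not a figure from the paper:

# Sketch of the doubling-time extrapolation described above.
# Assumption: agents today handle ~1-hour tasks (placeholder, not from
# the paper); the summary gives a doubling time of roughly 7 months.

DOUBLING_TIME_MONTHS = 7
start_task_hours = 1.0  # assumed starting point for illustration

for years in (1, 2, 5, 10):
    doublings = (years * 12) / DOUBLING_TIME_MONTHS
    horizon = start_task_hours * 2 ** doublings
    print(f"after {years:>2} years: ~{horizon:,.0f}-hour tasks")

Under these assumptions the horizon reaches hundreds of hours (weeks of human work) within about five years, which is the sense in which steady exponential doubling yields the "days or weeks" prediction in under a decade.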

The AI Revolution: The Road to Superintelligence
By Tim Urban
Tim Urban uses historical analogies to show why AI progress might accelerate much faster than we expect, and how AI systems could rapidly self-improve from human-level to superintelligent capabilities.
Source: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
48 minutes

"Long" Timelines to Advanced AI Have Gotten Crazy Short
By Helen Toner
Helen Toner, former OpenAI board member, reveals how the AI timeline debate has compressed: even conservative experts who once dismissed advanced AI concerns now predict human-level systems within decades. Rapid AI progress has shifted from a fringe prediction to mainstream expert consensus.
Source: https://helentoner.substack.com/p/long-timelines-to-advanced-ai-have
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
9 minutes

The Gentle Singularity
By Sam Altman
This blog post offers a vivid, optimistic vision of rapid AI progress from the CEO of OpenAI. Altman suggests that the accelerating technological change will feel "impressive but manageable," while acknowledging that there are serious challenges to confront.
Source: https://blog.samaltman.com/the-gentle-singularity
A podcast by BlueDot Impact. Learn more on the AI Safety Fundamentals website.
2 months ago
10 minutes
