AI Safety Newsletter
Center for AI Safety
71 episodes
2 weeks ago
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US

The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
Categories: Technology, Society & Culture, Philosophy
All content for AI Safety Newsletter is the property of Center for AI Safety and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (showing 20 of 71)

AISN #65: Measuring Automation and Superintelligence Moratorium Letter (2 weeks ago, 6 minutes)
AISN #64: New AGI Definition and Senate Bill Would Establish Liability for AI Harms (1 month ago, 10 minutes)
AISN #63: California’s SB-53 Passes the Legislature (1 month ago, 9 minutes)
AISN #62: Big Tech Launches $100 Million pro-AI Super PAC (2 months ago, 10 minutes)
AISN #61: OpenAI Releases GPT-5 (3 months ago, 9 minutes)
AISN #60: The AI Action Plan (3 months ago, 15 minutes)
AISN #59: EU Publishes General-Purpose AI Code of Practice (4 months ago, 9 minutes)
AISN #58: Senate Removes State AI Regulation Moratorium (4 months ago, 9 minutes)
AISN #57: The RAISE Act (5 months ago, 7 minutes)
AISN #56: Google Releases Veo 3 (5 months ago, 8 minutes)
AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States (5 months ago, 9 minutes)
AISN #54: OpenAI Updates Restructure Plan (6 months ago, 8 minutes)
AISN #53: An Open Letter Attempts to Block OpenAI Restructuring (6 months ago, 10 minutes)
AISN #52: An Expert Virology Benchmark (6 months ago, 10 minutes)
AISN #51: AI Frontiers (7 months ago, 12 minutes)
AISN #50: AI Action Plan Responses (7 months ago, 12 minutes)
AISN #49: AI Action Plan Responses (7 months ago, 12 minutes)
AISN (8 months ago, 11 minutes)
Superintelligence Strategy: Expert Version (8 months ago)
Superintelligence Strategy: Standard Version (8 months ago)