AI Safety Newsletter
Center for AI Safety
67 episodes
1 week ago
Narrations of the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. This podcast also contains narrations of some of our publications.

ABOUT US
The Center for AI Safety (CAIS) is a San Francisco-based research and field-building nonprofit. We believe that artificial intelligence has the potential to profoundly benefit the world, provided that we can develop and use it safely. However, in contrast to the dramatic progress in AI, many basic problems in AI safety have yet to be solved. Our mission is to reduce societal-scale risks associated with AI by conducting safety research, building the field of AI safety researchers, and advocating for safety standards. Learn more at https://safe.ai
Categories: Technology, Society & Culture, Philosophy
All content for AI Safety Newsletter is the property of Center for AI Safety and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (showing 20 of 67)
AISN #61: OpenAI Releases GPT-5 (1 week ago, 9 minutes)
AISN #60: The AI Action Plan (3 weeks ago, 15 minutes)
AISN #59: EU Publishes General-Purpose AI Code of Practice (1 month ago, 9 minutes)
AISN #58: Senate Removes State AI Regulation Moratorium (1 month ago, 9 minutes)
AISN #57: The RAISE Act (2 months ago, 7 minutes)
AISN #56: Google Releases Veo 3 (2 months ago, 8 minutes)
AISN #55: Trump Administration Rescinds AI Diffusion Rule, Allows Chip Sales to Gulf States (3 months ago, 9 minutes)
AISN #54: OpenAI Updates Restructure Plan (3 months ago, 8 minutes)
AISN #53: An Open Letter Attempts to Block OpenAI Restructuring (3 months ago, 10 minutes)
AISN #52: An Expert Virology Benchmark (4 months ago, 10 minutes)
AISN #51: AI Frontiers (4 months ago, 12 minutes)
AISN #50: AI Action Plan Responses (4 months ago, 12 minutes)
AISN #49: AI Action Plan Responses (4 months ago, 12 minutes)
AISN (5 months ago, 11 minutes)
Superintelligence Strategy: Expert Version (5 months ago)
Superintelligence Strategy: Standard Version (5 months ago)
AISN #48: Utility Engineering and EnigmaEval (6 months ago, 8 minutes)
AISN #47: Reasoning Models (6 months ago, 9 minutes)
AISN #46: The Transition (7 months ago, 11 minutes)
AISN #45: Center for AI Safety 2024 Year in Review (8 months ago, 11 minutes)