Into AI Safety
Jacob Haimes
24 episodes
2 weeks ago
The Into AI Safety podcast aims to make it easier for everyone, regardless of background, to get meaningfully involved with the conversations surrounding the rules and regulations which should govern the research, development, deployment, and use of the technologies encompassed by the term "artificial intelligence" or "AI". For better-formatted show notes, additional resources, and more, go to https://kairos.fm/intoaisafety/
Technology, Science, Mathematics
All content for Into AI Safety is the property of Jacob Haimes and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)
Into AI Safety
54 minutes
1 year ago
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans.

00:54 - Intro
03:15 - Dr. Park, x-risk, and AGI
08:55 - StakeOut.AI
12:05 - Governance scorecard
19:34 - Hollywood webinar
22:02 - Regulations.gov comments
23:48 - Open letters
26:15 - EU AI Act
35:07 - Effective accelerationism
40:50 - Divide and conquer dynamics
45:40 - AI "art"
53:09 - Outro

Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance.

- StakeOut.AI
- AI Governance Scorecard (go to Pg. 3)
- Pause AI
- Regulations.gov
- USCO StakeOut.AI Comment
- OMB StakeOut.AI Comment
- AI Treaty open letter
- TAISC
- Alpaca: A Strong, Replicable Instruction-Following Model
- References on EU AI Act and Cedric O
- Tweet from Cedric O
- EU policymakers enter the last mile for Artificial Intelligence rulebook
- AI Act: EU Parliament's legal office gives damning opinion on high-risk classification 'filters'
- EU's AI Act negotiations hit the brakes over foundation models
- The EU AI Act needs Foundation Model Regulation
- BigTech's Efforts to Derail the AI Act
- Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation
- Divide-and-Conquer Dynamics in AI-Driven Disempowerment