Artificial General Intelligence (AGI) Show with Soroush Pour
Soroush Pour
15 episodes
7 months ago
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on better understanding the internal workings of AI models (better known as “interpretability”), making them robust to various kinds of adversarial attacks, and ca...
Technology, Education, Society & Culture
All content for Artificial General Intelligence (AGI) Show with Soroush Pour is the property of Soroush Pour and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (15/15)
Ep 14 - Interp, latent robustness, RLHF limitations w/ Stephen Casper (PhD AI researcher, MIT)
We speak with Stephen Casper, or "Cas" as his friends call him. Cas is a PhD student at MIT in the Computer Science (EECS) department, in the Algorithmic Alignment Group advised by Prof Dylan Hadfield-Menell. Formerly, he worked with the Harvard Kreiman Lab and the Center for Human-Compatible AI (CHAI) at Berkeley. His work focuses on better understanding the internal workings of AI models (better known as “interpretability”), making them robust to various kinds of adversarial attacks, and ca...
1 year ago
2 hours 42 minutes

Ep 13 - AI researchers expect AGI sooner w/ Katja Grace (Co-founder & Lead Researcher, AI Impacts)
We speak with Katja Grace. Katja is the co-founder and lead researcher at AI Impacts, a research group trying to answer key questions about the future of AI: when certain capabilities will arise, what AI will look like, and how it will all go for humanity. We talk to Katja about: * How AI Impacts' latest rigorous survey of leading AI researchers shows they've dramatically shortened their timelines for when AI will successfully tackle all human tasks & occupations. * The survey's methodology an...
1 year ago
1 hour 20 minutes

Ep 12 - Education & advocacy for AI safety w/ Rob Miles (YouTube host)
We speak with Rob Miles. Rob is the host of the "Robert Miles AI Safety" channel on YouTube, the single most popular AI alignment video series out there: he has 145,000 subscribers and his top video has ~600,000 views. He goes much deeper than most educational resources on alignment, covering important technical topics like the orthogonality thesis, inner misalignment, and instrumental convergence. Through his work, Robert has educated thousands on AI safety, including many now ...
1 year ago
1 hour 21 minutes

Ep 11 - Technical alignment overview w/ Thomas Larsen (Director of Strategy, Center for AI Policy)
We speak with Thomas Larsen, Director of Strategy at the Center for AI Policy in Washington, DC, for a "speed run" overview of all the major technical research directions in AI alignment: a great way to quickly get a broad picture of the field of technical AI alignment. In 2022, Thomas spent ~75 hours putting together an overview of what everyone in technical alignment was doing. Since then, he's continued to be deeply engaged in AI safety. We talk to Thomas to share an updated overview to h...
1 year ago
1 hour 37 minutes

Ep 10 - Accelerated training to become an AI safety researcher w/ Ryan Kidd (Co-Director, MATS)
We speak with Ryan Kidd, Co-Director at ML Alignment & Theory Scholars (MATS) program, previously "SERI MATS". MATS (https://www.matsprogram.org/) provides research mentorship, technical seminars, and connections to help new AI researchers get established and start producing impactful research towards AI safety & alignment. Prior to MATS, Ryan completed a PhD in Physics at the University of Queensland (UQ) in Australia. We talk about: * What the MATS program is * Who should apply ...
2 years ago
1 hour 16 minutes

Ep 9 - Scaling AI safety research w/ Adam Gleave (CEO, FAR AI)
We speak with Adam Gleave, CEO of FAR AI (https://far.ai). FAR AI’s mission is to ensure AI systems are trustworthy & beneficial. They incubate & accelerate research that's too resource-intensive for academia but not ready for commercialisation. They work on everything from adversarial robustness, interpretability, preference learning, & more. We talk to Adam about: * The founding story of FAR as an AI safety org, and how it's different from the big commercial labs (e.g. OpenAI)...
2 years ago
1 hour 19 minutes

Ep 8 - Getting started in AI safety & alignment w/ Jamie Bernardi (AI Safety Lead, BlueDot Impact)
We speak with Jamie Bernardi, co-founder & AI Safety Lead at the not-for-profit BlueDot Impact, which hosts the biggest and most up-to-date courses on AI safety & alignment at AI Safety Fundamentals (https://aisafetyfundamentals.com/). Jamie completed his Bachelor's (Physical Natural Sciences) and Master's (Physics) at the University of Cambridge and worked as an ML Engineer before co-founding BlueDot Impact. The free courses they offer are created in collaboration with people on the cutting edge of AI ...
2 years ago
1 hour 7 minutes

Ep 7 - Responding to a world with AGI - Richard Dazeley (Prof AI & ML, Deakin University)
In this episode, we speak with Prof Richard Dazeley about the implications of a world with AGI and how we can best respond. We talk about what he thinks AGI will actually look like, as well as the technical and governance responses we should put in place today and in the future to ensure a safe and positive future with AGI. Prof Richard Dazeley is the Deputy Head of School at the School of Information Technology at Deakin University in Melbourne, Australia. He's also a senior member of the Internat...
2 years ago
1 hour 10 minutes

Ep 6 - Will we see AGI this decade? Our AGI predictions & debate w/ Hunter Jay (CEO, Ripe Robotics)
In this episode, we welcome back Hunter Jay, CEO of Ripe Robotics and our co-host on Ep 1. We synthesise everything we've heard on AGI timelines from experts in Eps 1-5, take in more data points, and use this to give our own forecasts for AGI, ASI (i.e. superintelligence), and "intelligence explosion" (i.e. singularity). Importantly, we have different takes on when AGI will likely arrive, leading to exciting debates on AGI bottlenecks, hardware requirements, the need for sequential reinforc...
2 years ago
1 hour 20 minutes

Ep 5 - Accelerating AGI timelines since GPT-4 w/ Alex Browne (ML Engineer)
In this episode, we welcome back Alex Browne, ML Engineer, who we first heard on Ep 2. He got in contact after watching recent developments in the 4 months since Ep 2, which have accelerated his timelines for AGI. Hear why, along with his latest prediction. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links == -- About Alex Browne -- * Bio: Alex is a software engineer & tech founder with...
2 years ago
38 minutes

Ep 4 - When will AGI arrive? - Ryan Kupyn (Data Scientist & Forecasting Researcher @ Amazon AWS)
In this episode, we speak with Ryan Kupyn, forecasting researcher & data scientist at Amazon AWS, about his timelines for the arrival of AGI. Ryan was recently ranked the #1 forecaster in Astral Codex Ten's 2022 Prediction Contest, beating out 500+ other entrants and proving himself to be a world-class forecaster. He has also done work in ML alongside his forecasting work at Amazon AWS. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp Linke...
2 years ago
1 hour 3 minutes

Ep 3 - When will AGI arrive? - Jack Kendall (CTO, Rain.AI, maker of neural net chips)
In this episode, we speak with Rain.AI CTO Jack Kendall about his timelines for the arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links == -- About Jack Kendall -- * Bio: Jack invented a new method for connecting artificial silicon neurons using coaxial nanowires at the University of Florida before starting Rain as co-founder and ...
2 years ago
1 hour 1 minute

Ep 2 - When will AGI arrive? - Alex Browne (Machine Learning Engineer)
In this episode, we speak with ML Engineer Alex Browne about his forecasted timelines for the potential arrival of AGI. He also speaks to how we might get there and some of the implications. Hosted by Soroush Pour. Follow me for more AGI content: Twitter: https://twitter.com/soroushjp LinkedIn: https://www.linkedin.com/in/soroushjp/ == Show links == -- About Alex Browne -- * Bio: Alex is a software engineer & tech founder with 10 years of experience. Alex and I (Soroush) have worked to...
2 years ago
58 minutes

Ep 1 - When will AGI arrive? - Logan Riggs Smith (AGI alignment researcher)
We speak with AGI alignment researcher Logan Riggs Smith about his timelines for AGI. He also speaks to how we might get there and some of the implications. Hosted by Hunter Jay and Soroush Pour. == Show links == * Further writings from Logan Riggs Smith * Cotra report on AGI timelines: original report (very long); Scott Alexander's analysis of this report
2 years ago
1 hour 10 minutes

Ep 0 - Intro - What's the AGI Show?
What can you expect to hear and learn on "The Artificial General Intelligence (AGI) Show with Soroush Pour"? Hosted by Soroush Pour
2 years ago
8 minutes
