Skeptically Curious
Ryan Rutherford
17 episodes
19 hours ago
In each episode I endeavour to know more and think better by interviewing knowledgeable guests about fascinating topics. I have an insatiable curiosity about many areas, including politics, economics, philosophy, history, literature, psychology, religion, and different branches of science such as neuroscience, biology, and physics. Regardless of subject matter, I hope to promote critical thinking, Enlightenment values, and the scientific method. Please join me on this journey as we engage and broaden our skeptical curiosity.
Natural Sciences
Science
Episode 11 - AI Risk with Roman Yampolskiy
Skeptically Curious
1 hour 22 minutes 22 seconds
4 years ago

For this episode I was delighted to be joined by Dr. Roman Yampolskiy, a professor of Computer Engineering and Computer Science at the University of Louisville. Few scholars have devoted as much time to seriously exploring the myriad threats potentially inherent in the development of highly intelligent artificial machinery as Dr. Yampolskiy, who established the field of AI Safety Engineering, also known simply as AI Safety. After a preliminary inquiry into his background, I asked Roman Yampolskiy to explain deep neural networks, or artificial neural networks as they are also known. One of the most important topics in AI research is what is referred to as the Alignment Problem, which my guest helped to clarify. We then moved on to his work on two other vitally significant issues in AI, namely understandability and explainability. I then asked him to provide a brief history of AI Safety, which, as he revealed, built on Yudkowsky's ideas of Friendly AI. We discussed whether there is growing interest among researchers in the risks attendant to AI, the perverse incentive among those in the industry to downplay the risks of their own work, and how to ensure greater transparency, which, as you will hear, is worryingly difficult given the inherently opaque way deep neural networks perform their operations. I homed in on the massive job losses that increasing AI capabilities could engender, as well as my perception that many who discuss this topic downplay the socioeconomic context within which automation occurs. After I asked my guest to define artificial general intelligence, or AGI, and superintelligence, we spent considerable time discussing the possibility of machines achieving human-level mental capabilities.
This part of the interview was the most contentious and touched on neuroscience, the nature of consciousness, mind-body dualism, the dubious analogy between brains and computers that has been all too pervasive in the AI field since its inception, as well as a fascinating paper by Yampolskiy proposing to detect qualia in artificial systems by testing whether they perceive the same visual illusions as humans. In the final stretch of the interview, we discussed the impressive language-based system GPT-3, whether AlphaZero is the first truly intelligent artificial system, as Garry Kasparov claims, the prospects of quantum computing for achieving AGI, and, lastly, what my guest considers the greatest AI risk factor, namely "purposeful malevolent design." While this far-ranging interview, with many concepts raised and names dropped, sometimes veered into weeds some might deem overly specialised or technical, I nevertheless think there is plenty to glean about a range of fascinating, not to mention pertinent, topics for those willing to stay the course.

Roman Yampolskiy’s page at the University of Louisville: http://cecs.louisville.edu/ry/

Yampolskiy’s papers: https://scholar.google.com/citations?user=0_Rq68cAAAAJ&hl=en

Roman’s book, Artificial Superintelligence: A Futuristic Approach: https://www.amazon.com/Artificial-Superintelligence-Futuristic-Roman-Yampolskiy/dp/1482234432

Twitter account for Skeptically Curious: https://twitter.com/SkepticallyCur1

Patreon page for Skeptically Curious: https://www.patreon.com/skepticallycurious


Skeptically Curious
In each episode I endeavour to know more and think better by interviewing knowledgeable guests about fascinating topics. I have an insatiable curiosity about many areas, including politics, economics, philosophy, history, literature, psychology, religion, and different branches of science such as neuroscience, biology, and physics. Regardless of subject matter, I hope to promote critical thinking, Enlightenment values, and the scientific method. Please join me on this journey as we engage and broaden our skeptical curiosity.