AI lab by information labs
information labs
35 episodes
5 months ago
AI lab podcast, "decrypting" expert analysis to understand Artificial Intelligence from a policy making point of view.
Technology
Science
AI lab TL;DR | Jurgen Gravestein - The Intelligence Paradox
AI lab by information labs
14 minutes
1 year ago

🔍 In this TL;DR episode, Jurgen Gravestein (Conversation Design Institute) joins the AI lab to discuss his Substack blog post on the ‘Intelligence Paradox’.

📌 TL;DR Highlights
⏲️[00:00] Intro
⏲️[01:08] Q1-The ‘Intelligence Paradox’:
How does the language used to describe AI lead to misconceptions and the so-called ‘Intelligence Paradox’?
⏲️[05:36] Q2-‘Conceptual Borrowing’:
What is ‘conceptual borrowing’ and how does it impact public perception and understanding of AI?
⏲️[10:04] Q3-Human vs AI ‘Learning’:
Why is it misleading to use the term ‘learning’ for AI processes, and what does this mean for the future of AI development?
⏲️[14:11] Wrap-up & Outro

💭 Q1-The ‘Intelligence Paradox’

🗣️ What’s really interesting about chatbots and AI is that for the first time in human history, we have technology talking back at us, and that's doing a lot of interesting things to our brains.
🗣️ In the 1960s, there was an experiment with the chatbot ELIZA, which was a very simple, pre-programmed chatbot (...) And it showed that when people are talking to technology, and technology talks back, we’re quite easily fooled by that technology. And that has to do with language fluency and how we perceive language.
🗣️ Language is a very powerful tool (...) there’s a correlation between perceived intelligence and language fluency (...) a social phenomenon that I like to call the ‘Intelligence Paradox’. (...) people perceive you as less smart, just because you are less fluent in how you’re able to express yourself.
🗣️ That also works the other way around with AI and chatbots (...). We saw that chatbots can now respond in extremely fluent language very flexibly. (...) And as a result of that, we perceive them as pretty smart. Smarter than they actually are, in fact.
🗣️ We tend to overestimate the capabilities of [AI] systems because of their language fluency, and we perceive them as smarter than they really are, and it leads to confusion (...) about how the technology actually works.

💭 Q2-‘Conceptual Borrowing’

🗣️ A research article (...) from two professors, Luciano Floridi and Anna Nobre, (...) explaining (...) conceptual borrowing [states]: “through extensive conceptual borrowing, AI has ended up describing computers anthropomorphically, as computational brains with psychological properties, while brain and cognitive sciences have ended up describing brains and minds computationally and informationally, as biological computers."
🗣️ Similar to the Intelligence Paradox, it can lead to confusion (...) about whether we underestimate or overestimate the impact of a certain technology. And that, in turn, informs how we make policies or regulate certain technologies now or in the future.
🗣️ A small example of conceptual borrowing would be the term “hallucinations”. (...) a common term to describe when systems like chatGPT say something that sounds very authoritative and sounds very correct and precise, but is actually made up, or partly confabulated. (...) this actually has nothing to do with real hallucinations [but] with statistical patterns that don’t match up with the question that’s being asked.

💭 Q3-Human vs AI ‘Learning’

🗣️ If you talk about conceptual borrowing, “machine learning” is a great example of that, too. (...) there's a very (...) big discrepancy between what learning is in the psychological terms and the biological terms when we talk about learning, and then when it comes to these systems.
🗣️ So if you actually start to be convinced that LLMs are as smart and learn as quickly as people or children (...) you could be over-attributing qualities to these systems.
🗣️ [ARC-AGI challenge:] a $1 million USD prize pool for the first person that can build an AI to solve a new benchmark that (...) consists of very simple puzzles that a five-year-old (...) could basically solve. (...) it hasn't been solved yet.
🗣️ That’s, again, an interesting way to look at learning, and especially where these systems fall short. [AI] can reason based on (...) the data that they've seen, but as soon as it (...) goes out of (...) what they've seen in their data set, they will struggle with whatever task they are being asked to perform.

📌 About Our Guest
🎙️ Jurgen Gravestein | Sr Conversation Designer, Conversation Design Institute (CDI)
𝕏 https://x.com/@gravestein1989
🌐 Blog Post | The Intelligence Paradox
https://jurgengravestein.substack.com/p/the-intelligence-paradox
🌐 Newsletter
https://jurgengravestein.substack.com
🌐 CDI
https://www.conversationdesigninstitute.com
🌐 Profs. Floridi & Nobre's article
http://dx.doi.org/10.2139/ssrn.4738331
🌐 Jurgen Gravestein
https://www.linkedin.com/in/jurgen-gravestein

Jurgen Gravestein is a writer, conversation designer, and AI consultant. He works at the CDI, the world’s leading training and certification institute in conversational AI, and runs a successful Substack newsletter, “Teaching computers how to talk”.