Every week, a press review of the 10 best articles from the past week on news about the future of technology!
IA : "un risque existentiel non négligeable" nous disent les 3 chercheurs en IA les plus cités
Parlons Futur
12 minutes 7 seconds
5 months ago
IA : "un risque existentiel non négligeable" nous disent les 3 chercheurs en IA les plus cités
1. The 3 most cited AI researchers of all time (Geoffrey Hinton, Yoshua Bengio, Ilya Sutskever) are vocally concerned about existential risk from AI. One of them puts the risk of extinction above 50%.
2. The CEOs of the 4 leading AI companies have all acknowledged this risk as real.
“Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”
-Sam Altman, CEO of OpenAI
“I think there's some chance that it will end humanity. I probably agree with Geoff Hinton that it's about 10% or 20% or something like that.”
-Elon Musk, CEO of xAI
“I think at the extreme end is the Nick Bostrom style of fear that an AGI could destroy humanity. I can’t see any reason in principle why that couldn’t happen.”
-Dario Amodei, CEO of Anthropic
“We need to be working now, yesterday, on those problems, no matter what the probability is, because it’s definitely non-zero.”
-Demis Hassabis, CEO of Google DeepMind
3. Half of surveyed AI researchers believe there are double-digit odds of extinction: https://x.com/HumanHarlan/status/1925015874840543653