Technically U
Technically U
207 episodes
5 days ago
Welcome to Technically U – your hub for cybersecurity, AI, cloud computing, networking, and emerging tech. Created by experts with 100+ years of combined experience in telecom and cybersecurity, we make complex topics simple, fun, and actionable. Perfect for IT pros, ethical hackers, engineers, business leaders, and students. New episodes weekly. Technically… it’s all about U.
Technology
AGI Development: The Race to Human Level AI - Should We Worry?
Technically U
17 minutes 41 seconds
1 week ago

🤖 AGI Development: The Race to Human-Level AI. Are we on the verge of creating Artificial General Intelligence?

In this deep-dive episode, we explore the most important technological race of our generation—the pursuit of AI systems that can match or exceed human intelligence across ALL domains.

🎯 What You'll Learn:

✅ What AGI actually is and how it differs from today's AI (ChatGPT, Siri, etc.)

✅ The shocking timeline predictions: Why experts say 2026-2030 (or maybe never)

✅ GPT-5's August 2025 release and what it means for the AGI timeline

✅ China's game-changing "Roadmap to Artificial Superintelligence" announcement

✅ The four major safety risks: misuse, misalignment, mistakes, and structural threats

✅ Why NO company scores above a D grade in AGI safety planning

✅ Geopolitical stakes: The new AI arms race between the US and China

✅ Best-case scenarios: Scientific breakthroughs, economic abundance, and human flourishing

✅ What you can do RIGHT NOW to influence this technology's development

📊 KEY STATISTICS & DEVELOPMENTS:

GPT-5 Released August 2025: Performs at PhD-level across multiple domains, matches human experts 40-50% of the time on economically valuable tasks

Timeline Predictions: Elon Musk (2026), Dario Amodei of Anthropic (2026), Demis Hassabis of DeepMind (5-10 years), Academic consensus (median 2040-2047)

China's Strategic Shift: Alibaba CEO announced "Roadmap to Artificial Superintelligence" in October 2025, marking China's entry into the AGI race

Safety Crisis: The 2025 AI Safety Index shows that no company pursuing AGI scores above a D grade in existential safety planning

Computing Power: AI training compute growing 4-5x annually, fueling rapid capability improvements (see the quick compounding example after this list)

Global Investment: Hundreds of billions being invested by OpenAI, Anthropic, Google DeepMind, xAI, and Chinese firms
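
To put the computing-power figure in perspective, here is a quick compounding calculation based on the 4-5x annual growth rate cited above (the multi-year totals are our own arithmetic, not numbers quoted in the episode):

\[
\text{compute after } n \text{ years} \approx \text{compute today} \times r^{n}, \qquad r \in [4, 5]
\]

\[
4^{3} = 64 \qquad\qquad 5^{3} = 125
\]

At that pace, a frontier training run three years from now would use roughly 64x to 125x the compute of today's largest runs.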

🔬 FEATURED TOPICS:

The Current State: We break down where we are right now in the race to AGI. From OpenAI's GPT-5 launch to Alibaba's shocking ASI announcement, discover why 2025 has been a turning point year.

Learn why expert predictions range wildly from "it's already here" to "it'll never happen" and what that disagreement tells us about the challenge ahead.

How We Get There:

Explore the two main paths to AGI: the scaling hypothesis (make models bigger and train them on more data) versus whole brain emulation (digitally recreate a human brain). We discuss whether current AI systems are truly "thinking" or just sophisticated pattern-matching, and why that question matters for safety.
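
As a rough illustration of what the scaling hypothesis is betting on (this is a commonly cited empirical form from the scaling-laws literature, not a result discussed in the episode), prediction loss tends to fall predictably as parameter count N and training tokens D grow:

\[
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here E is the irreducible loss and A, B, \alpha, \beta are constants fitted from real training runs. Scaling optimists bet that pushing N, D, and compute far enough down this curve eventually yields general intelligence; skeptics counter that a falling loss curve measures better pattern-matching, not genuine understanding.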

The Safety Challenge:

This is where things get serious. We examine the four categories of AGI risk and why leading AI companies admit their current safety techniques won't scale to superintelligence. Learn about the alignment problem, the paperclip maximizer thought experiment, and why misuse and misalignment pose existential threats.

Geopolitical Stakes: AGI isn't just a technological race; it's reshaping global power dynamics. We explore why the Pentagon is establishing AGI steering committees, how China's approach differs from Silicon Valley's, and whether international cooperation is possible in an era of strategic competition.

The Upside: It's not all doom and gloom! Discover the incredible potential benefits: accelerating scientific research by decades, solving climate change, curing diseases, and creating economic abundance. We discuss why thought leaders like Geoffrey Hinton and Elon Musk advocate for Universal Basic Income in an AGI-enabled world.

🎓 WHO SHOULD WATCH:

Tech enthusiasts following AI developments

Students and professionals in computer science, engineering, or policy

Anyone concerned about AI safety and ethics

Investors tracking the AI industry

People curious about humanity's technological future

Policy makers and educators

Science communicators and futurists

#AGI #ArtificialGeneralIntelligence #GPT5 #OpenAI #FutureTech #AIRace #AISafety #Anthropic #DeepMind #TechPodcast #MachineLearning #Superintelligence #AIEthics
