Is the AI arms race between tech giants and nations pushing us toward a dangerous future?
On this episode of Digital Disruption, we’re joined by the founder of SingularityNET and the pioneering mind behind the term Artificial General Intelligence (AGI), Dr. Ben Goertzel.
Dr. Ben Goertzel is a leading figure in artificial intelligence, robotics, and computational finance. Holding a Ph.D. in Mathematics from Temple University, he has been a pioneer in advancing both the theory and practical applications of AI, particularly in the pursuit of artificial general intelligence (AGI), a term he helped popularize. He currently leads the SingularityNET Foundation, TrueAGI, the OpenCog Foundation, and the AGI Society, and has organized the Artificial General Intelligence Conference for over fifteen years. As co-founder and principal architect of OpenCog, an open-source project to build human-level AI, he pursues a singular mission: to develop benevolent AGI that advances humanity’s collective good.
Dr. Goertzel sits down with Geoff to share his insights on the accelerating progress toward AGI, what it truly means, and how it could reshape human life, work, and consciousness. He discusses the role of Big Tech in shaping AI’s direction and how corporate incentives and commercialization are both driving innovation and limiting true AGI research. From DeepMind and OpenAI to decentralized AI networks, Dr. Goertzel reveals where the real breakthroughs might happen. The conversation also explores the ethics of AI, the dangers of fake democratization and false compassion, and why humanity must shape AI’s evolution with empathy and awareness.
In this episode:
00:00 Intro
00:21 What is Artificial General Intelligence (AGI)?
01:10 The pace of AI progress and the hype cycle
05:44 The path from human-level AGI to superintelligence
09:20 How close are we to AGI?
13:08 Transformer vs. multi-agent systems
14:05 Which AI labs might strike AGI gold? (DeepMind, OpenAI, Anthropic)
17:07 Big Tech’s innovator’s dilemma and why true AGI may come elsewhere
20:20 Predictive coding
22:59 Why Big Tech resists new AI training paradigms
29:16 Imagining life after AGI: optimism, transhumanism, and choice
33:29 Navigating the transition from AGI to ASI
37:55 Decentralized vs. centralized control of AGI
43:20 Who (or what) will be in control
47:19 Risks of power concentration in early AGI development
51:01 Who should own and guide AGI?
53:06 Why we need participatory governance for intelligent systems
54:47 The danger of fake compassion and false democratization
1:00:50 Finding meaning in the age of intelligent machines
1:04:13 How AGI could help humanity focus on inner growth
1:07:20 Learning how to learn: the last human advantage
Connect with Dr. Goertzel:
LinkedIn: https://www.linkedin.com/in/bengoertzel/
X: https://x.com/bengoertzel
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
How are AI and automation shaping both the attack and defense sides of cybersecurity?
On this episode of Digital Disruption, we’re joined by the founder and CEO of Have I Been Pwned, Troy Hunt.
Troy Hunt is an Australian security researcher and the founder of the data breach notification service, Have I Been Pwned. With a background in software development specializing in information security, Troy is a regular conference speaker and trainer. He frequently appears in the media, collaborates with government and law enforcement agencies, and has appeared before the U.S. Congress as an expert witness on the impact of data breaches. Troy also serves as a Microsoft Regional Director (an honorary title) and regularly blogs at troyhunt.com from his home on Australia’s Gold Coast.
Troy sits down with Geoff to share eye-opening insights on the evolving threat landscape of 2025 and beyond. Despite the rise of AI and automation, Troy emphasizes that many of today’s most damaging data breaches and ransomware attacks still stem from basic human error and social engineering. He explains how ransomware has shifted from encrypting files to threatening data disclosure, making it harder for organizations to manage risk and justify ransom payments. The conversation also touches on how breach fatigue and apathy have led many individuals and businesses to underestimate cybersecurity risks, even as incidents rise globally. He also highlights how AI tools are being weaponized by both defenders and attackers and argues that cybersecurity isn’t about perfect protection but about finding equilibrium: balancing usability, education, and risk mitigation.
In this episode:
00:00 Intro
01:15 Why human weakness beats AI
02:00 Young hackers and the rise of Scattered Spider
04:00 From hacktivists to career criminals
05:00 Ransomware’s new tactics
07:30 Should companies pay the ransom?
10:20 Can you ever be fully protected? Defense vs. response
11:20 How to convince boards cybersecurity is worth the money
14:20 Breach fatigue and public apathy
18:00 Reframing what ‘sensitive data’ really means
20:00 Passwords, reuse, and the real risk equation
24:00 Biometrics, face ID & the future of authentication
26:30 Threat Modeling 101
27:30 Barriers to cyber preparedness
29:30 How Have I Been Pwned works
32:00 The future of data breaches
38:00 Microsoft’s role in the security ecosystem
40:30 AI hype vs. reality in cybersecurity
43:00 When AI helps hackers
52:00 Why transparency still matters after every breach
54:00 Accepting risk, building resilience
Connect with Troy:
Website: https://www.troyhunt.com/
LinkedIn: https://www.linkedin.com/in/troyhunt/
X: https://x.com/troyhunt
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What does the future of digital experiences look like when AI, accessibility, and entrepreneurship collide?
On this episode of Digital Disruption, we’re joined by serial tech entrepreneur, accessibility advocate, and co-founder of Global Accessibility Awareness Day (GAAD), Joe Devon.
As Chair of the GAAD Foundation, Joe strives to disrupt the culture of technology and digital product development by embedding accessibility as a core requirement. Inspired by his 2011 blog post highlighting the need for mainstream accessibility knowledge among developers, GAAD has grown into an annual event observed on the third Thursday of May, promoting digital access and inclusion for over one billion people with disabilities worldwide. He also co-hosts the Accessibility and Gen.AI Podcast, exploring the intersection of accessibility and artificial intelligence.
Joe sits down with Geoff to explore how AI startups are reshaping the digital landscape, from code accessibility to the rise of small business innovation. He shares the story of how one blog post led to a global accessibility movement, why AI-driven tools could either democratize or centralize technology, and how the entrepreneurial spirit will define the next decade. From robotics fused with large language models to AI coding assistants generating billions of lines of code, this conversation dives into the challenges, risks, and opportunities for entrepreneurs and digital leaders navigating this transformation.
In this episode:
00:00 Intro
00:23 The “ChatGPT moment” for robotics
01:23 The mission behind Global Accessibility Awareness Day
02:29 How AI shifts the accessibility conversation
03:08 Why accessibility matters for everyone
06:17 Usability and empathy in digital product design
12:28 How AI can unlock inclusion and personalization
14:45 Aphantasia, hyperphantasia & diverse human abilities
17:34 AI and the future of sign language translation
19:23 How to work with disability communities
23:59 Advice for leaders getting started with inclusive design
25:13 AI coding tools revolutionizing software development
29:27 Can AI accessibility become the new standard?
30:06 How GAAD became a global movement
35:24 Entrepreneurship vs. 9-to-5 in an AI-powered economy
45:07 Lessons from the early internet and RSS’s decline
47:04 The debate on Universal Basic Income (UBI)
50:26 Joe’s father’s influence and the accessibility journey
52:34 From a blog post to real change in banking
57:13 The rise of AI influencers vs. the value of real humans
58:36 Advice for those unsure about entrepreneurship
Connect with Joe:
LinkedIn: https://www.linkedin.com/in/joedevon/
X: https://x.com/joedevon
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What happens when AI becomes as good at thinking as humans, and what skills remain uniquely ours?
On this episode of Digital Disruption, we’re joined by Mike Bechtel, Chief Futurist at Deloitte.
Mike began his career at Accenture Labs, where his team helped Fortune 500 clients put emerging technologies to profitable use. Twelve years and twelve U.S. patents later, he was named the firm’s first Global Innovation Director, tasked with creating the strategy, processes, and culture to foster company-wide intrapreneurship. At Deloitte, Mike and his team focus on making sense of what’s new and next in technology, with the goal of helping today’s leaders arrive at their preferred futures ahead of schedule. He also serves as an adjunct professor at the University of Notre Dame, where he teaches corporate innovation. In 2013, Mike co-founded and served as Managing Director of Ringleader Ventures, a venture capital firm investing in early-stage startups that had—intentionally or not—built simple solutions to complex corporate challenges.
Mike sits down with Geoff to talk about the future of AI and what it means for our work, creativity, and humanity. He shares an optimistic vision of a world where automation elevates human potential, allowing people to focus on creativity, innovation, and connection. This conversation challenges how we think about AI, AI art, the future of work, and the role of human skills. Mike draws on his experience advising leaders and students to show how we can prepare for a future where AI isn’t just a tool but a partner in thinking. He shares insights on the ethical challenges of outsourcing thought, why intent matters in how organizations use AI, and why AI is less about replacing jobs and more about automating the “muck” so humans can focus on the magic. Mike also challenges us to consider: what happens to writing, philosophy, and ethics when machines can master technical tasks faster than ever?
In this episode:
00:00 Intro
01:00 Why AI isn’t new (but why this moment matters)
04:10 Automation as a Trojan Horse for elevation
05:30 Best practices get automated, next practices get built
09:50 Expertise vs. curiosity: Why human skills win
15:00 From fear to opportunity
17:00 What clients really want in the AI era
19:20 Automating muck to unlock magic
22:00 The revenge of the humanities & synthetic thinking
23:00 Writing = thinking: Why we can’t outsource human thought
36:15 Why people still matter
42:00 Beyond AI: Blockchain, trust & cryptographic futures
47:30 Deepfakes, truth, and the math we can trust
1:05:50 The future of IT
1:08:00 AI reshaping corporate teams & skills
1:14:40 Guidance for the next generation
1:19:30 Staying ahead of AI
1:21:45 Key takeaways
Connect with Mike:
Website: https://mikebech.tel/bio
LinkedIn: https://www.linkedin.com/in/mikebechtel/
X: https://x.com/mikebechtel
Visit our website: https://www.infotech.com/?utm_source=youtube&utm_medium=social&utm_campaign=podcast
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Which strategies separate the AI startups that thrive from those that die?
Today on Digital Disruption, we’re joined by Jeremiah Owyang, a venture capitalist at Blitzscaling Ventures.
Jeremiah, a longtime Silicon Valley native, leads investments in early-stage AI startups at Blitzscaling Ventures. He focuses on startups with the potential for rapid scale and enduring leadership in valuable markets. He also organizes the popular Llama Lounge: The AI Startup Event Series. As a speaker, Jeremiah forecasts how early-stage technology will reshape business and society and advises audiences on how to turn disruption into advantage.
Jeremiah sits down with Geoff to unpack the reality of Silicon Valley’s AI gold rush. With more than 38,000 AI startups competing for attention, thin technical moats, and the looming threat of consolidation, founders face more pressure than ever to stand out. He shares insider insights on why most AI startups are vulnerable to being wiped out by a single update from OpenAI or Google, which industries and roles are most at risk of automation, and why critical thinking remains one of humanity’s most valuable advantages. The conversation also explores how Gen Alpha, the first AI-native generation, will grow up and work differently than any before, as well as the rise of AI agents and their impact on everyday work.
In this video:
00:00 Intro
01:45 The AI startup explosion
04:30 Why most AI startups will fail without real advantages
07:10 One OpenAI update could destroy your startup overnight
10:25 Thin technical advantages and the search for moats
13:40 What VCs look for in AI startups
17:05 The rise of Lean AI startups
20:15 AI agents and the automation of entry-level jobs
23:50 The skills humans still need
27:10 Gen Alpha: The first AI-native workforce
30:20 AI’s role in corporate strategy and decision-making
34:00 The risks of over-reliance on AI in business
37:25 From gold rush to shakeout
41:10 How CEOs can future-proof their companies in the AI era
45:30 Humanoid robots, agents, and the next wave of disruption
49:15 Winners and losers in the AI economy
Connect with Jeremiah:
Website: https://web-strategist.com/blog/about/
LinkedIn: https://www.linkedin.com/in/jowyang/
Instagram: https://www.instagram.com/jowyang/
X: https://x.com/jowyang
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What risks come with AI systems that can lie, cheat, or manipulate?
Today on Digital Disruption, we’re joined by Dr. Ayesha Khanna, CEO of Addo AI.
Dr. Khanna is a globally recognized AI expert, entrepreneur, and CEO of Addo, helping businesses leverage AI for growth. With 20+ years in digital transformation, she advises Fortune 500 CEOs and serves on global boards, including Johnson Controls, NEOM Tonomus, and L’Oréal’s Scientific Advisory Board. A graduate of Harvard, Columbia, and the London School of Economics, she spent a decade on Wall Street advising on information analytics. A thought leader in AI, Dr. Khanna has been recognized as a groundbreaking entrepreneur by Forbes, named to Edelman’s Top 50 AI Creators (2025), and featured in Salesforce’s 16 AI Influencers to Know (2024). Committed to diversity in tech, she founded the charity 21C Girls, which taught thousands of students the basics of AI and coding in Singapore, and currently provides scholarships for mid-career women through her education company Amplify.
Ayesha sits down with Geoff to discuss how artificial intelligence is disrupting industries, reshaping the economy, and redefining the future of jobs. This conversation explores why critical thinking will be the most important skill in an AI-driven workplace, how businesses can use AI to scale innovation instead of getting stuck in “pilot purgatory,” and what risks organizations must prepare for, including bias, data poisoning, cybersecurity threats, and manipulative reasoning models. Ayesha shares insights from her work with governments and Fortune 500 companies on building national AI strategies, creating governance frameworks, and balancing innovation with responsibility. The conversation dives into how AI and jobs intersect, whether automation will replace or augment workers, and why companies need to focus on growth, reskilling, and strategic automation rather than layoffs. They also discuss the rise of the Hybrid Age, where humans and AI coexist in every part of life, and what it means for society, relationships, and the global economy.
In this video:
00:00 Intro
00:43 The future of AI and the next 5 years
02:16 The biggest AI risks
05:25 Fake alignment & governance
09:08 Why AI pilots fail
15:30 What successful companies do
23:14 AI and jobs: Automation, reskilling, and why critical thinking matters most
29:39 The Hybrid Age
37:09 AI and society: relationships with AI, human agency, and ethical concerns
46:13 Global AI strategies
54:00 Overhyped narratives and what people get wrong about AI and jobs
56:27 The skills gap opportunity
58:31 The importance of risk frameworks, critical thinking, and optimism
Connect with Dr. Khanna
Website: https://www.ayeshakhanna.com/
LinkedIn: https://www.linkedin.com/in/ayeshakhanna/
X: https://x.com/ayeshakhanna1
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Can automation and critical thinking coexist in the future of education and work?
Today on Digital Disruption, we’re joined by Bryan Walsh the Senior Editorial Director at Vox.
At Vox, Bryan leads the Future Perfect and climate teams and oversees the podcasts Unexplainable and The Gray Area. He also serves as editor of Vox’s Future Perfect section, which explores the policies, people, and ideas that could shape a better future for everyone. He is the author of End Times: A Brief Guide to the End of the World (2019), a book on existential risks including AI, pandemics, and nuclear war, though, as he notes, it’s not all that brief. Before joining Vox, Bryan spent 15 years at Time magazine as a foreign correspondent in Hong Kong and Tokyo, an environment writer, and international editor. He later served as Future Correspondent at Axios. When he’s not editing, Bryan writes Vox’s Good News newsletter and covers topics ranging from population trends and scientific progress to climate change, artificial intelligence, and, on occasion, children’s television.
Bryan sits down with Geoff to discuss how artificial intelligence is transforming the workplace and what it means for workers, students, and leaders. From the automation of entry-level jobs to the growing importance of human-centered skills, Bryan shares his perspective on the short- and long-term impact of AI on the economy and society. He explains why younger workers may be hit hardest, how education systems must adapt to preserve critical thinking, and why both companies and governments face tough choices in managing disruption. This conversation highlights why adaptability and critical thinking are becoming the most valuable skills and what governments and organizations can do to reduce the social and economic strain of rapid automation.
In this video:
00:00 Intro
01:20 Early adoption of AI: Hype vs. reality
02:16 Automation pressures during economic downturns
03:08 The struggle for new grads entering the workforce
04:37 Is AI wiping out entry-level jobs?
05:40 Why younger workers may be hit hardest
06:28 No clear answers on AI disruption
08:19 The paradox of AI: productivity gains vs. job losses
14:30 Critical thinking, education, and the future of learning
18:00 How AI reshapes global power dynamics
31:57 The workplace of the future: skills that matter most
44:03 Regulation, politics, and the AI economy
48:19 AI, geopolitics, and risks of global instability
57:33 Who bears responsibility for minimizing disruption?
59:01 Rethinking identity beyond work
1:00:22 Journalism in the AI era: threat or amplifier?
Connect with Bryan:
Website: https://www.vox.com/authors/bryan-walsh
LinkedIn: https://www.linkedin.com/in/bryan-walsh-9881b0/
X: https://x.com/bryanrwalsh
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Are we heading toward an AI-driven utopia, or just another tech bubble waiting to burst?
Today on Digital Disruption, we’re joined by Dr. Emily Bender and Dr. Alex Hanna.
Dr. Bender is a Professor of Linguistics at the University of Washington, where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.
Dr. Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California, Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time.
Dr. Bender and Dr. Hanna sit down with Geoff to discuss the realities of generative AI, Big Tech power, and the hidden costs of today’s AI boom. Artificial intelligence is everywhere, but how much of the hype is real, and what’s being left out of the conversation? This discussion dives into the social and ethical impacts of AI systems and why popular AI narratives often miss the mark. Dr. Bender and Dr. Hanna share their thoughts on the biggest myths about generative AI, why we need to challenge them, and the importance of diversity, labor, and accountability in AI development. They answer questions such as where AI is really heading, how we can imagine better, more equitable futures, and what technologists should be focusing on today.
In this video:
0:00 Intro
1:45 Why language matters when we talk about “AI”
4:20 The problem with calling everything “intelligence”
7:15 How AI hype shapes public perception
10:05 Separating science from marketing spin
13:30 The myth of AGI: Why it’s a distraction
16:55 Who benefits from AI hype?
20:20 Real-world harms: Bias, surveillance & labor exploitation
24:10 How data is extracted & who pays the price
28:40 The invisible labor behind AI systems
32:15 Diversity, power, and accountability in AI
36:00 Why focusing on “doom scenarios” misses the point
39:30 AI in business and risks leaders should actually care about
43:05 What policymakers should prioritize now
47:20 The role of regulation in responsible AI
50:10 Building systems that serve people, not profit
53:15 Advice for CIOs and tech leaders
55:20 Gen AI in the workplace
Connect with Dr. Bender and Dr. Hanna
Website: https://thecon.ai/authors/
Dr. Bender LinkedIn: https://www.linkedin.com/in/ebender/
Dr. Hanna LinkedIn: https://www.linkedin.com/in/alex-hanna-ph-d/
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What does the future of AI assistants look like and what’s still missing?
Today on Digital Disruption, we’re joined by Adam Cheyer, Co-Founder of Siri.
Adam is an inventor, entrepreneur, engineering executive, and a pioneer in AI and human-computer interfaces. He co-founded or was a founding member of five successful startups: Siri (sold to Apple, where he led server-side engineering and AI for Siri), Change.org (the world’s largest petition platform), Viv Labs (acquired by Samsung, where he led product engineering and developer relations for Bixby), Sentient (massively distributed machine learning), and GamePlanner.AI (acquired by Airbnb, where he served as VP of AI Experience). Adam has authored more than 60 publications and 50 patents. He graduated with highest honors from Brandeis University and received the “Outstanding Master’s Student” award from UCLA’s School of Engineering.
Adam sits down with Geoff to discuss the evolution of conversational AI, design principles for next-generation technology, and the future of human–machine interaction. They explore the future of AI, augmented reality, and collective intelligence. Adam shares insider stories about building Siri, working with Steve Jobs, and why today’s generative AI tools like ChatGPT are both amazing and frustrating. Adam also shares his predictions for the next big technological leap and how collective intelligence could transform how we solve humanity’s most difficult challenges.
In this video:
0:00 Intro
1:08 Why today’s AI both amazes and frustrates
3:50 The 3 big missing pieces in current AI systems
8:28 What Siri got right and what it missed
11:30 The “10+ Theory”: Paradigm shifts in computing
14:18 Augmented Reality as the next big breakthrough
19:43 Design lessons from building Siri
25:00 Iteration vs. first impressions: How to launch transformational products
30:20 Beginner, intermediate, and expert user experiences in AI
33:40 Will conversational AI become like “Her”?
35:45 AI maturity compared to the early internet
37:34 Magic, technology, and creating “wow” moments
43:55 What’s hype vs. what’s real in AI today
47:01 Where the next magic will happen: AR & collective intelligence
50:51 The role of DARPA, Stanford, and government funding in Siri’s success
54:49 Advice for leaders building the future of digital products
57:13 Balance the hype
Connect with Adam:
Website: http://adam.cheyer.com/site/home?page=about
LinkedIn: https://www.linkedin.com/in/adamcheyer/
Facebook: https://www.facebook.com/acheyer
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Check out other episodes of Digital Disruption: https://youtube.com/playlist?list=PLIImliNP0zfxRA1X67AhPDJmlYWcFfhDT&feature=shared
Are we ready for a future where human and machine intelligence are inseparable?
Today on Digital Disruption, we’re joined by best-selling author and founding partner of the digital strategy firm Future Point of View (FPOV), Scott Klososky.
Scott’s career has been built at the intersection of technology and humanity; he is known for his visionary insights into how emerging technologies shape organizations and society. He has advised leaders across Fortune 500 companies, nonprofits, and professional associations, guiding them in integrating technology with strategic human effort. A sought-after speaker and the author of four books, including Did God Create the Internet?, Scott continues to help executives around the world prepare for the digital future.
Scott sits down with Geoff to discuss the cutting edge of human-technology integration and the emergence of the “organizational mind.” What happens when AI no longer supports organizations but becomes a synthetic layer of intelligence within them? He talks about real-world examples of this transformation already taking place, reveals the ethical and existential risks AI poses, and offers practical advice for business and tech leaders navigating this new era. The conversation dives deep into everything from autonomous decision-making to AI regulation and digital governance, and Scott breaks down the real threats of digital reputational damage, AI misuse, and the growing surveillance culture we’re all a part of.
In this episode:
00:00 Intro
00:24 What is an ‘Organizational Mind’?
03:44 How fast is this becoming real?
05:00 Early insights from building an organizational mind
07:02 The human brain analogy: AI mirrors us
08:12 What does it mean for AI to “wake up”?
09:51 AI awakening without consciousness
11:03 Should we be worried about conscious AI?
11:59 Accidents, bad actors, and manipulation
15:42 Can we prevent these AI risks?
18:28 Regulatory control and the role of governments
20:03 Cat and Mouse: Can AI hide from auditors?
23:02 The escalating complexity of AI threats
27:00 Will nations have organizational minds?
29:12 Autonomous collaboration between AI nations
35:36 Bringing AI tools together
36:31 Knowledge, agents, personas & oversight
40:11 Why early adopters will have the edge
41:00 Are we in another AI bubble?
45:01 Scott’s advice for business & tech leaders
47:12 Why use-cases alone aren’t enough
Connect with Scott:
LinkedIn: https://www.linkedin.com/in/scottklososky/
X: https://x.com/sklososky
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is this a wake-up call for anyone who believes the dangers of AI are exaggerated?
Today on Digital Disruption, we’re joined by Roman Yampolskiy, a leading writer and thinker on AI safety and an associate professor at the University of Louisville. He was recently featured on podcasts such as Joe Rogan’s PowerfulJRE.
Roman is a leading voice in the field of Artificial Intelligence Safety and Security. He is the author of several influential books, including AI: Unexplainable, Unpredictable, Uncontrollable. His research focuses on the critical risks and challenges posed by advanced AI systems. A tenured professor in the Department of Computer Science and Engineering at the University of Louisville, he also serves as the founding director of the Cyber Security Lab.
Roman sits down with Geoff to discuss one of the most pressing issues of our time: the existential risks posed by AI and superintelligence. He shares his prediction that AI could lead to the extinction of humanity within the next century. They dive into the complexities of this issue, exploring the potential dangers that could arise from both AI’s malevolent use and its autonomous actions. Roman highlights the difference between AI as a tool and as a sentient agent, explaining how superintelligent AI could outsmart human efforts to control it, leading to catastrophic consequences. The conversation challenges the optimism of many in the tech world and advocates for a more cautious, thoughtful approach to AI development.
In this episode:
00:00 Intro
00:45 Dr. Yampolskiy's prediction: AI extinction risk
02:15 Analyzing the odds of survival
04:00 Malevolent use of AI and superintelligence
06:00 Accidental vs. deliberate AI destruction
08:10 The dangers of uncontrolled AI
10:00 The role of optimism in AI development
12:00 The need for self-interest to slow down AI development
15:00 Narrow AI vs. Superintelligence
18:30 Economic and job displacement due to AI
22:00 Global competition and AI arms race
25:00 AI’s role in war and suffering
30:00 Can we control AI through ethical governance?
35:00 The singularity and human extinction
40:00 Superintelligence: How close are we?
45:00 Consciousness in AI
50:00 The difficulty of programming suffering in AI
55:00 Dr. Yampolskiy’s approach to AI safety
58:00 Thoughts on AI risk
Connect with Roman:
Website: https://www.romanyampolskiy.com/
LinkedIn: https://www.linkedin.com/in/romanyam/
X: https://x.com/romanyam
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
As AI becomes more capable, how should our social systems evolve in response?
Today on Digital Disruption, we’re joined once again by Zack Kass, an AI futurist and former Head of Go-To-Market at OpenAI. As a leading expert in applied AI, he harnesses its capabilities to develop business strategies and applications that enhance human potential.
Zack has been at the forefront of AI and played a key role in early efforts to commercialize AI and large language models, channeling OpenAI’s innovative research into tangible business solutions. Today, Zack is dedicated to guiding businesses, nonprofits, and governments through the fast-changing AI landscape. His expertise has been highlighted in leading publications, including Fortune, Newsweek, Entrepreneur, and Business Insider.
Zack sits down with Geoff to explore the philosophical implications of AI and its impact on everything from nuclear war to society’s struggle with psychopaths and humanity itself. This conversation raises important questions about the evolving role of AI in shaping our world and the ethical considerations that come with it. Zack discusses how AI may empower low-resource bad actors, transform local communities, and influence future generations. The episode touches on a wide range of themes, including the meaning of life, AI’s role in global conflict, its effects on personal well-being, and the societal challenges it presents. This conversation isn’t just about AI; it’s about humanity’s ongoing exploration of fear, freedom, happiness, and the future.
In this episode:
00:00 Intro
00:21 AI's exponential growth and speed of change
02:03 The expanding scientific frontier
03:19 Roger Bannister effect and AI inspiration
04:00 Societal vs. technological thresholds
06:00 The danger of low-resource bad actors
09:00 Psychopaths, crime, and the role of policy
12:00 Freedom vs. security
14:45 The risk of bias and broken justice systems
18:00 The role of AI in decision-making
20:00 Why we tolerate human error but not machine error
20:36 Breaking the fear cycle in a negative attention economy
22:12 Tech-driven optimism
23:55 Finding happiness
25:32 Community, nature, and meaningful human connection
27:00 The problem with the “more is more” mindset
28:30 Narratives, new media, and information overload
31:09 The power of local change and good news
33:06 Gen Z, Gen Alpha, and the next wave of innovation
Connect with Zack:
Website: https://zackkass.com/
X: https://x.com/iamthezack
LinkedIn: https://www.linkedin.com/in/zackkass/
YouTube: https://www.youtube.com/@ZackKassAI
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What role should government, regulation, and society play in the next chapter of Big Tech and AI?
Today on Digital Disruption, we’re joined by Pulitzer Prize–winning investigative reporter, Gary Rivlin.
Gary has been writing about technology since the mid-1990s and the rise of the internet. He is the author of AI Valley and 9 previous books, including Saving Main Street and Katrina: After the Flood. His work has appeared in the New York Times, Newsweek, Fortune, GQ, and Wired, among other publications. He is a two-time Gerald Loeb Award winner and former reporter for the New York Times. He lives in New York with his wife, theater director Daisy Walker, and two sons.
Gary sits down with Geoff to discuss the unchecked power of Big Tech and the evolving role of AI as a political force. From the myth of the benevolent tech founder to the real-world implications of surveillance, misinformation, and election interference, he discusses the dangers of unregulated tech influence on policy and the urgent need for greater transparency, ethical responsibility, and accountability in emerging technologies. This conversation highlights the role of venture capital in fueling today’s tech giants, what history tells us about the future of digital disruption, and whether regulation can truly govern AI and platform power.
In this episode:
00:00 Intro
02:45 The early promise of Silicon Valley
06:30 What changed in tech: From innovation to power
10:55 The role of venture capital in shaping Big Tech
15:40 Tech disruption vs. systemic control
20:15 The shift from public good to private gain
24:50 How Big Tech wields power over democracy
29:30 Can AI be regulated in time?
33:45 Lessons from tech history
38:20 Government’s role in tech oversight
43:05 Gary’s thoughts on tech accountability
47:30 Future risks of an unchecked tech industry
51:10 Hope for the next generation of innovators
55:00 Tech is at the center of politics
58:00 What should change?
1:09:00 Journalists using AI are more powerful
Connect with Gary:
Website: https://garyrivlin.com/
LinkedIn: https://www.linkedin.com/in/gary-rivlin/
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
In a world of rising cyber threats, what keeps the CIA’s former head of cybersecurity up at night?
Today on Digital Disruption, we’re joined by Andy Boyd, former Head of the CIA’s Center for Cyber Intelligence.
Andy was a Senior Intelligence Service officer in the Central Intelligence Agency’s Directorate of Operations (DO). His most recent assignment was Director of the CIA’s Center for Cyber Intelligence (CCI), which is responsible for intelligence collection, analysis, and operations focused on foreign cyber threats to US interests. Andy has experience leading worldwide intelligence operations and has in-depth knowledge of geopolitics, cyber operations, security practices, and risk mitigation.
Andy sits down with Geoff to discuss the future of cybersecurity in a rapidly evolving digital world. With decades of experience in cyber intelligence, Andy explains how global threats are evolving, from traditional espionage to AI-driven cyberattacks and disinformation. He dives into how intelligence agencies like the CIA assess and respond to state-sponsored cyber threats from China and Russia, and why the private sector is now a primary target. Andy breaks down how emerging technologies like generative AI are changing both offensive and defensive cyber strategies, and what this means for governments, businesses, and people. Andy also shares how one of the world’s leading professional services firms is navigating this new landscape, using culture, data, and innovation to stay ahead of cyber risks.
In this episode:
00:00 Intro
02:45 What the CIA's Cyber Intelligence Center actually does
05:30 Leading transformation across a global enterprise
07:20 Evolution of cyber threats from nation-states
08:15 Building trust and transparency with business stakeholders
11:10 The critical role of data in decision-making
13:00 How the CIA detects and responds to cyber attacks
17:05 Creating a culture of innovation and adaptability
17:45 The private sector as a frontline target
20:40 How Aon is approaching talent and upskilling
23:10 Offensive cyber operations: how far should the U.S. go?
27:30 Key leadership lessons and advice for future CIOs
29:50 China's cyber capabilities vs. Russia's tactics
35:25 The role of intelligence in election security
40:50 Why disinformation is more dangerous than hacking
45:30 How AI is transforming cyber espionage
50:10 What keeps Andy Boyd up at night
54:40 The importance of public awareness and resilience
Connect with Andy:
LinkedIn: https://www.linkedin.com/in/andrew-g-boyd-12194673/
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What if you could control technology using only your thoughts?
Today on Digital Disruption, we’re joined by an expert in brain-computer interfaces (BCIs), Tan Le.
Tan is the founder and CEO of EMOTIV, a Silicon Valley-based company pioneering EEG-based BCI technology. Her work centers on non-invasive “brainwear” that enables direct interaction between the human brain and computers. Tan is an advocate for democratizing neurotechnology to empower individuals, researchers, and organizations to drive innovation. In February 2020, she published her first book, The NeuroGeneration: The New Era of Brain Enhancement Revolutionizing the Way We Think, Work and Heal.
Tan sits down with Geoff to talk about how her company is making it possible to connect your brain directly to digital systems, no hype, just science. From decoding mental commands to enhancing human cognition, they dive into the ethical challenges of reading brain data, what it really means to give technology access to your mind, and why non-invasive headsets are reshaping human-computer interaction.
In this episode:
00:00 Intro
03:00 Tan Le’s background
06:00 What is a brain-computer interface (BCI)?
09:00 The current state of BCI in 2025
12:00 Non-invasive vs. implantable tech
15:00 How BCIs read brain signals
18:00 Real-world applications: Healthcare and beyond
21:00 Consumer use cases and accessibility
24:00 The role of AI in brain signal interpretation
27:00 Ethics of brain data and consent
30:00 Mental wellness and performance insights
33:00 Government and regulatory perspectives
36:00 EMOTIV’s vision and tech stack
39:00 Human enhancement and neuroplasticity
42:00 Risks and misconceptions around BCI
45:00 Collaborations and research partnerships
48:00 Global adoption trends
51:00 Tan Le’s advice to future innovators
54:00 Predictions for the next 10 years
Connect with Tan:
Website: https://www.emotiv.com/
LinkedIn: https://www.linkedin.com/in/tanle/
X: https://x.com/TanTTLe
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
What if your data worked for you and not the platforms controlling it?
Today on Digital Disruption, we’re joined by John Bruce, CEO and Co-Founder of Inrupt.
With a background as both a founder and an executive at global tech firms, John Bruce is uniquely qualified to help engineer the next phase of the web alongside his co-founder Sir Tim Berners-Lee. He brings to bear decades of successful business leadership and experience creating new markets around innovative software. Prior to partnering with Tim, he was the co-founder and CEO of Resilient, now an IBM company, which developed a new approach to cybersecurity. Through Resilient and four other successful startups, John has experienced first-hand the strategic challenges that the current structure of the web causes for users, developers, and organizations around the world.
John Bruce sits down with Geoff Nielson to talk about a future where individuals, not platforms, own their data. John shares how AI, consent-driven data sharing, and a decentralized digital wallet called Charlie could fundamentally reshape how we interact with technology, institutions, and each other. He explains why we must reclaim personal data from tech giants, what “agentic wallets” are, and how they work.
In this video:
0:00 Intro
1:25 Rebuilding the Web
3:30 From Tim Berners-Lee to today
5:10 Data ownership vs. data surveillance
7:00 Moving from platforms to people
9:15 What is an agentic AI wallet?
11:00 Why consent must be baked into AI and data flows
13:45 Use cases in healthcare, government & enterprise
16:10 “Decentralized” doesn’t mean disorganized
18:30 What leaders get wrong about data control
20:45 Enterprise integration
23:00 The ROI of giving users control of their own data
25:30 Why this moment feels like the early days of the web
27:00 What’s next for Inrupt, Solid, and the Internet itself
29:00 How we rebuild digital trust
31:00 Inrupt's vision beyond 2030
34:00 Partnering with institutions to scale Solid
37:00 Global digital identity and governance challenges
40:00 Building public trust in data ecosystems
43:00 A non-linear view of it all
Connect with John:
Website: https://www.inrupt.com/about
LinkedIn: https://www.linkedin.com/in/johnwbruce/
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is your business ready for a world where AI agents act, adapt, and make decisions for you?
Today on Digital Disruption, we’re joined by Global Chief AI Engineer at PwC, Scott Likens.
Scott Likens serves as the Chief AI Engineer at PwC, overseeing both the Global and U.S. teams. He leads the AI Engineering and Emerging Technology R&D groups, driving the firm’s strategy across AI, blockchain, VR, quantum computing, and other disruptive technologies. With over 30 years of experience in emerging tech, Scott has helped clients across industries transform their customer experience, digital strategy, and operations. He began his career in software engineering during the early days of the internet, working with major multinationals to apply a localized lens to global digital and innovation trends. Scott’s diverse technical background spans advanced analytics, digital architecture, AI engineering, and innovation. During his time at PwC, he has lived and worked in both China and the U.S., serving as a global technology leader and advisor to key clients. He is a regular speaker at international conferences on emerging technologies, including AI and generative AI, blockchain and crypto, IoT, quantum computing, and advanced robotics.
Scott Likens sits down with Geoff Nielson for a look into what’s actually happening across the front lines of AI and innovation. Scott shares insights from the edge of tech, from AI agents and embodied intelligence to quantum computing and synthetic identities. He explains why most enterprise AI efforts fail to scale, how to think in innovation “horizons,” and what separates real value from hype. He also touches on how holographic AI and digital twins are already reshaping communication, and on the skills and structures shaping the IT organization of the future.
In this video:
0:00 Intro
1:55 GenAI hype vs. real value in the enterprise
4:20 Embodied AI and the rise of holographic humans
6:00 Multilingual synthetic avatars
7:30 Deepfakes, trust & the role of blockchain in authentication
9:00 Responsible AI
12:15 Innovation is moving faster than trust
14:00 Speed or scale?
16:00 Defining true innovation vs. incremental tech
18:00 A framework for emerging tech
20:30 From quantum to satellites: What’s next
23:00 Digital twins, IoT, and bipedal robotics
25:30 AI at the edge
28:45 AI agents in action
30:20 Legacy system modernization without rewriting code
34:00 Enterprise use cases
36:30 What business leaders get wrong about tech
39:00 Moving from pilot projects to organization-wide impact
42:30 Balancing speed, risk & innovation in enterprise AI
44:00 How PwC enables innovation without losing control
47:00 Why “waiting” is not an AI strategy
48:15 The most important investment is your workforce
50:00 Upskilling, hiring, and culture shift at scale
52:00 Quantum, cryptography & the real threat timeline
54:30 What’s next for leaders and innovators
Connect with Scott:
LinkedIn: https://www.linkedin.com/in/scottlikens/
X: https://x.com/ScottLikens
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Is the Metaverse dead? Why is OpenAI building jewelry? What happens when AI becomes more emotionally present than people?
Today on Digital Disruption, we’re joined by CEO of Future Dynamics and author, Cathy Hackl.
Cathy is a globally recognized tech and gaming executive, futurist, and keynote speaker specializing in spatial computing, AI, virtual worlds, and gaming platform strategy. She is the co-CEO of Future Dynamics, a spatial computing and AI solutions firm, and a top LinkedIn tech voice. Known as the “Godmother of the Metaverse,” she created the Tech Intimacy Scale and is currently researching the intersection of AI, love, and relationships. Cathy has held leadership roles at Amazon Web Services, Magic Leap, and HTC VIVE, and has guided major brands like Nike, Walmart, Ralph Lauren, Louis Vuitton, and Clinique through their emerging tech and gaming strategies. She has spoken at events hosted by Harvard Business School, MIT, CES, SXSW, and the World Economic Forum. Named one of Ad Age’s Leading Women of 2023 and featured on Forbes Latam’s cover for its 100 Most Powerful Women issue, Hackl is also listed among Vogue Business's 100 Innovators. She hosts Adweek’s TechMagic podcast and contributes to Vogue Singapore. In 2022, she made history as the first human to ring the NASDAQ opening bell both physically and in avatar form on live TV.
Cathy Hackl sits down with Geoff Nielson for an honest conversation about where technology is headed and what’s really happening with spatial computing, AI hardware, and the future of human connection. Cathy unpacks the evolution of the metaverse and why she believes we’re moving toward something bigger: the spatial web. She shares her first-hand experience with Google Beam, a revolutionary 3D communication technology that doesn’t require a headset. This episode dives into OpenAI’s push into hardware, why it’s a data play, and what that means for your privacy. It also explores emotional technologies like Apple Vision Pro and what they mean for memory, grief, and connection, as well as the future of dating and relationships in a world filled with AI agents and romantic chatbots.
In this episode:
0:00 Intro
1:00 Is the Metaverse dead or just renamed?
3:00 Google Beam: 3D communication without a headset
5:00 The Apple moment: reaching for a virtual object
7:00 Dating, job interviews & presence in 3D
9:00 Will this replace video calls?
11:00 Apple Vision Pro
13:00 Memory preservation & future family photos
15:00 OpenAI’s hardware push
17:00 AI Agents and who controls your data
20:00 From ChatGPT to therapy bots
22:00 Emotional manipulation, mental health & AI advice
24:00 Who owns the virtual air around you?
27:00 Virtual real estate, annotations & air rights
30:00 The battle for our senses
35:00 Tech that arrived too early
37:00 Why dating in 2D doesn’t work in a 3D world
39:00 Spatial computing in creative industries
41:00 Where tech meets intimacy, memory & legacy
44:00 The real use case: Human connection
47:00 The future of emotional presence and what’s at stake
49:00 Getting it right matters
Connect with Cathy:
LinkedIn: https://www.linkedin.com/in/cathyhackl/
Instagram: https://www.instagram.com/cathyhackl/
Facebook: https://www.facebook.com/HacklCathy
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Can generative AI help us close the gap between expertise and access?
This week on a special episode of Digital Disruption, we're joined by New York Times best-selling author Malcolm Gladwell, recorded live in Las Vegas at Info-Tech Research Group's LIVE tech conference.
Malcolm Gladwell is the author of eight New York Times bestsellers, including his latest, Revenge of the Tipping Point. Named one of TIME’s 100 Most Influential People and one of Foreign Policy’s Top Global Thinkers, he is renowned for his unique perspective on the forces shaping human behavior and society. An extraordinary speaker, Gladwell combines eloquence, warmth, and humor to both entertain and challenge audiences. Through masterful storytelling, he unpacks complex and often misunderstood ideas, from decision-making in Blink and the roots of success in Outliers, to our underestimation of adversity in David and Goliath, and the missteps we make when interacting with strangers in Talking to Strangers.
Malcolm Gladwell sits down with Geoff Nielson for an engaging conversation on the future of AI, the power of storytelling, and the evolving forces that shape society. From AI’s role in closing the expertise gap to how unexpected narratives drive lasting cultural change, Gladwell offers his signature perspective: thoughtful, contrarian, and always surprising. He talks about why the most transformative uses of AI may be the simplest, how generative tools can elevate human capability, and why culture never changes in ways we expect. Malcolm provides insight into how the media, brands, and politics are changing and what that could mean for leadership, while touching on the surprising truth about misinformation, expertise, and AI as a corrective tool.
In this episode:
0:00 Intro
0:24 AI's biggest promise? Strengthening weak links
1:33 AI in developing vs developed countries
2:12 Should AI replace or empower teachers?
3:13 Closing the expertise gap with AI
4:18 AI as a safe place to learn without embarrassment
5:08 The human side of AI
5:32 AI in surprising places
6:15 Malcolm’s personal use of AI
7:57 Where Malcolm finds ideas and why AI can’t replicate them
9:52 Why creativity can’t be automated
12:42 Will AI ever replace storytellers and thinkers?
14:24 Paul Simon’s genius explained
17:27 What makes a great story? Tesla example
22:26 Are people the new brand? Apple vs Chevy
25:13 The power of "overstories" in shaping behavior
26:57 We live in multiple narratives at once
29:26 Can organizational culture be changed?
30:37 How cultural narratives evolve
31:37 Politics, power, and social media’s new role
35:07 We’re still figuring out what tech is for
38:10 Why AI disruption might not be as bad as we fear
39:34 Is Malcolm an optimist or just realistic?
41:22 Can AI restore trust in expertise?
43:43 The power of narrative over facts
44:15 Poking the bear
Connect with Malcolm:
Website: https://www.gladwellbooks.com/
X: https://x.com/Gladwell
Instagram: https://www.instagram.com/malcolmgladwell/
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG
Can your organization survive its own hesitation to take bold bets?
Today on Digital Disruption, we’re joined by former Amazon exec and bestselling author, John Rossman.
John is an author, business advisor, and keynote speaker. He was an early Amazon executive who played a key role in launching the Amazon marketplace business in 2002. He has served as the senior technology advisor at the Gates Foundation and senior innovation advisor at T-Mobile. His books include “The Amazon Way” and “Think Like Amazon.” His new book, Big Bet Leadership: Your Transformation Playbook for Winning in the Hyper-Digital Era, is an actionable guide for leaders who want to succeed in complex transformations.
John sits down with Geoff to unpack why most change initiatives fall short, and what leaders can do to shift the odds in their favor. Looking back on his experience launching Amazon Marketplace and advising top organizations, he shares strategies for scaling effectively, leading through change, and building resilience in today’s digital environment. John explains why traditional transformation efforts often fail, the issues leaders come across, and how to adopt a system of risk-smart decision-making and drive meaningful change. Get ready to challenge assumptions, cut through the hype, and transform the way you lead.
In this episode:
00:00 Intro
01:07 Solving hard problems with an integrated mindset
03:07 Why most companies struggle with change
04:21 The 3 megatrends disrupting business
06:32 Why back-office productivity must change
07:22 Why past winners are at risk of losing
09:36 Why big bets often fail
13:30 The three habits of big bet leaders
16:42 Why innovation labs often fail
21:39 Why leaders must design decision points intentionally
25:03 The power of a clear “Big Bet Vector”
31:27 What extreme accountability really means
36:08 Should we abandon silos?
40:25 Advice for how CIOs can unlock progress despite technical debt
46:08 How CIOs can win
51:55 Why change can’t just be an operator’s job
53:01 The Big Bet Playbook explained
56:27 Active skepticism
Connect with John:
Website: https://johnrossman.com/
LinkedIn: https://www.linkedin.com/in/john-rossman/
X: https://x.com/johnerossman
Visit our website: https://www.infotech.com/
Follow us on YouTube: https://www.youtube.com/@InfoTechRG