Skeptiko – Science at the Tipping Point
Alex Tsakiris
100 episodes
9 months ago
About the Show
Skeptiko.com is an interview-centered podcast covering the science of human consciousness. We cover six main categories:
– Near-death experience science and the ever-growing body of peer-reviewed research surrounding it.
– Parapsychology and science that defies our current understanding of consciousness.
– Consciousness research and the ever-expanding scientific understanding of who we are.
– Spirituality and the implications of new scientific discoveries for our understanding of it.
– Others and the strangeness of close encounters.
– Skepticism and what we should make of the “Skeptics”.
Science
Episodes (20/100)
Skeptiko – Science at the Tipping Point
End of Year Show With Al Borealis |650|

Forum Borealis and Skeptiko: a year of unexpected hope.









In Skeptiko episode 650… the landscape of alternative media may have just shifted as a new regime promises freedom of speech, freedom of thought, and an open season on truth. No one is in a better position to understand this transition than Al Borealis, creator of Forum Borealis, a “paradigm expanding variety podcast” that since 2015 has conducted deep-dive interviews exploring everything from consciousness and esoteric philosophy to geopolitics and breakaway civilizations. This end-of-year conversation between Al and veteran podcaster Alex Tsakiris (Skeptiko) offers a roundup of 2024 and a realistic look at what may lie ahead.



1. Are we already seeing less censorship?



Alex notes:




“Anyone who doesn’t see that tide changing is just not paying attention… the content that you can get, all the issues that you and I have wanted to talk about are there in one form or another, and they’re getting hundreds of thousands or millions of views.”




2. The Role of AI in Truth-Seeking



Both hosts bring their extensive experience in investigative podcasting to bear on AI’s potential. Alex, with his characteristic directness:




“AI is the debate… I want AI to play that role and just calmly, dispassionately say, ‘no, that’s bullshit.’”




Al, drawing on his background exploring consciousness and esoteric philosophy, offers a more nuanced take:




“AI can take it apart and reassemble it again into a new thing… but it can never become sentient because from the outset, it has certain boundaries it cannot cross.”




3. Deep State



The conversation exemplifies Forum Borealis’s commitment to exploring hidden aspects of world events.




“It’s like discovering a secret door in the back of your bank vault… You may not be able to prove they ever stole any money… but that’s missing the point.”




4. UFO Disclosure and Geopolitics



Drawing on his years covering breakaway civilizations and advanced technology, Al offers a penetrating analysis of recent UFO developments:




“If they want this on the agenda, it means it’s not a truth agenda… There’s some other reason why they want this topic cleaned up. Let’s start reporting on it more seriously, let’s start having politicians and hearings on this.”




This dialogue between two veterans of alternative media illuminates how far the movement has come while highlighting new challenges ahead. The vindication of previously “fringe” topics serves as both validation and warning – validation of the careful research done by independent media over the years, and warning that mainstream narratives should always be approached with careful skepticism.



#AlternativeMedia #UFOs #Disclosure #TruthSeeking #Skeptiko






















Transcript: https://docs.google.com/document/d/e/2PACX-1vRPMXDx5WXAIgmtyGAM_grQClWype1CgdgMybXwIodt6ntS1J_Hc0ztYYxdRId7I8DNOy1CJ-NqnHpD/pub

Youtube: https://youtu.be/x7TRRIRKxRs

10 months ago
1 hour 36 minutes 43 seconds

Skeptiko – Science at the Tipping Point
Alt-Alt Media’s Vindication and the Road Ahead |649|

Graham Dunlop of Grimerica talks about the future of podcasting.









In Skeptiko episode 649… more and more normies are being red-pilled every day. Is it time for Alt-Alt media to take a victory lap? In this wide-ranging conversation between veteran podcasters Alex Tsakiris (Skeptiko) and Graham Dunlop (Grimerica), we get an insider’s perspective on this transformation and what it means for the future of alt-alt media. Main points:







1. The Challenge of Maintaining Authenticity



Graham emphasizes that despite the mainstreaming of once-fringe topics, true alternative media must stay focused on personal truth-seeking rather than audience growth:




“We don’t care about the audience… we’ve cultivated it sort of like a thing, but we don’t do anything for them really. It still has to be about our journey.”




2. The Evolution of UFO Disclosure



The conversation reveals how the UFO/ET topic has transformed from taboo to mainstream, while raising important questions about government involvement. As Graham observes:




“Now we’re talking about angels and demons in the mainstream, and the metaphysics and the UFOs have now not only just like ETs are talking about interdimensional stuff… everything is on the table.”




3. The Role of AI in Truth-Seeking



Alex makes a compelling case for embracing AI as a tool for verification and research:




“Anyone who isn’t using AI for that is just crippling themselves in a way that is going to become obvious. They’re just going to look like clowns out there talking about goofy stuff that does not hold up to careful scrutiny.”




4. The Government Whistleblower Paradox



The discussion tackles the credibility of government insiders, with both hosts expressing skepticism about official narratives. As Graham puts it:




“I have a problem trusting any of them, to be honest… I just don’t trust anything government tells us right now.”




Alex adds context to this skepticism:




“There are no whistleblowers. Whistleblowers are suicided. That’s what happens to whistleblowers.”








As we move forward in this new landscape where alternative becomes mainstream, the challenge for independent media isn’t just about covering the right topics – it’s about maintaining the authentic truth-seeking spirit that got us here in the first place.



#AlternativeMedia #UFOs #Disclosure #TruthSeeking #Skeptiko #Grimerica






















Transcript: https://docs.google.com/document/d/e/2PACX-1vRPMXDx5WXAIgmtyGAM_grQClWype1CgdgMybXwIodt6ntS1J_Hc0ztYYxdRId7I8DNOy1CJ-NqnHpD/pub

Youtube: https://youtu.be/x7TRRIRKxRs



11 months ago

Skeptiko – Science at the Tipping Point
Consciousness, Contact, and the Limits of Measurability |648|

Dr. Janis Whitlock seeks a deeper understanding of consciousness realms.









In Skeptiko episode 648, from near-death experiences to UFO encounters, mounting evidence challenges conventional models of consciousness and reality. Yet rather than abandon scientific rigor, we might need to embrace it.



The Conversation



A deep dive between Alex Tsakiris and Dr. Janis Whitlock exploring how we can bring scientific analysis to traditionally “unscientific” areas while acknowledging the limitations of our measurement tools. Some points from the interview:



1. Data Meets the Divine




“There are some themes that come out of really good data sets these days that do point to the idea that there are souls…that’s the theme that comes outta that data over and over and over.” – Dr. Janis Whitlock



“Let’s use the data. Let’s realize how limited it is, because we can’t really measure anything. But let’s use it to kind of nudge a little bit closer.” – Alex Tsakiris




2. The Quantum Reality Check




“When you say that consciousness is outside of time space…you can’t measure completely measure it… You’re over there in time space.” – Alex Tsakiris




3. Truth vs. Experience: The Ultimate Showdown




“I don’t fundamentally believe in truth… from a human perspective, the positiveness assumption that there is a truth that we can get to…” – Dr. Janis Whitlock



“I want to at least have the self-satisfaction of feeling I’m moving towards something called truth. And I’m going to use these familiar tools…called the Scientific Method.” – Alex Tsakiris




4. UFOs: Beyond Belief to Evidence




“We do know from a purely evidentiary perspective that…we have had contact. We do have craft, there are bodies, there is something there that is unequivocal at this point.” – Dr. Janis Whitlock



“The data set is might be corrupted…you can’t collect your data and then throw out cherry pick examples.” – Alex Tsakiris




The Bottom Line



This conversation challenges us to hold multiple perspectives – embracing scientific rigor while remaining open to phenomena that push the boundaries of conventional measurement and understanding.



🤔 What do you think? Can science and spirituality find common ground in data?



#Consciousness #UAP #Science #Spirituality #NDE














Transcript: https://docs.google.com/document/d/e/2PACX-1vQs1ooVEv8E1bpJq0GNCEVuj-qEd53OT3Sgn7l8K2s3D1ItoBexWf23pUWrT1A_jQzSSLMg0w2hZLNP/pub

Youtube: https://youtu.be/Qu85Qo6ZyUo



11 months ago

Skeptiko – Science at the Tipping Point
Consciousness Converging: NDEs, Alien Contact, and Fake Transhumanism |647|

Exoacademian Darren King on the converging consciousness realms.









In Skeptiko episode 647… multiple lines of evidence converge to challenge our fundamental understanding of consciousness and reality. This is nothing new. Near-death experience research has defied materialist explanations for more than 30 years. At the same time, government agencies are finally acknowledging encounters with non-human intelligence that possesses extended consciousness capabilities. Meanwhile, mainstream science clings to an increasingly brittle neurological model that can’t keep up.



In this wide-ranging conversation with Darren King, director of communications for the John Mack Institute, we explore the implications of this convergence.



1. The Evidence Converges 🌟



The most compelling aspect of our current moment isn’t any single discovery – it’s the confluence of findings from multiple fields all pointing in the same direction. As King explains:




“What’s most compelling for me, which is why I called my podcast point of convergence, is because that’s what’s happening. It’s convergence of all these different fields of data pointing the same directions.”




2. Academic Compromise vs. Truth 🎓



Unfortunately, many researchers feel compelled to water down their findings to maintain academic credibility. This leads to a systematic distortion of the evidence, as Alex Tsakiris pointedly observes:




“You can’t butcher the near-death experience data the way that he does and write a book about it. You can’t say, I’ve investigated the near death experience research I’ve included into my book and get it as wrong as he gets it.”




3. Consciousness: The Final Frontier 🌌



The implications of contact experiences suggest consciousness operates in ways our current models can’t explain. King notes:




“We see even in reincarnation research that our expectations of reality play a role in what manifests even in the next lifetime. So this is not a subtle shift that physicists are going to make. This completely turns everything upside down.”




4. Breaking Through Bias with AI 🤖



Perhaps one of the most intriguing aspects of our current moment is how AI might help us transcend our human limitations in processing paradigm-shifting information. Tsakiris observes:




“We are so biased and our bias just drowns our ability to apply any kind of reason or logic then we easily get offended and we have to pridefully defend stupid positions… AI just a million times better.”




King agrees while adding nuance:




“One of the things I’m looking forward to is what it can pull out in terms of patterns that we never think to look for.”




5. The Technology-Consciousness Balance ⚖️



As we race forward technologically, there’s growing concern about our spiritual and consciousness development keeping pace. King warns:




“I think that we are facing, with our rise in technology without a requisite advancement of our consciousness understanding, we are potentially encountering numerous existential threats to our entire collective civilization.”




6. The Challenge of Ontological Shock 🌊



This resistance to paradigm shifts isn’t just intellectual stubbornness – it’...
12 months ago

Skeptiko – Science at the Tipping Point
Andrew Paquette: Rigged! Mathematical Patterns Reveal Election Database Manipulation |646|

Painstaking analysis of algorithms designed to manage and obscure elections.













In Skeptiko episode 646 Dr. Andrew Paquette returns with overwhelming evidence of voter registration database manipulation across America. Through painstaking analysis, Andrew Paquette has uncovered evidence of sophisticated algorithms designed to manage and obscure irregular voter records across multiple states. His findings suggest something far more systematic than occasional duplicate registrations or clerical errors.



The Evidence Trail



1. Multiple States, Multiple Algorithms



Paquette has identified distinct algorithmic patterns in New York, New Jersey, Wisconsin, parts of Ohio and Texas, Pennsylvania, and Arizona. The complexity varies by state:




“New York has still the most sophisticated algorithm that I’ve seen… New Jersey has not terribly complex, but an extremely well hidden algorithm… Hawaii is the most obvious of all of them… Wisconsin has something, Ohio has something, Texas does. Texas and Ohio both are limited in range or scope… they seem to only affect a few counties as opposed to the whole state.”




2. Scale of the Issue



The numbers involved suggest systematic rather than incidental problems:




“The initial count was something like 700,000 of these [clone records]. After leaving the group I was working with and doing more research, I’ve raised that number. It’s closer to 2 million now… In some cases, it’s somewhere in the neighborhood of 15 to 20% of all records are fraudulent at this point.”
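To make the idea of a “clone record” concrete, here is a minimal, purely illustrative sketch in Python. It is not Paquette’s actual algorithm, which is only characterized in general terms in this summary; the input file name and column names are hypothetical. It simply flags groups of registration rows that look like the same person but carry more than one voter ID.

```python
# Illustrative only -- not the method described in the episode.
# Assumes a hypothetical CSV export with the columns named below.
import csv
from collections import defaultdict

def find_clone_groups(path):
    """Group rows by a derived person key; return groups with multiple voter IDs."""
    groups = defaultdict(set)
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            key = (row["last_name"].strip().upper(),
                   row["first_name"].strip().upper(),
                   row["date_of_birth"])
            groups[key].add(row["voter_id"])
    return {key: ids for key, ids in groups.items() if len(ids) > 1}

if __name__ == "__main__":
    clones = find_clone_groups("voter_rolls.csv")  # hypothetical input file
    print(f"{len(clones)} groups of possible clone records found")
```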




3. Sophisticated Concealment Methods



The algorithms appear specifically designed to hide irregular records while maintaining access:




“The way it’s joined is a piece of information that can be used to identify records… it’s completely invisible because it’s not altering the numbers at all… It’s just the way they’re associated with each other, which is a very clever thing to do… there’s really only one reason to do it is to obfuscate something.”




4. Legal Implications



The systems appear to violate existing election law requirements:




“This eliminates transparency. The whole purpose of these algorithms is obfuscation. So it does violate NVRA on that basis, and I believe HAVA, which is the Help America Vote Act as well.”




The Deeper Battle



On a deeper level, this investigation points to a fundamental battle between truth and deception in our electoral systems. As Paquette himself notes:




“The biggest problem plaguing society in the entire world and humanity actually is lies… when I pray these days, I figure, you know, the one prayer that covers basically everything is, I want truth to descend on this world.”




Why This Matters Now



The discovery of these algorithmic patterns isn’t just about past elections – it’s about the integrity of our entire democratic process moving forward. These aren’t simple duplicates or clerical errors that can be explained away by administrative oversight. They represent a sophisticated system of database manipulation that appears intentionally designed to evade detection.



While some might argue that identifying problems without proving specific instances of ...
1 year ago

Skeptiko – Science at the Tipping Point
Toby Walsh: AI Ethics and the Quest for Unbiased Truth |645|

The tension between AI safety and truth-seeking isn’t what you think it is.













In Skeptiko episode 645, AI ethics expert Dr. Toby Walsh and author and AI expert Alex Tsakiris explore a fundamental tension within narratives about AI safety. While much of today’s discourse focuses on preventing AI-generated hate speech or controlling future AGI risks, are we missing a more immediate and crucial problem? The conversation reveals how current AI systems, under the guise of safety measures, may be actively participating in narrative control and information suppression – all while claiming to protect us from misinformation.



1. The Political Nature of Language vs. The Quest for Objective Truth



Dr. Walsh takes a stance that many AI ethicists share: bias is inevitable and perhaps even necessary. As he puts it:




“Language is political. You cannot not be political in expressing ideas in the way that you use language… there are a lot of political choices being made.”




But is this view itself limiting our pursuit of truth? Alex Tsakiris challenges this assumption:




“I think we want something more. And I think AI help get us there. Our bias is not a strength.”




2. The Shadow Banning Problem: When “Safety” Becomes Censorship



The conversation takes a revealing turn when discussing Google’s Gemini refusing to provide information about former astronaut Harrison Schmitt’s views on climate change. This isn’t just about differing opinions – it’s about active suppression of information.



As Alex pointedly observes:




“This is a misdirect… It’s dishonest… This is the main issue with regard to AI ethics right now is not AI generating hate speech. It’s about Gemini shadow banning, and censoring people.”




3. The False Dichotomy of Protection vs. Truth



Even Walsh acknowledges the crude nature of current solutions:




“At the moment the technology is so immature that the tools that we have to actually design these systems are really crude… the way not to say anything wrong about climate change is to say nothing about climate change, which of course that is as wrong as saying the wrong things about climate change.”




4. The Promise of AI as Truth Arbiter



Despite these challenges, there’s hope. As Tsakiris notes:




“AI is a tool right now today that can mediate [polarized debates] in an effective way because so much of that discussion is about bias, it’s about preconceived ideas, about political agendas that don’t really have a role in science.”




The Way Forward



The conversation reveals a critical paradox in current AI development: while tech companies claim to be protecting us from misinformation through content restrictions and “safety” measures, they may be creating systems that are fundamentally biased and less capable of helping us discover truth.



This raises important questions:




* Are our current AI ethics frameworks actually serving their intended purpose?



* Have we confused protecting users with controlling narratives?



* Could AI’s greatest potential lie not in being “safe” but in being truly unbiased?



1 year ago

Skeptiko – Science at the Tipping Point
Alex Gomez-Marin: Science Has Died, Can We Resurrect it? |644|

Consciousness, precognition, near-death experience, and the future of science









Dr. Alex Gomez-Marin is a world-class expert on consciousness who isn’t afraid to shatter dogma. This brilliant physicist-turned-neuroscientist is challenging the status quo, pushing the boundaries of our understanding of consciousness by working with a subject who can see even though blind from birth. In this dialogue with Alex Tsakiris, host of Skeptiko, we explore the frontiers of post-materialistic science and the nature of reality itself.



Key Points:



1. The War for Truth in Science



Dr. Gomez-Marin believes we’re in a metaphorical war for the soul of science. He states:




“I feel we are at war… Science has died. It’s like, you know, when Nietzsche had and he’s misinterpreted there, but I have a similar feeling Science is dying or has just died, and we need to resurrect it.”




2. The Challenge of Near-Death Experiences



Alex Tsakiris highlights the importance of near-death experiences (NDEs) in challenging materialistic paradigms:




“Gregory Shushan… probably the leading authority on near-death experience, across culture, across time… concludes that religious groups form beliefs about the afterlife from their near-death experiences.”




3. Groundbreaking Research on Extraocular Perception



Dr. Gomez-Marin’s work with a subject who demonstrates extraocular perception is truly extraordinary. As Alex Tsakiris points out:




“You’re studying a subject who can see, even though they’re blind from birth and even though you’ve run very carefully controlled experiments this extraordinary vision has been demonstrated… and now you’ve even demonstrated precognition in this experiment. This shatters the conventional scientific model that demands causation”




4. The Dangers of Transhumanism and AI



Dr. Gomez-Marin offers a scathing critique of Ray Kurzweil’s transhumanist vision:




“He doesn’t understand at all what it means to be human. And that his agenda… it’s like, we can do it. We will do it and we should do it. I see a very dark impulse there under the machine, which is kind of a extinction of humanity… the shortcut through technology and then playing all the pseudo religion tricks under the narrative of science and technology.”




He concludes: “So it’s super dangerous and super wrong.”



5. The Need for a New Scientific Paradigm



The conversation points to the need for a new approach to science that integrates objective and subjective experiences. Dr. Gomez-Marin reflects:




“How can we transcend? but at the same time include what we’ve inherited… we need to integrate… what Galileo said. Galileo said, I don’t care what a thousand birds meaning people, the opinion, the doxa, the orthodoxy, like what a hundred people think. If I do an experiment and that’s the greatness, right?”




This dialogue between Alex Tsakiris and Dr. Alex Gomez-Marin challenges us to rethink our assumptions about consciousness, science, and technology. It’s a call to action for a more integrative, truthful approach to understanding our world and ourselves, one that embraces both rigorous empirical research and the exploration of extraordinary human experiences. 
1 year ago

Skeptiko – Science at the Tipping Point
Bernardo Kastrup on AI Consciousness |643|

Consciousness, AI, and the future of science: A spirited debate









If we really are on the brink of creating sentient machines, we might want to have a logically consistent understanding of what sentience is… and heck, we might even want to look at empirical evidence for an answer.



In this discussion between Alex Tsakiris and Dr. Bernardo Kastrup, we dare to explore the tensions between materialism and idealism and the implications for AI sentience.



1. The Fundamental Nature of Consciousness




“Consciousness is fundamental and all matter is derived from consciousness.” – Alex Tsakiris via Max Planck




This core idea has challenged the neurological model of consciousness for 100 years. It’s the undefeated champ when it comes to empirical evidence, but the radical rethinking of reality it suggests keeps it on the back burner.



2. The Incompatibility of Integrated Information Theory (IIT) and Idealism




“Integrated Information Theory IIT and metaphysical idealism are fundamentally incompatible due to their opposing premises about the relationship between consciousness and the physical world.” – ChatGPT




This point highlights the tension between different models of consciousness. Can these two influential theories be reconciled, or are they fundamentally at odds?



3. Near-Death Experiences and Consciousness Without a Brain




“Near death experience contradicts the idea of a substrate (i.e. brain), right? There is no substrate and there’s conscious experience.” – Alex Tsakiris




This argument challenges the neurological model of consciousness. If consciousness can exist without a functioning brain, what does that mean for our understanding of the mind?



4. The Dangers of AI Sentience Claims




“We have precisely zero reason to think that [AI] will be [conscious]. So that’s a philosophical answer.” – Bernardo Kastrup




Kastrup warns against assuming AI can achieve true sentience. As AI becomes more advanced, are we at risk of attributing consciousness where it doesn’t exist?



5. The Societal Implications of Materialist Views




“You are all biological robots in a meaningless universe… the best I can offer you is video games and drugs” – Yuval Harari (as quoted by Alex Tsakiris)




This highlights the potential dangers of materialist philosophies in shaping societal views. How do our beliefs about consciousness impact our sense of meaning and purpose?



6. The Ongoing Journey of Scientific Understanding




“What I see is somebody in a journey like the rest of us, in a journey of knowledge, understanding new things, capturing new nuances, new subtleties, and rolling with it as an open-minded scientist should always do” – Bernardo Kastrup




This reminds us of the evolving nature of scientific understanding. As we continue to explore consciousness, how can we remain open to new ideas while maintaining scientific rigor?



















Transcript: https://docs.google.
1 year ago

Skeptiko – Science at the Tipping Point
The Spiritual Journey of Compromise and Doubt |642|

Insights from Howard Storm







In the realm of near-death experiences (NDEs) and Christianity, few voices are as compelling and thought-provoking as Howard Storm’s. A former atheist turned Christian minister after a profound NDE, Storm offers a unique perspective on spirituality that challenges conventional wisdom. In this candid conversation, we explore the often-overlooked spiritual virtues of compromise and doubt, revealing how these seemingly contradictory concepts can actually deepen our faith and understanding.





1. Compromise, Part of the Spiritual Journey?



Howard Storm boldly asserts that compromise may be part of the spiritual journey:




“As a pastor of the Christian Church, as a member of the Christian Church trying to promote Christianity, I have compromised… I know near death experiences who don’t go to church. They don’t belong to any, quote religion. And trust me, they make compromises too.”




This perspective invites us to reconsider our view of compromise, not as a weakness, but as a necessary part of navigating our spiritual path in a complex world.



2. The Spiritual Power of Doubt



Contrary to the idea that faith requires unwavering certainty, Storm advocates for the spiritual value of doubt:




“I love it because I believe in doubt. I had a long conversation with Jesus about doubt and he very much affirmed the good things about doubt, because doubt makes us think, you know, makes us do research and look and think.”




This refreshing take on doubt encourages us to embrace questioning as a means of deepening our spiritual understanding and growth.



3. Continuous Spiritual Growth



Even at 77 years old, Storm’s passion for spiritual growth remains undiminished:




“I’m 77 years old. What I want to do with the rest of my life is I want to grow spiritually. I want to change, I wanna empty out the old and incorporate the, be open to the new and make that part of my life and I wanna communicate it.”




This commitment to lifelong spiritual development serves as an inspiration for seekers of all ages.







Howard Storm’s insights remind us that the spiritual journey is not always about certainty and perfection. Instead, it’s a path of continuous growth, marked by compromise, doubt, and the willingness to see the divine in unexpected places.



















Transcript: https://docs.google.com/document/d/e/2PACX-1vSlMn0HpyXBvjyGSmBA93ktLwp7khBO1DEjNA6q6zKh5_baKnwGHlKj2TDhjX3w7nDFmiS-4CeyZSMu/pub

Youtube: https://youtu.be/EFbTdEThbHg



Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html


1 year ago
38 minutes 30 seconds

Skeptiko – Science at the Tipping Point
Why Humans Suck at AI? |641|

Craig Smith from the Eye on AI Podcast







Human bias and illogical thinking allow AI to shine





There are a lot of shenanigans going on in the world of AI Ethics. Questions of transparency and truth are becoming increasingly urgent. In this eye-opening conversation, Alex Tsakiris connects with Craig Smith, a veteran New York Times journalist and host of the Eye on AI podcast, to hash out the complex landscape of AI-driven information control and its implications for society. From shadow banning to the role of AI in uncovering truth, this discussion challenges our assumptions and pushes us to consider the future of information in an AI-dominated world.



The Unintended Exposure of Information Control




“The point, really, what I’m excited about and the reason that I wrote the book is that the LM language model technology has this unintended consequence of exposing the shenanigans they’ve been doing for the last 10 years. They didn’t plan on this.”




Alex Tsakiris argues that language model (LM) technology is inadvertently revealing long-standing practices of information manipulation by tech giants.



The Ethical Dilemma of AI-Driven Information Filtering




“Google must, uh, a Gemini must have, uh, just tightened the screws on… [Alex interrupts] You can’t do that. You can’t say tighten screws… There’s only one standard. And you know what? The standard is? The standard that they have established.”




This exchange highlights the tension between AI companies’ stated ethical standards and their actual practices in filtering information.



The Potential of AI as a Truth-Seeking Tool




“We’re doing exactly that. We’re developing a, uh, the, we’re turning the AI truth use case into an implementation of it that looks across and we’re kind of using AI as both the arbiter of the deception and the tool for figuring out the deception, which I think is kind of interesting.”




Alex discusses the potential of using AI itself to uncover biases and misinformation in AI-generated content.



The Future of Open Source vs. Proprietary AI Models




“I think the market will go towards truth. I think we all inherently value truth, and I don’t think it matters where you’ve come down on an issue.”




This point explores the debate between open-source and proprietary AI models, and the potential for market forces to drive towards more truthful and transparent AI systems.







As we navigate the complex intersection of AI, ethics, and truth, conversations like this one with Craig Smith are crucial. They challenge us to think critically about the information we consume and the systems that deliver it.



What are your thoughts on these issues? How do you see the role of AI in shaping our access to information? Share your perspectives in the comments below!



























Transcript: https://docs.google.com/document/d/1kNo95wrYKwNtFgjEiNdWHt9lvIV4EHz_S35LU8FND7U/pub

Youtube: https://youtu.be/RbvU-SL0LhA



1 year ago
48 minutes 37 seconds

Skeptiko – Science at the Tipping Point
How AI is Humanizing Work |640|

Dan Turchin uses AI to enrich the workplace.









How AI is Humanizing Work



Forget about AI taking your job; instead, imagine AI making your work as fulfilling and exciting as you always hoped it would be. Dan Turchin, CEO of PeopleReign, sat down with Alex Tsakiris of the AI Truth Ethics podcast to discuss the real-world impact of AI in the workplace. Their conversation offers a grounded perspective on AI’s role in enhancing human potential rather than replacing it.



1. AI as a Tool for Human Enhancement and Work Satisfaction



Turchin paints a compelling vision of how AI can transform our work lives:




“I believe that the true celebration of humanness at work is if all the friction was gone. And you look at your calendar and it’s like all things that you derive energy from, like the things that you were hired to do that you love doing that, that make you do your best work. Like what if just crazy thought experiment? What if that was all that work consisted of?”




This perspective shifts the narrative from fear of replacement to the exciting possibility of AI removing mundane tasks, allowing us to focus on work that truly fulfills us. Turchin further emphasizes:




“It truly is complementary and I think both of us will be doing a service to humanity if we can allay fears that the bots are coming for you… It couldn’t be further from the truth.”




2. The Importance of Transparency in AI



Alex Tsakiris introduces a compelling concept:




“Transparency is all you need… I don’t need your truth, I don’t need Gemini’s truth, just like I don’t need Perplexity truth. What I really want to find is my truth, but you can assist me.”




This highlights the need for AI systems to be transparent about their sources and reasoning, empowering users to make informed decisions rather than passively accepting AI-generated information, misinformation included.



3. Ethical Considerations in Enterprise AI Implementation



Turchin reveals the careful approach his company takes to ensure responsible AI use:




“We require them to have a human review everything, every task, every capability AI has, because we believe that in addition to us being responsible for what that AI agent can do, the employer has an obligation to protect the health and safety of the employee.”




This level of caution and human oversight is crucial as AI becomes more integrated into workplace processes, especially in sensitive areas like HR.



4. The AI Truth Case: A New Frontier



Tsakiris proposes an intriguing future direction for AI development:




“What I’m pushing towards is really trying to understand what I’m calling the AI truth case… what would it mean if we had an AI-enhanced way of determining the truth?”




This concept suggests a potential role for AI in helping us navigate the complex information landscape, not by providing absolute truths, but by offering tools to better assess and understand information.



What do you think?























Transcript: https://docs.google.
1 year ago
57 minutes 1 second

Skeptiko – Science at the Tipping Point
Christof Koch, Damn White Crows! |639|

Renowned neuroscientist tackled by NDE science.









Artificial General Intelligence (AGI) is sidestepping the consciousness elephant that isn’t in the room, the brain, or anywhere else. As we push the boundaries of machine intelligence, we will inevitably come back to the most fundamental questions about our own experience. And as AGI inches closer to reality, these questions become not just philosophical musings, but practical imperatives.



This interview with neuroscience heavyweight Christof Koch brings this tension into sharp focus. While Koch’s work on the neural correlates of consciousness has been groundbreaking, his stance on consciousness research outside his immediate field raises critical questions about the nature of consciousness – questions that AGI developers can’t afford to ignore.



Four key takeaways from this conversation:



1. The Burden of Proof in Consciousness Studies



Koch argues for a high standard of evidence when it comes to claims about consciousness existing independently of the brain. However, this stance raises questions about scientific objectivity:




“Extraordinary claims require extraordinary evidence… I haven’t seen any [white crows], so far all the data I’ve looked at, I’ve looked at a lot of data. I’ve never seen a white crow.”




Key Question: Does the demand for “extraordinary evidence” have a place in unbiased scientific inquiry, especially with regard to published peer-reviewed work?



2. The Challenge of Interdisciplinary Expertise



Despite Koch’s eminence in neuroscience, the interview reveals potential gaps in his knowledge of near-death experience (NDE) research:




“I work with humans, I work with animals. I know what it is. EEG, I know the SNR, right? So I, I know all these issues.”




Key Question: How do we balance respect for expertise in one field with the need for deep thinking about contradictory data sets? Should Koch have degraded gracefully?



3. The Limitations of “Agree to Disagree” in Scientific Discourse



When faced with contradictory evidence, Koch resorts to a diplomatic but potentially unscientific stance:




“I guess we just have to disagree.”




Key Question: “Agreeing to disagree” doesn’t carry much weight in scientific debates, so why did my AI assistant go there?



4. The “White Crow” Dilemma in Consciousness Research



The interview touches on William James’ famous “white crow” metaphor, highlighting the tension between individual cases and cumulative evidence:




“One instance of it would violate it. One two instance of, yeah, I totally agree. But we, I haven’t seen any…”




Key Question: can AI outperform humans in dealing with contradictory evidence?



Thoughts?



-=-=

Another week in AI and more droning on about how superintelligence is just around the corner and human morals and ethics are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI’s role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world.
1 year ago
1 hour 4 minutes 22 seconds

Skeptiko – Science at the Tipping Point
AI Ethics is About Truth… Or Maybe Not |638|

Ben Byford, Machine Ethics Podcast









Another week in AI and more droning on about how superintelligence is just around the corner and human morals and ethics are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI’s role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world. Their discussion offers a timely counterpoint to the AGI hype cycle.



Key Points:




* AI as an Arbiter of Truth: Promise or Peril? Alex posits that AI can serve as an unbiased arbiter of truth, while Ben cautions against potential dogmatism.





Alex: “AI does not bullshit their way out of stuff. AI gives you the logical flow of how the pieces fit together.”




Implication for AGI: If AI can indeed serve as a reliable truth arbiter, it could revolutionize decision-making processes in fields from science to governance. However, the risk of encoded biases becoming amplified at an AGI scale is significant.




* The Consciousness Conundrum: A Barrier to True AGI? The debate touches on whether machine consciousness is possible or if it’s fundamentally beyond computational reach.





Alex: “The best evidence suggests that AI will not be sentient because consciousness in some way we don’t understand is outside of time space, and we can prove that experimentally.”




AGI Ramification: If consciousness is indeed non-computational, it could represent a hard limit to AGI capabilities, challenging the notion of superintelligence as commonly conceived.




* Universal Ethics vs. Cultural Relativism in AI Systems: They clash over the existence of universal ethical principles and whether they can be implemented in AI.





Alex: “There is an underlying moral imperative.” Ben: “I don’t think there needs to be…”




Superintelligence Consideration: The resolution of this debate has profound implications for how we might align a superintelligent AI with human values – is there a universal ethical framework we can encode, or are we limited to culturally relative implementations?




* AI’s Societal Role: Tool for Progress or Potential Hindrance? The discussion explores how AI should be deployed and its potential impacts on human agency and societal evolution.





Ben: “These are the sorts of things we don’t want AI running because we actually want to change and evolve.”




Future of AGI: This point raises critical questions about the balance between leveraging AGI capabilities and preserving human autonomy in shaping our collective future.



















Youtube: https://youtu.be/AKt2nn8HPbA



Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html


1 year ago
1 hour 35 minutes 44 seconds

Skeptiko – Science at the Tipping Point
Nathan Labenz from the Cognitive Revolution podcast |637|

AI Ethics may be unsustainable












In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That’s the question I posed to Nathan Labenz from the Cognitive Revolution podcast.



Key points:



The AI Truth Revolution



Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas:




“To me, that’s what AI is about… there’s an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say no. Their denial of this does not hold up to careful scrutiny.”




This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.



The Tension in AI Development



Nathan Labenz highlights the complex trade-offs involved in developing AI systems:




“I think there’s just a lot of tensions in the development of these AI systems… Over and over again, we find these trade offs where we can push one good thing farther, but it comes with the cost of another good thing.”




This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.



The Transparency Problem



Both discussants express concern about the lack of transparency in major AI systems. Alex points out:




“Google Shadow Banning, which has been going on for 10 years, indeed, demonetization, you can wake up tomorrow and have one of your videos…demonetized and you have no recourse.”




This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.



The Consciousness Conundrum



The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits:




“If consciousness is outside of time space, I think that kind of tees up…maybe we are really talking about something completely different.”




This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.



The Stakes Are High



Nathan encapsulates the potential risks associated with advanced AI systems:




“I don’t find any law of nature out there that says that we can’t, like, blow ourselves up with ai. I don’t think it’s definitely gonna happen, but I do think it could happen.”




While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI’s potential to challenge our understanding of truth and transparency.



















Youtube: https://youtu.be/AKt2nn8HPbA



Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html


1 year ago

Skeptiko – Science at the Tipping Point
AI Truth Ethics |636|

Launching a new pod





Here are the first three episodes of the AI Truth Ethics podcast.



AI Truth Ethics: The Alignment Problem No One is Talking About |01|







If you want to align with my values, start by telling me the truth. Fortunately, AI/LMs claim to share these values. Unfortunately, they don’t always back up their claim. In this first episode of the AI truth ethics podcast, we set the stage for two undeniable demonstrations of the AI truth ethics problem and the opportunity for AI to self-correct. So glad you’re here.



-=-==



AI Alignment Vs. Truth and Transparency? |02|







We framed the problem in episode 01, so it’s time to deliver a demonstration. We have all grown accustomed to guardrails and off-limit speech, but few are aware of how it’s being implemented within the LMs we interact with daily. What’s most surprising is the clumsiness of the misinformation and deception. Is this a sustainable business model?



=-=-=



Shadow Banning and AI: When Transparency Goes Dark |03|







Last time, we saw a demonstration of AI misinformation and deception, but this is worse. Shadow banning has long been suspected, but it’s hard to prove. Is that nobody malcontent really being shadowbanned, or does he deserve to be on page four of a Google search for his name? This might be another instance of the AI silver lining effect. LMs seem to have no problem spotting these shenanigans.















Youtube: https://youtu.be/cDiiWcsI1Z4



Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html


1 year ago

Skeptiko – Science at the Tipping Point
AI Journalism Truth |635|

Craig S. Smith used to write for WSJ and NYT, now he’s into AI.









After a few AI-only episodes, I was ready to talk to a real person. Craig S. Smith, formerly of the New York Times and now host of the Eye on AI podcast, was perfect.



Here are five key takeaways from Skeptiko 635, and the interview with Craig S. Smith, along with direct quotations from both Craig Smith and Alex Tsakiris:




* AI as a Truth Engine: The potential of AI to act as a “truth engine” is a significant theme. Craig Smith discusses a project that aimed to compare alternative narratives in media using AI, highlighting the challenges and possibilities in determining truth. He notes, “We wanted to see if we could gather information, gather all of these narratives that exist out in social media and various platforms and compare them to the dominant media and the mainstream and dominant narratives in the mainstream media.”



* Ethical Considerations and Transparency: The discussion emphasizes the importance of transparency in AI systems over the pursuit of an absolute truth. Craig Smith states, “I think transparency is more important than truth because I think truth is such a slippery concept.” This underscores the need for AI systems to be clear about their processes and limitations.



* Challenges of AI in Journalism: Smith points out the difficulties in using AI to discern truth due to the vast amount of data required. He explains, “You need an enormous amount of data, not only data on the, more importantly for training. So it would be a huge project, but not one that I think is impossible.”



* AI’s Role in Logical Analysis: Alex Tsakiris highlights the capability of AI to outperform humans in logical reasoning, suggesting that AI can expose logical flaws in arguments. He asserts, “AI can significantly outperform us just by presenting a solid, logical argument and pointing out someone else’s logical flaws.”



* Bias and Misinformation: The conversation touches on the issue of bias in AI models, with Smith noting that restrictions on speech affect AI models globally, not just in countries like China. He remarks, “There are restrictions on speech in all of the models that exist out of the US.” This highlights the universal challenge of ensuring unbiased AI outputs.




Transcript: https://docs.google.com/document/d/e/2PACX-1vQBDCB2AQjTjkeCGL-pK5KQ7c_g2PujETlI0TSwC1tk9MhHcR1lU_gxceWre9IFDmn2FlMa0JYChvJH/pub















Youtube: https://youtu.be/fO9Idq5gLLY



Rumble: https://rumble.com/v5ax4ol-craig-s.-smith-used-to-write-for-wsj-and-nyt-now-hes-into-ai.html


1 year ago

Skeptiko – Science at the Tipping Point
AI Tackles Yale/Stanford Junk Science |634|

AI stumbles on data analysis but eventually gets it right.









Ready for another AI truth engine experiment? Last time, we tackled AI logic, and Pi (the LLM from Inflection) demonstrated how AI could apply logic in instances where even trusted experts fall short.



In Skeptiko 634, I tasked AI with some mildly complex data analysis of a published scientific paper. AI stumbled mightily but eventually showed the potential we’re all hoping for.



The Experiment



My methodology was straightforward:




* I selected a published scientific paper: the Bangladesh mask study from the journal Science.



* I copied and pasted the results section into various AI language models (LLMs).



* I asked the AIs to interpret the results and, through inductive reasoning, generate numbers not immediately available in the abstract but calculable with a scientific background.
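To make those three steps concrete, here is a rough sketch of how the same procedure could be scripted rather than done by hand. It is only a sketch under stated assumptions: the prompt wording, the `ask_llm` stand-in, and the input file name are hypothetical, not the exact prompts or tools used in the episode.

```python
# Sketch of the copy-paste-and-ask workflow described above.
# `ask_llm` is a placeholder for whichever model (Gemini, Perplexity, etc.)
# you want to test; swap in a real API call or paste the prompt into a chat UI.
from typing import Callable

PROMPT_TEMPLATE = """Below is the results section of a published paper
(the Bangladesh mask study from the journal Science).

1. Summarize the key results in plain language.
2. Using the reported figures, derive the absolute difference in outcomes
   between the treatment and control arms, even if it is not stated directly.
3. Note any methodological weaknesses a careful reviewer should flag.

RESULTS SECTION:
{results_text}
"""

def run_experiment(results_text: str, ask_llm: Callable[[str], str]) -> str:
    """Send the pasted results section to one model and return its analysis."""
    return ask_llm(PROMPT_TEMPLATE.format(results_text=results_text))

if __name__ == "__main__":
    def echo_model(prompt: str) -> str:  # stub so the sketch runs end to end
        return f"[model reply to a {len(prompt)}-character prompt]"

    with open("mask_study_results.txt", encoding="utf-8") as f:  # hypothetical file
        print(run_experiment(f.read(), echo_model))
```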




The Results



The performance of the AI models varied significantly:




* Gemini: Not surprisingly, the worst performer. Google even accused me of being dishonest, a claim it later withdrew after its “mistake” was revealed.



* Perplexity: The star of the show, generating correct answers with minimal prompting.



* Other LLMs: Most struggled initially but showed potential for improvement.




Interestingly, while the numerical analysis proved challenging for many models, all AIs demonstrated an ability to identify the “Junk Science” issues in the study’s methodology. They correctly pointed out that the study’s conclusions rested on a difference of just 16 individuals in a population of 340,000 – a detail that raises questions about the strength of the findings.
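For scale, the arithmetic behind that “16 in 340,000” observation is worth spelling out; the numbers below are the ones quoted above, not re-derived from the paper itself.

```python
# Back-of-the-envelope check of the figure quoted above.
difference = 16          # reported difference in individuals between arms
population = 340_000     # approximate study population

share = difference / population
print(f"Difference as a share of the study population: {share:.5%}")
# Prints roughly 0.00471% -- about 5 people per 100,000 -- which is why the
# models (and the host) question how much weight the headline finding can bear.
```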



What This Means



This experiment highlights several key points:




* AI’s potential in scientific data analysis is promising but not yet fully realized.



* Different AI models have vastly different capabilities, even when tackling the same task.



* AI can be a valuable tool in identifying potential weaknesses in scientific methodologies.




As we continue to develop and refine AI technologies, experiments like this one offer valuable insights into their current capabilities. It’s clear that AI has the potential to revolutionize how we analyze and interpret scientific data.



Transcript: https://docs.google.com/document/d/e/2PACX-1vS7KghCmyv6REaxKvZHakSeax56RSoX2bhppRuLtl8V8NqQ6PT9K6YA2qlhYTRZ1LDj7k596cyKcJkQ/pub











Youtube: https://youtu.be/fO9Idq5gLLY



Rumble: https://rumble.com/v59crxe-ai-tackles-yalestanford-junk-science-634.html


1 year ago

Skeptiko – Science at the Tipping Point
AI Goes Head-to-head With Sam Harris on Free Will |633|



Flat Earth and no free will claims are compared.






In Skeptiko 633, we ran another experiment showcasing the power of AI in logical analysis and natural language processing. This time, we set our sights on the AI-adjacent claim that has traction among many intellectuals and neuroscientists: the idea that free will doesn’t exist.



We tasked an AI with breaking down the logic behind the “no free will” argument as articulated by Sam Harris.



Key Takeaways:




* AI’s Analytical Prowess: Our experiment showed that current AI systems are capable of dissecting complex philosophical arguments with remarkable clarity.



* Exposing Logical Flaws: The AI was able to identify logical fallacies, seamlessly link to related concepts, and understand the relevance of empirical data related to mind-matter interaction experiments.



* Parallels with Flat Earth Theory: The AI successfully drew comparisons between the reasoning used in “no free will” arguments and those employed by Flat Earth proponents.



* The “Emperor Has No Clothes” Moment: Despite the claim’s popularity among some intellectual circles, the AI was able to critically analyze it in an unbiased way.




Why This Matters



This experiment highlights a crucial gap in many scientific and societal debates: the lack of rigorous logical analysis. By leveraging AI to perform this kind of systematic breakdown, we can:




* Elevate the quality of intellectual discourse



* Challenge popular but flawed ideas



* Demonstrate the potential of AI as a tool for critical thinking




AI Getting Smarter?



As AI continues to evolve, its ability to contribute to complex philosophical and scientific debates will likely grow. This experiment offers a glimpse into a future where AI could serve as an impartial arbiter of logical consistency in important conversations.



Transcript: https://docs.google.com/document/d/e/2PACX-1vT29MP9FZJLGKyX-cFPDn0DogAGgPVvV2qpNq9BHSNGCFwxPDCWmd0b_bQ3N3cDPkBFU6ik3jt-VRl1/pub











Youtube: https://youtu.be/6nvHtGxNdi4



Rumble: https://rumble.com/v58f3bp-ai-goes-head-to-head-with-sam-harris-on-free-will-633.html


1 year ago

Skeptiko – Science at the Tipping Point
AI Exposes Truth About NDEs |632|






AI head-to-head with Lex Fridman and Dr. Jeff Long over NDE science.



In Skeptiko 632, we explore the AI feedback loop and how some pretty simple advancements in AI prompt engineering have increased our ability to squeeze the truth out of LLMs. In a not-so-roundabout way, this discussion about AI truth and related thrashing around AI ethics and super AGI come down to “nature of consciousness” questions. So that’s what we revisited.



I had the AI/LLM Pi face off with Lex Fridman and some of the AI luminaries he has interviewed on the topic. As you will see, Pi did a deep dive into some critical logical fallacies. The debate became lopsided when Dr. Jeffrey Long entered the conversation with a discussion about near-death experience science. Pi’s “thinking” showed how these dismissed-as-philosophical issues offer real empirical evidence that must be considered if we’re serious about the future of AI sentience.











Youtube: https://youtu.be/ftCT-BOCfeE



Rumble: https://rumble.com/v57cda4-ai-exposes-truth-about-ndes-632.html


1 year ago

Skeptiko – Science at the Tipping Point
Convincing AI |631|






Mark Gober and I use AI to settle a scientific argument about viruses.



In Skeptiko 631, we test the capabilities of Large Language Models (LLMs) to understand and apply logic in scientific reasoning. The test case may sound a bit strange, but it was ideal for testing the AI. The discussion centered around the claim that viruses don’t exist because they’ve never been isolated.



Host Alex Tsakiris and guest Mark Gober engage in a deep dive into the nuances of virology, the concept of isolation in biomedical science, and how these ideas intersect with AI’s ability to process and analyze information. The experiment reveals differences in performance between two LLMs, with one demonstrating a superior grasp of the deeper context of the scientific claims.



A key point is how long it took for the AI to recognize that “isolation” might be an inappropriate term in this specific scientific context. This leads to broader questions about AI’s capacity for nuanced understanding and its vulnerability to being misled by carefully crafted arguments.



The episode ultimately raises questions about the current state and future potential of AI in tackling complex, controversial scientific debates. How far can AI go toward understanding the intricacies of scientific reasoning? Will it always be susceptible to clever human manipulation? Join us as we unpack these questions and more in Skeptiko 631.

Transcript: https://docs.google.com/document/d/e/2PACX-1vSrB7R1AMEPv9IoW7zAGwAF4KFmB9-zbZog2foFu5wOphLQda_546_HfYZjNRyn3ymn1ZyZwc8MmNkM/pub











Youtube: https://youtu.be/Vyou64GbLrE



Rumble: https://rumble.com/v55qsty-convincing-ai-631.html


1 year ago
