🎙️ With Roger Spitz – Foresight Strategist, Techistentialist & President of the Disruptive Futures Institute
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Roger Spitz, a global foresight strategist and founder of the Disruptive Futures Institute. Together, they explore the deep relationship between trust, agency, and technology, and why decision-making in complex systems requires more than control.
Roger introduces the concept of techistentialism - a lens through which to understand how humans and algorithms now share the terrain of choice, risk, and consequence. This conversation invites us to rethink the foundations of leadership, the illusion of predictability, and the necessity of awareness, resilience, and anticipatory thinking in an era shaped by disruption.
🔑 Takeaways
Trust is a fundamental need, not a luxury
Agency is the basis for ethical and effective decisions
Technology and human decision-making are inseparable
Awareness of complexity changes how we lead and respond
Anticipatory thinking is essential for navigating uncertainty
Control is incompatible with complex systems
Antifragility allows organizations to benefit from shocks
Agility enables more resilient leadership and communication
AI can enhance trust — if used in the right contexts
Delegating decisions to machines can erode human capacity
🎙️ Sound Bites
"Control is incompatible with complex systems."
"Antifragility means benefiting from shocks."
"Foresight is a form of intelligence."
⏱️ Chapters
00:00 Introduction to Trust and Agency
02:17 The Role of Trust in Decision Making
04:41 Understanding Technology and Trust
08:03 Incorporating Techistentialism in Business
11:24 Awareness, Agency, and Anticipation
17:14 Navigating Control in Complex Systems
23:00 Agility in Communication and Leadership
28:56 The Impact of Technology on Systems
33:49 AI’s Role in Strengthening Trust
39:42 In-Between Moments and Conclusion
🔭 Keywords
trust, agency, technology, decision making, techistentialism, complexity, AI, leadership, communication, resilience
💻 Links
in-between trust on Instagram: @inbetween_trust
More about the Disruptive Futures Institute: 
🎙️ With Monika Jiang – Researcher, Writer & Community Curator
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with researcher and community curator Monika Jiang about the layered relationship between oneliness, trust, and technology. Together, they explore how artificial intimacy, digital environments, and emotional proximity are reshaping the way we connect — with each other and with ourselves.
Monika shares her thinking on the historical roots of oneliness, the limitations of digital intimacy, and what it takes to design communities that truly foster belonging. This episode is an invitation to slow down, listen closely, and rebuild the emotional fabric that trust depends on — across human and digital space.
🔑 Takeaways
Oneliness is a historical concept of interconnectedness
Loneliness and trust are deeply entangled
Technology can create artificial intimacy — not always connection
Trust is built through presence, difference, and shared space
Human relationships require emotional complexity, not convenience
Communities thrive when they embrace tension and paradox
Leaders must create space for difficult emotions
Digital intimacy is real, but different — and needs design
Consistency, not grand gestures, builds trust in community
Trust is a practice of faith, not a checklist
🎙️ Sound Bites
"Trust is a tricky thing."
"Oneliness feels like a living motion."
"Community needs embracing differences."
⏱️ Chapters
00:00 Exploring Oneliness and Trust
05:03 The Impact of Technology on Connection
09:55 Digital vs. Human Connection
14:54 Designing Intimate Communities
19:41 The Role of Leaders in Fostering Trust
24:36 Navigating Truth in Relationships
29:26 The Future of Trust and Community
🔭 Keywords
oneliness, trust, community, technology, connection, loneliness, digital intimacy, leadership, emotional fabric, AI
💻 Links
in-between trust on Instagram: @inbetween_trust
More about the oneliness project: https://www.monikajiang.org
More about Monika Jiang: https://www.linkedin.com/in/monika-jiang/
More about The House of Beautiful Business: https://houseofbeautifulbusiness.com
🎙️ With Andrea Schlüter – Head of Strategy, Operations and Partnerships
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Andrea Schlüter about the evolving relationship between trust and technology, with a special focus on AI systems, certification, and governance. They explore how trust is designed into systems, how regulation can become a foundation for innovation, and the challenges of aligning technical complexity with cultural context. Andrea shares insights from her work at the TÜV AI Lab, where building frameworks for trustworthiness in AI is more than compliance—it’s about shaping the future of safe, transparent, and ethical technology.
🎙️ Sound Bites
"Trust is a lot about consistency."
"Trustworthiness by design is key."
"We need a common language among experts."
⏱️ Chapters
00:00 Understanding Trust in Technology
03:09 The Role of AI in Trustworthiness
06:10 Building Trust through Certification
08:59 Navigating the AI Ecosystem
12:05 The Challenge of Trust in AI Systems
14:48 Cultural Aspects of Trust in AI
17:41 Establishing a Common Language for AI
20:44 The Importance of Diverse Perspectives
23:58 Practical Benefits of AI Taxonomy
26:41 Reflections on Trust and Innovation
🔭 Keywords
trust, technology, AI, certification, trustworthiness, TÜV, AI Act, innovation, ethics, culture
💻 Links
in-between trust on Instagram: @inbetween_trust
More about TÜV AI Lab: https://www.tuev-lab.ai
More about Andrea Schlüter: https://www.linkedin.com/in/andrea-schlueter/
🎙️ With Dr. Simon Walter – Investor, Strategist & Advisor at the Intersection of Startups, Brand, and Innovation
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky is joined by investor and strategist Dr. Simon Walter to explore how trust operates at the core of early-stage investing, brand building, and technology adoption. Together, they reflect on what it takes to back founders when data is scarce, why integrity outlasts business plans, and how transparency, truth, and consistency shape credible brands. The conversation spans everything from free trials and first impressions to AI adoption and perception gaps across generations. At its heart lies the insight that trust is what bridges fast-moving realities and the long-term belief in progress.
🔑 Takeaways
Trust is the most valuable currency in times of uncertainty
Founders' integrity often matters more than any pitch deck
Experience shapes gut instinct — but can also reinforce bias
Branding is ultimately about building long-term trust
Transparency in failure can strengthen brand relationships
Startups build trust through free trials and early credibility
Strong brands are perceived as safer, even without proof
Generational loyalty varies — but shared values matter more
Trust bridges the gap between fast-moving reality and slower truths
🎙️ Sound Bites
"Trust is a personal thing."
"Transparency strengthens trust."
"Perception is reality."
⏱️ Chapters
00:00 – The Value of Trust in Uncertain Times
04:59 – Investing in Startups: Trust and Integrity
09:57 – Branding: Building Trust Through Transparency and Consistency
14:51 – Establishing Trust in New Technologies and Startups
19:31 – Generational Perspectives on Trust and Loyalty
24:48 – Truth vs. Reality: The Role of Perception in Trust
29:18 – Navigating Brand Communication in Times of Doubt
💻 Links
Strategist's Notes // Dr. Simon Walter on Substack: https://drsimonwalter.substack.com
in-between trust on Instagram: https://www.instagram.com/inbetween_trust/
🎙️ With Sergiu Petean – Generative AI & Cloud Expert
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky explores innovation sovereignty with generative AI and cloud expert Sergiu Petean. Their conversation moves through bravery in decision-making, technological sovereignty, and the potential of open-source solutions to drive European collaboration. Together, they reflect on how trust underpins innovation, leadership, and value creation — and why we must treat technology not as a commodity, but as a shared, strategic asset. This episode is a call to act with urgency, integrity, and clarity in shaping the digital and political systems of tomorrow.
🔑 Takeaways
Trust is the foundation of quality in relationships
Building a culture of trust requires hard conversations
Bravery enables authentic decisions and personal growth
Sovereignty is key to innovation and digital self-determination
Open source can foster collaboration and shared progress in Europe
Regulation helps shape ethical tech innovation
Trust is essential for European collaboration
Technology can be connective — or divisive
Leaders need technological literacy for future decisions
Europe must act now on digital sovereignty
🎙️ Sound Bites
"Trust for me is the foundation of quality."
"We need to be more courageous."
⏱️ Chapters
00:00 The Foundation of Trust
05:11 Understanding Sovereignty
09:13 Cultural Sovereignty and Innovation
11:06 The Role of Regulation in Sovereignty
13:09 Building Trust in Europe
15:10 Technology as a Connector or Divider
17:37 The Power of Open Source
19:27 Creating Communities in Open Source
22:42 Leadership for the Future
26:27 The Value of Technology in Organizations
29:39 In-Between Moments and Reflections
💻 Links
in-between trust on Instagram: @inbetween_trust
🔭 Keywords
trust, bravery, sovereignty, innovation, open source, regulation, technology, collaboration, leadership, value creation
🎙️ With Josef Lentsch – Political Entrepreneur, CEO of the Political Tech Summit, author & Managing Partner at the Innovation in Politics Institute
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with political entrepreneur Josef Lentsch about the transformation of democracy through innovation, leadership, and trust. From co-founding the NEOS party in Austria to building the Political Tech Summit, Josef shares his perspective on political entrepreneurship as a practice of systemic change. Together, they explore the erosion and rebuilding of trust in political systems, the role of AI in democratic communication, and how technology can support—not replace—citizen engagement. The conversation highlights how leadership, transparency, and adaptability are key to restoring trust across systems and borders.
🔑 Takeaways
Political entrepreneurship is about building systems that scale trust.
Trust reduces friction — in both governance and society.
Authentic leadership is core to meaningful political transformation.
Technology can support, but not substitute, democratic dialogue.
Political startups create new paths for citizen participation.
Rebuilding trust requires both structural reform and cultural change.
Cross-border collaboration is key to political innovation in Europe.
AI must be governed with integrity to support political legitimacy.
🎙️ Sound bites
“Trust makes democracy efficient — and possible.”
“We need to build better models.”
“Political tech is not a silver bullet, but it’s part of the solution.”
⏱️ Chapters
00:00 The Concept of Political Entrepreneurship
04:25 The Importance of Trust in Democracy
10:17 Building Trust Through Political Startups
16:05 Leadership and Trust in Politics
22:32 The Role of AI in Political Communication
25:36 Navigating the Intersection of Tech and Politics
30:38 Hope Amidst Challenges in Democracy
36:40 In-Between Moments and Reflections
💻 Links
in-between trust on Instagram: @inbetween_trust
More about Josef Lentsch: https://www.linkedin.com/in/jlentsch/
More about Innovation in politics: https://innovationinpolitics.eu
More about the Political Tech Summit: https://www.politicaltech.eu
More about Josef Lentsch's book: https://www.amazon.de/Political-Entrepreneurship-Successful-Centrist-Start-ups/dp/3030028607
🔭 Keywords
political entrepreneurship, trust in democracy, political startups, civic engagement, digital democracy, AI in politics, leadership and trust, political innovation, democratic systems, political tech
🎙️ With Julia Löffler – Scientist in Molecular Medicine, Neuroscience & Science Communication at the Charité in Berlin
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Julia Löffler about the biological, neurological, and relational foundations of trust — and why trust is as much a somatic experience as it is a cognitive one. They explore Julia’s journey from molecular medicine into neuroscience and science communication, the role of empathy in scientific work, and how biology and technology move on fundamentally different timelines. The conversation dives into the chemistry of trust, the tension between innovation speed and human adaptation, and the importance of translating science into language people can understand and act on. Julia shares why dynamic relationships, transparency, and adaptive learning are key to building trust in both medicine and technology.
🔑 Takeaways
Trust is the essential currency in science and medicine.
Biology and neurochemistry — from oxytocin to cortisol — shape trust.
Communication is a core skill in translating science into action.
Technology must adapt to the natural pace of biology.
Trust is dynamic and built over time through relationships.
Advancing knowledge does not guarantee immediate understanding.
Digital and physical systems must both account for human trust needs.
Adaptive learning is essential for responding to uncertainty.
🎙️ Sound bites
“Trust is built on dynamic relationships.”
“Biology runs on its own time.”
“Advancing knowledge does not always mean immediate understanding.”
⏱️ Chapters
00:00 Introduction to Biology, Neuroscience & Trust
03:12 Julia’s Journey from Molecular Medicine to Communication
06:24 The Neurochemistry of Trust
09:15 Bridging Gaps Between Science, Patients & the Public
13:02 Technology vs. Biological Timelines
16:38 Trust as an Adaptive, Dynamic Process
20:05 The Role of Empathy in Scientific Work
23:27 Translating Complexity into Accessible Language
27:41 Future of Trust in Science and Technology
30:15 In-Between Moments and Reflections
💻 Links
in-between trust on Instagram: @inbetween_trust 
More about Julia Löffler: https://www.linkedin.com/in/julia-loeffler/
🔭 Keywords
trust in science, neuroscience, molecular medicine, science communication, neurochemistry of trust, oxytocin, cortisol, adaptive learning, empathy in science, bridging disciplines, technology and biology, trust in medicine
🎙️ With Karel J. Golta – Founder & Managing Director, INDEED Innovation
Summary
In this episode of the in-between trust podcast, Eva Simone Lihotzky engages in a deep conversation with Karel J. Golta about the intersection of circular innovation, humane design, and trust. They explore Karel's journey into circular innovation, the importance of empathy in design, and how trust is a crucial element in creating sustainable systems.
The discussion also delves into the shift from linear to circular systems, the need for long-term thinking, and the role of collaboration in fostering trust within communities. Karel emphasizes the importance of designing for continuity and ambiguity, and how these principles can lead to a more regenerative future. The conversation concludes with reflections on the value of trust in design and the courage required to innovate responsibly.
🔑 Takeaways
Karel's journey into design began in childhood with Lego.
Humane innovation prioritizes people, planet, and purpose equally.
Trust is built through the intent behind design.
Design communicates values without explicit words.
Circular systems require transparency and accountability.
Shifting to circular models demands moral responsibility.
Designers should view users as co-creators, not targets.
Design for continuity rather than just beginnings.
Trust comes from consistency, clarity, and care.
Value in a regenerative model is about enabling rather than taking.
🎙️ Sound bites
"Trust is built or broken by intent."
"Design communicates values without words."
"Designers shape the way people see things."
⏱️ Chapters
00:00 Introduction to Circular Innovation and Trust
02:45 The Concept of Humane Innovation
05:50 Trust in Design and Circular Systems
08:51 Shifting from Linear to Circular Systems
12:02 Designing for Ambiguity and Continuity
14:25 Building Trust in Collaborative Systems
17:31 The Regenerative Future and Value Creation
20:27 Collaboration for a Sustainable Future
23:25 The Role of Trust in Community Building
26:42 In-Between Moments and Personal Reflections
💻 Links
in-between trust on Instagram: https://www.instagram.com/inbetween_trust/
More about Karel J. Golta: https://www.linkedin.com/in/karelgolta/
More about INDEED Innovation: https://www.indeed-innovation.com
🔭 Keywords
circular innovation, humane innovation, trust in design, sustainable systems, regenerative future, collaboration, community building, design for continuity, value creation, systems thinking
🎙️ With Paula Cipierre – Responsible AI Expert & Strategist
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Paula Cipierre, a leading responsible AI strategist with a deep background in law, policy, and tech. They discuss what it means to translate legal and ethical principles into organizational practice - and how trust must be built not just through systems, but through culture, clarity, and human connection.
Together, they explore:
Why trust needs control, not just promises
How to create a culture of compliance that doesn’t collapse into checkboxes
The tension between data governance and intelligent systems
What it takes to operationalize values across teams, languages, and sectors
Why interdisciplinary thinking and empathy are foundational leadership skills in the age of AI
With insight from law, humanities, and hands-on tech policy work, Paula brings a rare perspective to ethical AI - one rooted in systems and storytelling.
🔑 Takeaways
Trust in AI depends on both transparency and institutional reliability
Compliance isn’t the goal - culture is
Regulation can enable innovation if done with clarity
Data governance and bias mitigation start long before AI
AI literacy is critical to confident, responsible use
Responsibility requires interdisciplinary skill and local ownership
Leadership means acting with explainability, not just authority
People follow people - trust starts with example
🎙️ Sound Bites
“Trust is good, but control is better.”
“We need a much more integrated approach.”
“Lead by example; people follow people.”
⏱️ Chapters
00:00 – From Humanities to Ethical AI
03:06 – Trust in Technology: The Role of Control
06:03 – Translating Ethics into Practice
09:01 – Compliance vs. Responsibility
11:45 – Cross-Sector Collaboration
14:39 – Regulation as a Tool for Innovation
17:43 – Data Governance in AI
20:42 – AI Literacy and Employee Empowerment
23:29 – Future Skills in AI Governance
26:48 – Sustainability and System Awareness
29:30 – Leading by Example
💻 Links
More about Paula Cipierre: https://www.linkedin.com/in/paula-kift/
in-between trust on LinkedIn: https://www.linkedin.com/company/in-between-trust-podcast/
“Slow trust builds faster futures.”
🎙️ With Stefan Schoepfel – Founder of the Value AI Institute
In this episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Stefan Schoepfel, founder of the Value AI Institute, about how we lead—and trust—in an era shaped by intelligent systems. They explore what it means to embed ethical principles, emotional intelligence, and leadership clarity into AI development and deployment.
Stefan shares why trust must take a much larger space in the conversation, how unlearning linear thinking unlocks innovation, and how responsibility must move beyond compliance toward genuine accountability. From governance to culture, this episode is a call to stay human—and stay in the driver’s seat.
___
🔑 Takeaways
Trust is foundational for user acceptance and systemic success.
Ethical principles must guide both design and deployment.
AI leadership requires emotional intelligence and clear oversight.
Organizations must embed ethics into processes—not just policies.
Responsible tech can support sustainability and the societal good.
Unlearning linear thinking is key to adapting and leading.
Ongoing trust-building requires visibility and cultural buy-in.
AI systems must always include human-in-the-loop safeguards.
Compliance should enable—not hinder—innovation.
Don’t let tech steer blindly—stay in control.
___
🎙️ Sound Bites
“Trust needs to take a much larger space.”
“AI must comply with ethical principles.”
“Stay in the driver's seat with technology.”
“Unlearning is as vital as innovation.”
“Leadership in AI means embracing ambiguity.”
___
⏱️ Chapters
00:00 – Introduction to Trust and AI
02:21 – The Value AI Institute: Mission and Goals
05:28 – The Importance of Trust in AI
09:46 – Ethical Principles and Responsible AI Design
11:20 – Implementing Ethical AI in Organizations
14:06 – The Role of Leadership in AI Systems
16:20 – Building Trust in Teams and Systems
18:07 – Navigating Leadership Challenges with AI
19:24 – The Impact of AI on Ethical Usage
22:50 – AI for Societal Good and Sustainability
25:45 – Unlearning Linear Thinking in AI
28:19 – Embracing Ambiguity in AI Leadership
29:33 – Key Takeaways on Technology and Trust
___
🧩 Keywords
AI, trust, ethical AI, Value AI Institute, leadership, responsible tech, societal good, emotional intelligence, organizational culture, ambiguity, unlearning, governance
“Truth by code is a strong asset”
🎙️ With Anna Spitznagel – CEO of trail.ai
In the first episode of the in-between trust podcast, Eva Simone Lihotzky speaks with Anna Spitznagel—co-founder and CEO of trail.ai—about building trust at the heart of AI governance. Anna shares her journey designing a “co-pilot” for responsible AI systems, and explores how transparency, organizational culture, and technical rigor intersect in shaping trustworthy innovation.
Together, they dive into what it means to build “truth by code,” how compliance can enable—not hinder—progress, and why literacy, leadership, and lived experience are essential in navigating the AI era. Anna also opens questions about the future of trust across ecosystems—from upstream model providers to everyday users.
⸻
🎙️ Sound Bites
“Trust is my personal highest value.”
“Truth by code is a strong asset.”
“Literacy is key to understanding AI.”
“Governance is not a blocker—it’s an enabler of scale.”
⸻
⏱️ Chapters
00:00 – Introduction to AI Governance and Trust
01:38 – Defining Trust in AI
04:27 – Building Trust in Organizations
10:13 – The Role of Leadership in AI
12:51 – Designing for Transparency
18:58 – Navigating Use Cases & Compliance
23:14 – The Future of Trust in AI
30:45 – Unanswered Questions That Remain
⸻
🧩 Keywords
AI governance, trust, transparency, leadership, compliance, data privacy, organizational culture, literacy, AI use cases, critical reasoning, automation, future of AI