Building a Better Geek
Emmanuella Grace & Craig Lawton
26 episodes
6 days ago
Welcome to Building a Better Geek, where we explore the intersection of technology, psychology and well-being. It's for high-functioning introverts who are finding an audience and who like humans at least as much as machines. If you want to go deep on leadership, communication and all the things that go into building you, let's grok on!
Self-Improvement
Education
Episodes (20/26)
Building a Better Geek
TruthAmp: Episode 10 - Don't Go Chasing Waterfalls (Chase AI Bubbles)
Watch here: https://youtu.be/NQHTdb5_Af8

Craig and Emmanuella tackle the burning question: Is AI just another dotcom-style bubble waiting to burst?

The Bubble Debate: Emmanuella has been hearing concerns across industries that AI might be overhyped like the dotcom boom. She wonders if people deep in AI dismiss these concerns because admitting it would hurt them. As an outsider, she wanted an objective analysis.

Why This Time Is Different
The Dotcom Lesson: Jeff Bezos noted that industry movements require experimentation, which costs money. During dotcom, infrastructure (fiber optic cables) survived even when companies failed. Amazon shares dropped from IPO to $6, but one original share is now worth ~$48,000. Bubbles punish speculators but reward those who identify real value.
AI's Key Distinctions:
- Actual usage: Unlike hypothetical dotcom projections, AI infrastructure is used immediately as it's deployed
- Tangible products: OpenAI went from zero to $500 billion in two years with something people actually use daily
- Fast prototyping: At the Indigenous Australian Datathon Conference, participants built working health/food systems in 1.5 days (five years ago, they just made PowerPoint slides)

The Three-Layer Framework
- Infrastructure Layer (Bottom): Data centers and compute being used and paid for as deployed. Competitive pressure will drive efficiency. This is real, not hypothetical.
- Business Layer (Middle): Companies building on infrastructure—lots of experimentation, not all will succeed. This is where the "bubble" risk lives.
- Consumer Layer (Top): People using AI daily for research, scheduling, advice. Already embedded in life with genuine utility.

What Determines Winners: The pets.com cautionary tale: they had a great name but terrible user experience. PetSmart crushed them with a better website. Winners marry user experience with new tech. Losers trade on hype. Companies that survived dotcom (Amazon, early Yahoo, later Facebook) had genuine utility that compelled continued use.

The Democratisation Opportunity: You no longer need coding skills—just understand systems, business, and customers. Barriers to entry have collapsed. Emmanuella has been buying shares for her daughters since birth; what might fund one startup could now fund 10 experiments.

The Reality Check: No substance = failure. Hypothetical AI companies without humans putting in grunt work won't succeed. Value requires the end-to-end human experience—people identifying problems and experiencing solutions. Don't judge success at one point in time. See what survives market corrections.

Takeaway: This isn't a bubble—it's a punctuated equilibrium. Infrastructure is solid, consumer utility is real, but not all businesses building on top will succeed. If you identify genuine long-term value and ride out volatility, history suggests patience pays off.
6 days ago
13 minutes

Building a Better Geek
TruthAmp: Episode 9 - AI Just Called to Say I Love You (No More Apps)
Watch here: https://youtu.be/Je3ynVTQrXs

Craig builds a meditation app in 15 minutes to demonstrate how AI is fundamentally changing our relationship with smartphones—and potentially making traditional apps obsolete.

The Meditation App Experiment
The Problem: Craig was frustrated with his meditation app constantly asking him to log in, share data, and navigate unnecessary features. He just wanted something simple: a timer that chimes at the start, middle, and end.
The Solution: Using Claude AI, he built a custom meditation app in approximately 15 minutes (plus deployment to his phone). The entire process:
- Created a simple meditation timer with specific requirements
- Made it "woody zen" in appearance through natural language prompting
- Deployed as a Progressive Web App (PWA) to his Google Pixel 9
- Shared all code publicly on GitHub—written entirely by AI, including instructions
The Result: A functional, personalized meditation app that does exactly what he needs, nothing more. (A rough sketch of this kind of timer appears at the end of these notes.)

The Death of Apps Thesis
Craig argues we're witnessing the beginning of the end for traditional smartphone apps. His reasoning:
- Common Problems Get Solved: Throughout tech history, universal problems eventually become utilities (like cloud computing replacing everyone building their own data centers). Apps are next.
- Ephemeral Code: What took weeks to build now takes hours. Soon, AI will generate apps on the fly to solve immediate problems, then either disappear after use or join a library for future retrieval when someone needs the same solution.
- The Future Interface: Instead of hunting through app stores, your phone becomes a true personal assistant. You state a problem ("I want to tune my guitar"), and AI generates the solution instantly—no installation, no data sharing, no login screens.

The Deloitte Academic Paper Incident
Emmanuella raises the recent controversy where Deloitte was hired to analyze problematic code but instead published an academic paper. Her analysis: the wrong command was given. Key insight: Deloitte hired a team and used AI to do something it was told to do, but the initial instruction was incorrect. The tool served the wrong purpose because the human question was wrong.

Deep Philosophical Questions
On Prompting as a Skill: Emmanuella observes that prompting AI effectively requires specificity and brevity, iteration and refinement, understanding what outcome you actually want, and consideration of whether the purpose is appropriate. She predicts schools will need entire subjects dedicated to prompting.
On Logic vs. Intelligence: A fascinating historical example: when computers were introduced to Black and Hispanic communities in the US, IQ scores increased—not because students became "smarter," but because their thinking adapted to computational logic (which IQ tests measure). Emmanuella's concern: we're optimizing for computational logic at the expense of emotional, human, and spiritual intelligence. This imbalance contributes to rising anxiety and depression.
On Productivity vs. Equilibrium: Self-identifying as a "human puddle," Emmanuella questions whether productivity gains are worth the cost: "Is our time connecting and being undermined at the expense of productivity?"
On Creativity: Craig was asked by a senior leader: "Is creativity just remixing old ideas, or is it bigger?" His answer: creativity pulls inspiration from many sources, sometimes mysterious ones. It's bigger than remix. The human element matters—AI is built on systems like IQ tests that channel curiosity into predictable paths.
On Tech Waste: Emmanuella wonders if we'll eventually develop consciousness about AI waste the way we have about plastic—recognizing that technology costs the earth something and asking whether uses are frivolous.

Cultural Differences
Craig shares intriguing research: Anglo-Western countries are the most pessimistic about AI in multiple studies. East Asian and non-English speaking countries tend to be far more bullish on the technology.
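As a rough illustration of the meditation-timer idea described above, here is a minimal browser sketch that chimes at the start, midpoint, and end of a session using the Web Audio API. It is an assumption-laden sketch, not the code Craig published: the "#start" button, the 528 Hz tone, and the 10-minute duration are all illustrative choices, and a real PWA would add a manifest, a service worker, and a visible countdown.

```typescript
// Minimal meditation timer sketch: chime at start, midpoint, and end.
// Illustrative only -- not the app from the episode or its GitHub repo.

function playChime(ctx: AudioContext, atSeconds: number): void {
  // Schedule a short sine tone on the audio clock, fading out over 1.5 s.
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = 528; // gentle, bell-like pitch (arbitrary choice)
  gain.gain.setValueAtTime(0.3, ctx.currentTime + atSeconds);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + atSeconds + 1.5);
  osc.connect(gain).connect(ctx.destination);
  osc.start(ctx.currentTime + atSeconds);
  osc.stop(ctx.currentTime + atSeconds + 1.5);
}

function startMeditation(minutes: number): void {
  // AudioContext must be created from a user gesture (e.g. a button click).
  const ctx = new AudioContext();
  const total = minutes * 60;
  playChime(ctx, 0);         // start of the sit
  playChime(ctx, total / 2); // midpoint
  playChime(ctx, total);     // end
}

// Example wiring: a hypothetical "#start" button kicks off a 10-minute sit.
document.querySelector("#start")?.addEventListener("click", () => startMeditation(10));
```

Wrapping something like this in a web app manifest and a service worker is what would make it installable as the PWA mentioned in the episode.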
1 week ago
16 minutes

Building a Better Geek
Shocking Truth: Tech is Changing Our Perception, Reality and Behaviour
Em and Craig dive deep into technology's profound impact on human behaviour, exploring everything from AI-generated images with three-eyed cats to how the printing press revolutionised society. This wide-ranging conversation examines the uncomfortable truth: we're living through a technological shift that's fundamentally changing how humans think, connect, and experience reality.

The Historical Context: The hosts trace technology's transformative power through history, from Gutenberg's printing press enabling mass literacy and challenging authority, to the Industrial Revolution moving women from homes into factories, fundamentally reshaping society. Em notes how each technological leap creates both expansion and contraction—initial chaos followed by adaptation and new innovation.

Eight Core Ways Tech Affects Human Behaviour: Em outlines how technology is reshaping humanity across multiple dimensions:
- Social Connection: While 67.9% of Earth's population is now online, connections are less deep. We're cognitively designed to connect with maybe 200 people, not thousands on social platforms.
- Shortened Attention Spans: Constant quick fixes prevent us from building resilience, accessing flow states, or learning deeply. We're avoiding discomfort rather than developing the capacity to handle it.
- Cognitive Changes: Multitasking (really rapid task-switching) exhausts us more than the work itself. The constant shifting between tasks depletes cognitive resources faster than focused deep work.
- Memory and Navigation: Craig shares how Google Maps has replaced spatial awareness—remembering London taxi drivers whose brains were literally wired differently because of their knowledge of the streets. Em wonders if rising ADHD diagnoses might actually be brains adapting to technology rather than a disorder.

The Reality Distortion Problem: The conversation tackles a disturbing trend: our subconscious can't distinguish between AI-generated content and reality. From Photoshop's impact on body image to today's sophisticated deepfakes, we're losing the ability to trust what we see. Em describes asking ChatGPT to create a birthday invitation, only to discover the generated cats had three eyes and extra heads—a glimpse into early-stage AI before it got frighteningly good.

The Attention Economy's Dark Side: Em reveals a troubling pattern: AI trained on human engagement learns that negative content gets more clicks, creating a feedback loop that may be skewing AI toward negativity. She questions whether technology has genuinely had a negative effect on human behaviour, or whether negative content simply generates more engagement and thus more training data.

Process Versus Outcome: Craig admits to using Claude AI to rewrite a letter to a newspaper—and it did a better job. This sparks discussion about what we lose when AI does the creative heavy lifting. Em's pottery analogy drives home a crucial point: not everything needs commercial value. The creative process itself—the frustrating research, the failures, the practice—transforms information into knowledge and knowledge into wisdom.

Job Market Disruption: While people panic about AI replacing jobs, the hosts offer unexpected hope: humans will continue innovating ways to work alongside technology, just as they always have. Em emphasises that our fundamental drive to connect, create, and innovate will persist despite technological disruption.

The Surprising Hero: Craig's hero of the week is Sarah Wynn-Williams, former Facebook executive and author of "Careless People." She broke a seven-year silence to warn that society is sleepwalking into the same mistakes with AI that occurred with social media. Despite Facebook's gag order preventing her from promoting the book, her insights about emotional targeting of adolescent girls and the concentration of AI power in social media companies (like Meta's LLaMA model) offer crucial warnings.

The Paradox of Progress: Perhaps the most striking theme: despite technol
2 weeks ago
40 minutes

Building a Better Geek
TruthAmp: Episode 8 - AI will Survive
Watch on https://youtu.be/4UY2h6pmUK8

Emmanuella returns from South by Southwest Sydney with insights on humanity's role in the AI revolution, while Craig shares productivity hacks and reflections on creative process versus outcome.

Key Themes from SXSW Sydney
The Human Bookend Principle: Emmanuella's biggest takeaway: the "end-to-end experience" must always have humans at both ends. Tech exists to serve human needs, and without humans to serve, it has no purpose. This realisation eased her fears about AI replacing jobs—technology requires humans to identify problems and experience solutions.
Balance Over Optimisation: An AR/VR Design Lead at Google candidly shared how diving into AI initially made her incredibly productive, but her mind couldn't keep up with the output. The lesson: just because you can be hyper-productive doesn't mean you should. Individual accountability for tech usage matters, even when tools yield profit.
Notable SXSW Panels:
- Wearable tech reducing risk for pregnant women, allowing more home monitoring and faster crisis response
- Balancing friendly culture with killer business instinct and innovation freedom
- Dept's talk: end-to-end digital experiences for brands like Google, Audi, and Patagonia

The Process vs. Outcome Debate
Craig's Music Experiment: Craig explored AI music composition as a "frustrated musician" and noticed he's translating between old methods and new AI tools—creating cognitive load. He predicts future creators who start with AI-native tools won't have this translation layer, making it more natural.
Emmanuella's Pottery Analogy: Looking at her handmade pottery (some functional, some broken, all meaningful), Emmanuella argues that not everything needs commercial value. The creative process itself has intrinsic worth—making things teaches us, edifies us, fulfils our humanity.
The Knowledge vs. Information Gap: Googling "how to grow lettuce without snails" differs vastly from three years of planting, failing, seed-saving, and discovering what works in your soil. AI can provide information, but the frustrating process of research and practice transforms information into knowledge, and knowledge into wisdom.

Key Quotes
- Dan Rosen (Warner Music Australasia President): "It takes a lot of effort to make something look effortless."
- Emmanuella's counterpoint: Look at a ballerina's broken feet—they train to destruction, yet appear weightless on stage. A prima ballerina friend broke her back performing, ending her career. "You cannot replicate that without effort and without human input."
- Craig's reflection: "If you haven't taken the time to bother writing something, why should I take the time to bother reading it?"

Tech Updates
- New Recording Tools: The podcast now uses SquadCast and Descript, which offers AI-native editing—search for a word, delete it, and it's removed from the video automatically. More human-centered than traditional timeline editing.
- Perplexity Hack: Craig asked Perplexity which aisle at his specific Bunnings store had car covers. What happened?

What's Next: Craig is heading to the CEDA AI Leadership Event, hosting a panel on the AI arms race with CEOs from the Tech Council, AGL, Telstra, and the Australian Institute of Machine Learning.
2 weeks ago
16 minutes

Building a Better Geek
TruthAmp: Episode 7 - Build Me Up, AI-tercup: Website in an Hour
Watch Video here: https://www.youtube.com/watch?v=HeP6SEMkhvQ

Craig demonstrates the speed and power of AI-assisted web development by building Emmanuella a professional website in less than one hour—right before her flight to South by Southwest Sydney.

What Happened
The Challenge: Craig forgot about the topic until an hour before recording, so he used AI tools to quickly build a website, then demoed it for Emmanuella as she rushed through the airport.
The Tools: Craig used Claude (both the chat interface and Claude Code) to research Emmanuella's professional background, generate prompts, and build a functioning website styled after Brené Brown's site—all running locally on his laptop.
The Process:
- Claude researched Emmanuella's professional history, services, images, and videos online
- Generated seven specific prompts for website development
- Built a sophisticated site with animations, accurate credentials (graduate diploma in Psychology, Master's degrees), and integrated multimedia
- All accomplished in less than one hour

Key Insights
- Prompting is a Skill: Emmanuella observes that effective AI prompting requires specificity and iteration. It's not mind-reading—you need to clearly articulate style, colors, and desired outcomes, sometimes through multiple attempts.
- Collaboration Over Replacement: Craig emphasizes you still need someone who understands the underlying technology to ask the right questions and ensure security. AI accelerates collaboration between developers and clients, enabling real-time changes during consultative sessions.
- Cost Savings: By dramatically reducing development time, AI tools could significantly lower hourly web development costs for clients.
- Quality vs. Sludge: While Emmanuella worries AI might "fill the internet with sludge," Craig counters that with proper expertise, AI can actually create more sophisticated and secure websites than traditional methods.
- Accessibility: People without huge budgets can now build sophisticated websites with integrated products and features—democratizing professional web presence.

The Reveal: The finished website impressively included:
- Accurate professional credentials and education
- Brené Brown-inspired styling and animations
- Integrated podcast episodes (Building a Better Geek, Truth Amp)
- Professional history and services
- All functional elements ready for deployment

Takeaway: AI tools like Claude Code are transforming web development from time-intensive coding to rapid, collaborative creation—but expertise still matters. The technology amplifies human knowledge rather than replacing it, enabling faster iteration and more accessible professional web presence.

Note: This episode was recorded as Emmanuella literally ran through the airport to catch her flight to South by Southwest Sydney, where she's moderating panels on med tech and mentoring tech leadership.
3 weeks ago
10 minutes

Building a Better Geek
TruthAmp: Episode 6 - Copyright and AI
Watch it here: https://www.youtube.com/watch?v=-Y7M4RUC_HU&t=1s

Craig and Emmanuella discuss the collision between AI technology and copyright protection for artists.

Main Points
- Artists' Perspective: Emmanuella explains that copyright payments—not just performances—are crucial income for creators. Successful artists earn substantially from songwriting royalties calculated by venue capacity.
- The Core Problem: Tech companies are training AI models on copyrighted material without permission or payment. Once incorporated, this data cannot be removed without rebuilding the entire model. Artists must police unauthorized use themselves rather than companies providing proactive protection.
- Legal Landscape: The US has a "fair use" doctrine while Australia has stricter rules. Australia's Productivity Commissioner suggests relaxing copyright for AI innovation, but artists strongly oppose this. A middle-ground proposal involves mandatory compensation, though artists would lose control over usage.
- Media Framing: Australian media uniquely describes AI's use of copyrighted content as "theft"—language that shapes public perception differently than in other countries.
- Unexpected Hope: Emmanuella suggests that as AI contributes to content mediocrity, wealthy tech entrepreneurs might eventually invest in preserving high-quality artistry—reviving historical patronage models.
- Key Legal Fact: AI-generated content currently isn't copyrightable. Only human-contributed portions of hybrid works can be protected.

Takeaway: The copyright-AI debate has no clear resolution. While governments lag behind, market forces—and potentially tech philanthropists—may ultimately determine how creators are compensated in the AI era.
1 month ago
20 minutes

Building a Better Geek
TruthAmp: Episode 5 - Would AI Lie to You
Available in Video: https://www.youtube.com/watch?v=iOklnPvvDUo

TruthAmp Episode 5: How Do I Know If AI Is Lying to Me?

In this episode of Truth Amp, communication expert Emmanuella Grace and tech expert Craig Lawton tackle one of the most pressing questions about AI: How can you tell when AI gives you false information?

Key Takeaways
- AI Doesn't "Lie" - It Hallucinates: Craig explains that AI doesn't intentionally deceive. Instead, AI models are probabilistic systems that sometimes produce "hallucinations" - confident-sounding but inaccurate responses based on statistical patterns in their training data.
- The Swiss Cheese Problem: AI knowledge has gaps like holes in Swiss cheese. When your question hits one of these knowledge gaps, the AI fills in blanks with plausible-sounding but potentially false information, especially in specialized domains like psychology, medicine, or law.
- Experts Aren't Immune: Even domain experts can be caught off guard. Emmanuella shares how AI nearly fooled her with an incomplete psychology summary that seemed authoritative but was missing crucial information.
- The Generational Divide: Many older users treat AI responses as infallible truth, lacking awareness of AI's limitations. This creates a responsibility gap - who should educate users about AI's fallibility?

Practical Tips to Get Better AI Responses
- Turn on web search in your AI settings so it can access current information
- Specify timeframes in your prompts (e.g., "information from 2025")
- Learn better prompting techniques to avoid reinforcing your existing biases
- Understand AI training bias - models reflect historical data, which may contain outdated information

The Bottom Line: While the tech industry figures out responsibility and regulation, users need to take charge of their AI education. Media and educational institutions have a role to play in teaching AI literacy, especially around understanding biases, limitations, and effective prompting strategies.

Truth Amp explores complex topics through the lens of human communication and technology expertise. New episodes weekly.
1 month ago
16 minutes

Building a Better Geek
TruthAmp: Episode 4 - AI-Generated Music and Soul
Watch us on Youtube: https://www.youtube.com/@TruthAmp-e1f

**🎵 Listen to our hilariously bad AI jingle first:** https://www.udio.com/songs/2NQ68f3nBegurCT6TyyiQG?utm_source=clipboard&utm_medium=text&utm_campaign=social_sharing

In this episode of TruthAmp, Craig and Emmanuella dive deep into AI's impact on songwriting and the music industry. What starts with Craig's cringe-worthy AI-generated jingle (seriously, Emmanuella's ears were bleeding) evolves into a fascinating discussion about authenticity, artistry, and the future of music.

## Key Discussion Points:

**The Velvet Sundown Controversy**
- AI artist that fooled over a million Spotify listeners
- The importance of honesty between artists and fans
- Why deception breaks the fundamental trust that drives artistic success

**Industry Legends Speak Out**
- Nick Cave's brutal take: AI songwriting is "artistic demoralization" that erodes "the world's soul"
- Bernard Fanning argues AI removes humanity from art, defeating its very purpose
- The tension between artistic idealism and paying the bills

**The Platform Problem**
- Are streaming services flooding platforms with AI music to avoid paying royalties?
- What happens when real artists start withdrawing their content?
- Will we just get used to artificial music as background noise?

**Technical Evolution**
- How audio compression has already changed how we consume music
- The irony of AI deliberately adding "authentic" imperfections
- From auto-tune perfection back to intentional flaws

## The Bottom Line:

While AI tools might help with arrangements and production, both hosts agree that music that truly moves people requires human vulnerability, effort, and soul. As Craig puts it: music needs humanity and effort, not just a 20-word prompt.

**Their advice? Get out there and listen to live music. Support real artists. Go pat a cat.**

What do you think? Can AI ever replicate the human experience that makes great songwriting? Let us know in the comments!
1 month ago
17 minutes

Building a Better Geek
TruthAmp: Episode 3 - Nano Nana
Watch us on Youtube: https://www.youtube.com/@TruthAmp-e1f This episode of TruthAmp features hosts Emmanuella and Craig discussing AI image generation, with Craig demonstrating through live examples. He shows AI-generated headshots of himself, first in a business suit that he found unsatisfactory, then requesting a "Silicon Valley chic" look that produced a casual sweatshirt and jeans appearance. Craig also creates a whimsical underwater version of their podcast promotional photo, placing both hosts in what appears to be the Great Barrier Reef while still wearing headphones. Emmanuella was initially intrigued by seeing AI-generated videos on social media that could create realistic footage from just one second of source material. The hosts explore how marketing and branding influence AI adoption, noting that catchy names like "nano banana" make advanced technology seem more accessible and less threatening. A key theme emerges around the importance of distinguishing between reality and AI-generated content. Emmanuella draws parallels to how society adapted to understanding Photoshopped magazine images, suggesting we need similar literacy for AI content. She emphasizes the responsibility to use careful language when discussing AI capabilities versus reality. Craig mentions Amazon's competing Nova Canvas tool, which includes IP protection and digital watermarking. The episode concludes with both hosts advocating for critical thinking about AI-generated content, with Craig describing AI as a "lossy encyclopedia" that loses fidelity over time.
1 month ago
18 minutes

Building a Better Geek
TruthAmp: Episode 2 - Deepfakes in Job Interviews
Watch us on Youtube: https://www.youtube.com/@TruthAmp-e1f In this episode, Craig Lawton and Emmanuella Grace tackle the escalating crisis of AI fraud in job interviews, where deepfake candidates and sophisticated bots are creating what Gartner predicts will be 1 in 4 fraudulent job applications by 2028. The hosts explore shocking real-world cases, including "Ivan X" who applied twice to the same security firm using deepfakes, and a government official admitting to hiring someone who used AI to answer interview questions. They discuss how recruitment has become an "arms race" between fraud and detection, with both candidates and recruiters increasingly relying on AI tools. Emmanuella argues this may force a return to in-person interviews, where human intuition and emotional responses can detect the "coldness" of AI interactions. She emphasizes that interviews are fundamentally about cultural fit and human connection, not just qualifications. The conversation reveals how AI-generated communications often trigger subconscious disengagement in recipients. Paradoxically, both hosts advocate for transparent AI use in job searching—using it to tailor CVs and practice interviews while maintaining authenticity. They explore innovative HR technology that creates comprehensive "digital twins" of candidates, incorporating both technical skills and wellbeing factors. The episode concludes with optimism about human creativity and connection driving the future, suggesting that despite technological disruption, people's fundamental need to connect with other humans remains paramount.
1 month ago
18 minutes

Building a Better Geek
TruthAmp: Episode 1 - AI Attachment and Human Connection
Watch us on Youtube: https://www.youtube.com/@TruthAmp-e1f In the inaugural episode of Truth Amp, hosts Emmanuella Grace and Craig Lawton explore the human impact of AI developments, focusing on OpenAI's GPT-5 launch and concerning attachment behaviors. The discussion centers on why people form emotional bonds with AI systems, drawing parallels to the 1960s ELIZA program where users developed deep connections with a simple chatbot. Emmanuella explains this stems from human needs for connection and validation—our brains struggle to distinguish between human and machine responses when we feel "seen and heard." Key concerns include the risks of anthropomorphizing AI and losing essential human connection skills. The hosts discuss how AI's confident, absolute responses mask its fallibility, noting that it doesn't express uncertainty like humans do. They emphasize the importance of maintaining critical thinking skills as AI becomes more prevalent. Practical insights include treating AI like a "first-year assistant," preserving cognitive function by doing initial thinking before using AI assistance, and setting healthy boundaries around AI use. The episode concludes with warnings about the "soullessness" of AI-generated content and the subconscious human tendency to disengage from machine-generated communications, highlighting the irreplaceable value of authentic human spontaneity and connection.
2 months ago
20 minutes

Building a Better Geek
Know Your Worth: The Ultimate Guide to Getting Paid What You Deserve
In this episode, hosts Em and Craig explore the concept of "knowing your value" in a rapidly changing workplace where traditional values are shifting due to technology, generational differences, and political influences.

Key Discussion Points:
- Multiple Hats Phenomenon: Em introduces the concept of "wearing multiple hats" - where people, particularly women, often carry multiple roles simultaneously (manager, counselor, organizer, etc.) without recognition or appropriate compensation. She emphasizes the importance of articulating these various responsibilities to make them visible.
- Gender Differences in Self-Assessment: Em cites research showing the "male hubris, female humility effect," where men systematically overestimate their abilities while women underestimate theirs, despite equal measured intelligence. This impacts salary negotiations and career advancement.
- Value vs. Self-Worth: Craig emphasizes that understanding your value must start with recognizing your inherent self-worth as a human being, rather than beginning with monetary considerations. This foundational self-respect then informs how you communicate your professional value.
- Technology's Impact on Value: The hosts discuss how technological shifts (cloud computing, AI) can make established skills obsolete overnight, forcing workers to constantly reassess and adapt their value proposition.
- Negotiation Strategies: When pay raises aren't possible, they suggest alternative value-adds:
  - Upskilling opportunities
  - Professional development funding
  - Networking event attendance
  - Mentorship programs
  - Lateral moves within the organization
- Communication Tactics: Craig shares insights from sales about listening first to understand problems before proposing solutions. They emphasize the importance of framing your value in terms of solving the organization's problems, not just listing your achievements.
- Five Love Languages in the Workplace: Em adapts Gary Chapman's concept to workplace appreciation: acts of service, words of affirmation, quality time, appropriate physical acknowledgment (handshakes), and gifts. Understanding how people prefer to receive recognition can transform workplace relationships.
- Market Research Importance: Both hosts stress the critical importance of researching your market value globally, not just locally, and understanding the broader context of your industry and role.
- Intent vs. Impact: They reference the "Difficult Conversations" framework, noting how understanding the difference between what people intend and the impact they have can resolve many workplace value conflicts.

Practical Takeaways:
- Start negotiations by understanding the other party's problems
- Write down your goals and their needs to find creative solutions
- Consider taking breaks to gain perspective on your situation
- Build internal and external networks for realistic market assessment
- Practice difficult conversations before high-stakes moments
- Be aware of vocal patterns (upward inflection) that may undermine perceived competence

Resource Recommendations:
- "Let Them Theory" by Mel Robbins (specifically the chapter on salary negotiation for women with male bosses)
- Vanessa Van Edwards' research on voice and body language
- Nick Cave's "Red Hand Files" email newsletter
- Market research through LinkedIn Jobs and industry networking
4 months ago
45 minutes 26 seconds

Building a Better Geek
Truth Finding: How to know what’s true in an age of hyperrealistic tech
In this thought-provoking episode, hosts Em and Craig tackle the complex concept of "truth finding" in the modern digital age, exploring how technology and AI have transformed our ability to discern fact from fiction.

Key Discussion Points:
- Subjective vs. Objective Truth: The hosts explore the tension between objective facts and subjective perceptions of truth. Em notes that what was once considered "fact" (like the Earth being flat) can later be disproven, highlighting how truth evolves with knowledge.
- Empiricism vs. Theory: Craig identifies as an empiricist, focusing on observable outcomes rather than theoretical explanations, while Em prefers understanding the underlying mechanisms. They discuss how both approaches have value in truth-seeking.
- Technology's Impact on Truth: The hosts examine how AI, algorithms, and social media have created filter bubbles that shape and sometimes distort our perception of reality. Em references the Amber Heard case as an example of how online mob mentality and bots can manipulate public perception of truth.
- Trust Markers: Craig explains his process for finding trustworthy voices online, noting he looks for people who demonstrate curiosity, flexibility, accountability, and willingness to change their mind when facts change. They discuss how putting one's name to information adds accountability.
- Media Transformations: The discussion covers how traditional media with editorial standards is being replaced by faster, less rigorous social platforms, accelerating both information flow and misinformation. They note how careers and reputations can be destroyed almost instantly before the truth can be established.
- Human Connection: Em emphasizes that despite technological advances, the most valuable tool for truth-finding remains "robust, candid conversations between people that you trust," suggesting that human connection provides a level of critical inquiry that machines cannot.
- Biases and System Thinking: The hosts reference Daniel Kahneman's work on "fast" (intuitive) versus "slow" (analytical) thinking systems, discussing how being aware of our cognitive biases helps us better evaluate truth claims.
- Psychological Safety: Both hosts stress the importance of creating environments where people can safely question assumptions and explore ideas without fear of punishment, noting that punitive approaches to open inquiry can silence important truths.

Resource Recommendations:
- "The Righteous Mind" by Jonathan Haidt
- "Thinking, Fast and Slow" by Daniel Kahneman
- The therapeutic concept of questioning "Is this a fact? Is this true?" to interrupt emotional pattern responses
- Socratic questioning as a method for deeper inquiry
6 months ago
43 minutes 44 seconds

Building a Better Geek
Lost Network Connections: Intergenerational Communication And The Communication Chasm
OK Boomer, OK Zoomer: hosts Em and Craig look at solving the Workplace Generation Puzzle, examining how different generational experiences shape values, communication styles, and workplace expectations.

Key Discussion Points:
- Four Generations in the Workforce: Em explains that for the first time, we have four generations simultaneously in the workplace (Boomers, Gen X, Millennials, and Gen Z), each with different value systems and communication preferences, causing previously successful workplace programs to fail.
- Generational Overview: The hosts provide historical context for the generations:
  - Silent Generation (1920s-1945): Conformist, compliant, strong work ethic
  - Baby Boomers (1946-1964): Adapted to the digital revolution, valued the gift of the gab
  - Generation X (1965-1980): Independent, pragmatic, bridged the analog-digital divide
  - Millennials/Gen Y (1981-1996): Digital natives, collaborative
  - Gen Z/iGen (1997-2010): Visual learners, social media natives
  - Alpha Generation (2010-present): Future-focused, already emerging through social media
- Technological Evolution: Craig shares how Gen X witnessed tremendous technological changes, from analog television to digital media, and how this shapes communication preferences. Em notes how modern recording technology that once required expensive equipment is now available on smartphones.
- Communication Medium Preferences: The hosts discuss how different generations prefer different communication methods, with Gen Z often anxious about phone calls while older generations value them. Craig suggests younger people might stand out positively by calling rather than texting.
- Psychological Impacts: Em highlights the "spotlight effect," where teenagers feel all eyes are on them, and explains how social media has amplified this for younger generations by making the surveillance real and constant, contributing to mental health challenges.
- Generational Tensions: The hosts acknowledge the resentment between generations, with younger people frustrated about housing affordability, environmental issues, and economic challenges, while older generations criticize work ethic and respect for authority.
- Finding Common Ground: Despite different expressions, all generations share fundamental human desires for validation, appreciation, respect, and belonging. Em emphasises that finding this common humanity is essential for workplace harmony.
- Crisis as Unifier: Craig references "The Fourth Turning" by Neil Howe, noting how crises like World War II or the early COVID-19 pandemic forced people to come together across generational divides.

Resource Recommendations:
- "Gentelligence: The Revolutionary Approach to Leading an Intergenerational Workforce" by Megan Gerhardt, Josephine Harm, and Jeanne Fogle
- "Conversations Between Generations" TED Talk by Vona Turla
- "Remarkable People" podcast by Guy Kawasaki, specifically the interview with David Yeager on "The Science of Motivating Young People"
- "Multi-Generational Workplace: The Insights You Need" from Harvard Business Review
7 months ago
46 minutes 43 seconds

Building a Better Geek
Overcoming Skill Issues: Are You Being Nice Or Kind?
In this episode, hosts Em and Craig explore the important distinction between being "nice" versus being "kind" in workplace and personal interactions, examining how these approaches impact relationships and communication effectiveness.

Key Discussion Points:
- Defining Nice vs. Kind: Em describes "nice" as bland, safer, and more palatable but potentially insincere, while "kind" involves honesty and authenticity that may sometimes be uncomfortable but ultimately serves others better.
- Toxic Positivity: The hosts discuss how workplace cultures that prioritize "nice" communication can evolve into toxic positivity, where difficult but necessary conversations get shut down because they aren't "nice," even when honesty would be the kindest approach.
- David Yeager's Matrix: Em shares insights from Yeager's book "10 to 25: The Science of Motivating Young People," which presents a matrix of leadership styles based on levels of support and standards:
  - High standards + High support = Mentorship (ideal)
  - High standards + Low support = Enforcer (potential bullying)
  - Low standards + High support = Protector (undermines growth)
  - Low standards + Low support = Apathetic (disengaged)
- Radical Candor/Honesty: Craig introduces the concept of "radical candor" as a communication approach that values honest feedback delivered with care. Em notes she practices this herself, establishing early in relationships that she'll be direct, which serves as a filter for compatibility.
- Feedback Role-Play: The hosts demonstrate effective feedback techniques through a role-play scenario where Em (as manager) addresses Craig's work attendance issues while maintaining psychological safety, showing curiosity rather than judgment, and focusing on objective observations.
- Building Psychological Safety: The conversation emphasizes how kindness creates psychological safety for difficult conversations, while "niceness" can mask festering problems that eventually surface in more damaging ways.
- Practice Makes Perfect: Em stresses the importance of practicing difficult conversations before high-stakes moments, suggesting people write down and rehearse boundary-setting phrases to build confidence.

Key Takeaways:
- True kindness involves holding high standards while providing high support
- Psychological safety is essential for honest feedback
- Practice difficult conversations in low-stakes environments
- Leaders should model accountability by acknowledging their own mistakes
- The "nicest" approach isn't always the kindest one
7 months ago
38 minutes 32 seconds

Building a Better Geek
Circuit breaker: powerful ideas for neutralising bullies in the work place
In this episode, hosts Em and Craig tackle the serious topic of workplace bullying, exploring its definition, causes, and strategies for addressing it in modern work environments.

Key Discussion Points:
- Generational Workplace Dynamics: Em explains how having four generations in the workplace (Boomers, Gen X, Millennials, and Gen Z) creates communication challenges, with each generation having different expectations about hierarchy, collaboration, and workplace behavior.
- Defining Bullying: Craig shares the technical definition of bullying as "repeated and unreasonable behavior that poses a risk to health and safety," while Em emphasizes that bullying fundamentally involves power imbalances being used to harm others.
- Attachment Theory and Bullying: Em explains how early childhood attachment styles (secure, anxious, avoidant, and disorganized) influence how people respond to bullying. Those with secure attachment are more likely to disengage from bullying situations, while those with other attachment styles may fawn, fight back, or respond inconsistently.
- Distinguishing Bullying from Miscommunication: The hosts discuss the challenge of differentiating between intentional bullying and communication issues that may stem from neurodiversity or personality differences, noting that sometimes personalities simply don't mesh well.
- Psychological Aspects: They explore how bullies often use subtle tactics that confuse victims, such as the "smiling assassin" approach where aggressive behavior is masked with a friendly demeanor, making targets question their own perceptions.
- HR's Role: The hosts highlight the importance of human resources departments in providing objective third-party perspectives to mediate conflicts and establish clear procedures for reporting and addressing bullying.
- Practical Strategies: For organizations, establishing clear anti-bullying policies and procedures is essential. For individuals, developing strong personal boundaries and recognizing unhealthy dynamics are crucial self-protection skills.
- Future Considerations: The conversation touches on cyberbullying and the challenges of addressing anonymous digital harassment, with Em sharing her personal experience of being targeted online.

Resource Recommendations:
- "Emotional Blackmail" by Dr. Susan Forward, for understanding unhealthy relationship dynamics
- "Games People Play", for recognizing interpersonal manipulation patterns
- David Yeager's "10 to 25: The Science of Motivating Young People", which explores how high standards and high support affect interactions between managers and younger staff
7 months ago
40 minutes 4 seconds

Building a Better Geek
What makes you special? How AI is revolutionising inter-human communication.
In this episode, hosts Em and Craig discuss the intersection of AI and human communication, exploring how AI is transforming the workplace while emphasising the enduring value of human connection.

Key Discussion Points:
- AI as a Tool, Not a Replacement: Craig frames AI not as true intelligence but as "engineered collaboration at scale" that helps distill knowledge from across the internet. He suggests AI works best as a "first draft fairy" that helps overcome creative blocks.
- Practical Applications: Craig shares examples of using AI tools like Claude to prepare interview questions and simplify complex technical concepts. Em discusses how AI could help prepare panel discussion questions.
- Bias and Transparency Concerns: The hosts debate whether AI models can be biased based on their training data. Craig explains that different countries are developing their own AI models to reflect their cultural contexts, similar to how media outlets have political leanings.
- The Value of Human Connection: Em emphasises that AI cannot fully replace human interaction, especially in areas requiring emotional intelligence, body language interpretation, and genuine connection. They discuss how human fallibility and authenticity are becoming more valued in contrast to AI's perfection.
- Academic and Creative Impacts: They discuss university policies being developed to detect AI-generated content in student work. Em voices concerns about creative professions becoming obsolete while acknowledging that adaptation is necessary.
- Prompt Engineering: Craig explains how learning to communicate effectively with AI through "prompt engineering" can maximise its usefulness, including providing context about audience and format.

Resource Recommendations:
- Experiment with AI tools like Claude AI, ChatGPT, and Perplexity with a curious mindset
- Learn prompt engineering techniques to better communicate with AI
- "Klara and the Sun" by Kazuo Ishiguro - a novel exploring AI and humanity integration
- "The Psychology of Artificial Intelligence" by Tony Prescott
- Simon Wardley's essay on the dangers of creating a "new priesthood" of AI developers who control what information goes into models
8 months ago
42 minutes 47 seconds

Building a Better Geek
Amazing facts you didn't know about tech and diversity: Crash test dummies
In this season opener, hosts Em and Craig explore gender dynamics in tech, focusing on women's unique challenges with imposter syndrome, funding disparities, and opportunities in an evolving industry that increasingly needs diverse perspectives.

- Gender Differences in the Workplace: They explore how women often need to feel 94% qualified before applying for jobs, while men apply at around 60%. Em notes that imposter syndrome is more prevalent among women, while men tend to experience more shame after failure.
- Femtech Industry: They discuss the growing field of women's health technology, highlighting how the period-tracking app Flo (founded by two brothers) recently achieved unicorn status, raising questions about female representation in tech leadership.
- Funding Disparities: The hosts examine why female-founded tech companies, despite often having higher ROI, receive less funding than male-founded counterparts. They suggest this may partly stem from women's hesitancy to directly ask for what they want, compared to men's greater directness and sense of entitlement.
- Opportunities for Women in Tech: Craig highlights that tech companies actively seek female talent and suggests entry paths including project management and relationship-oriented roles. They discuss how technology now intersects with nearly every industry, creating diverse opportunities beyond traditional technical roles.
- Challenges in Male-Dominated Workplaces: They address the pitfall of organisations over-indexing on showcasing diversity, which can lead to women being overloaded with public-facing opportunities and experiencing burnout.

Featured Resources:
- The Imperfects podcast episode "Maybe It's Menopause" with Dr. Louise Newson
- "The Confidence Code" by Claire Shipman and Katty Kay

Hero of the Week: Volvo Cars for their EVA initiative, which researches safety for all body types (not just adult males) and shares this data openly with all car manufacturers, similar to how they previously shared seatbelt technology.
8 months ago
43 minutes 11 seconds

Building a Better Geek
How to Say No: Boundary Setting for the People-Pleasing Techie - THE FIREWALL
In this episode we discussed the importance of setting boundaries and effective time management, particularly for people working in the tech industry and introverts.

- Establishing personal values, priorities, and understanding the difference between organisational and individual goals is crucial.
- Boundaries are not about building walls or saying "no," but rather about creating psychological safety and empowering others to express their needs.
- The episode highlights the challenges of dealing with "toxic" people who lack empathy, and provides strategies for navigating such situations:
   - Avoiding engagement, maintaining a paper trail, and self-care.
   - Staying focused on the goals; not getting sidetracked.
   - Framing tasks and responsibilities in a way that gives them meaning and purpose.
   - Practicing weekly and daily planning, and "do, delegate, or ditch".
   - Getting comfortable with discomfort and having difficult conversations.
- The discussion emphasises the need for leaders to create an environment where employees feel empowered to express their boundaries.

References and Links:
- Gillespie, David. Toxic at Work. HarperCollins, 2023.
- Gillespie, David. Taming Toxic People. Penguin Random House, 2017.
- Kerr, James. Legacy: What the All Blacks Can Teach Us About the Business of Life. Constable, 2013.
- Stone, Douglas, Bruce Patton, and Sheila Heen. Difficult Conversations: How to Discuss What Matters Most. Penguin Books, 2010.
1 year ago
47 minutes 24 seconds

Building a Better Geek
Artificial Intelligence for Leaders, Creatives and Philosophers - THE SAD ROBOT
In this episode we discuss the impact of artificial intelligence (AI), specifically large language models, on various aspects of human life. We explore the impact on leaders and creatives, delving into the potential benefits and challenges of AI. Here are the key points:

• AI's rapid advancement and democratisation raise concerns about privacy, security, and accountability.
• Large language models can be trained on vast amounts of data, potentially including personal information without consent.
• Explainability and transparency are crucial for AI models used in high-stakes decision-making, such as loan approvals.
• Guardrails and fine-tuning can be applied to AI models to align them with specific use cases and ethical considerations.
• AI's ability to generate human-like content raises concerns about authenticity, creativity, and the potential for model collapse due to a lack of new input.
• Children growing up with AI may develop a natural understanding of its capabilities and limitations, potentially using it as a coaching tool.
• Involving humans in the loop is essential to verify AI outputs and ensure they align with human values and priorities.
• Experimentation with AI in low-risk environments can help individuals and organizations understand its potential and limitations.

References Mentioned:
• Polis (AI project by Audrey Tang)
• Marc Andreessen (Netscape co-founder) on Joe Rogan's podcast
1 year ago
1 hour 22 seconds
