We all know the saying: nothing is certain except death and taxes. But in today’s digital world, there’s a third certainty—data. In this episode of Privacy Chats with Rachel and John, we ask a big question: what happens to your digital footprint after you die?
💬 From personal anecdotes to platform policies, we explore:
⚰️ The default “limbo” state of accounts on Google, Facebook, and Apple
🛠️ Practical tools like Google’s Inactive Account Manager & Apple’s Legacy Contacts
📜 Estate planning laws (like RUFADAA) that give loved ones digital access
📉 The risks of doing nothing—and letting outdated Terms of Service decide for you
🔑 Why password managers might be your unsung digital heirs
This episode is a wake-up call (and a gentle nudge) to take action while you can. 👀 Tune in, reflect, and maybe even update your password vault.
Approximate timestamps:
00:00 🎙️ Intro – Death, Taxes... and Data?
01:45 🧠 What Happens to Our Data After We Die?
04:10 👵 A Generational Shift in Digital Footprints
06:30 🗃️ Google Drive & Inactive Account Manager
10:45 📨 Setting Up a Digital Legacy with Google
13:15 📘 Facebook Memorialization & Legacy Contacts
17:40 💔 A Real Example: Remembering a Friend on Facebook
20:30 ❓ Who Can Request Memorialization (and How)?
23:20 ⚖️ The Legal Void Around Post-Mortem Data
26:00 📜 Estate Planning, RUFADAA & Digital Assets
28:40 🛠️ Tools to Prep: Password Managers & Account Access
30:15 🍏 Apple’s Legacy Contacts – Pros & Limitations
33:00 🎶 Why Digital Purchases Don’t Really Belong to You
34:50 🧩 Recap – 4 Things to Do Before You Die (Digitally)
35:45 📝 Disclaimer & Next Episode Preview
________________________________________
While this video's content is based on our own thoughts and professional opinions, it was also made possible through the consultation of the following resources:
Can "Legitimate Interest" justify using personal data to train AI models? 🤔That’s the central tension we explore in episode 35 of of Privacy Chats. What started as a follow-up to our global privacy law recap turned into a deep dive into how companies lean on legitimate interest to process personal data at scale. We explore how this legal basis stacks up against user's basic rights, what regulators like NOYB are pushing back on, and whether current rules strike the right balance in an AI-driven world.Approximate timestamps: 00:00 🎙️ Intro & Recap of Global Privacy Laws 01:30 🧠 What Is “Legitimate Interest”? 04:10 ⚖️ GDPR’s Six Legal Bases Explained 10:25 🧪 ICO’s Three-Part Test (Purpose, Necessity, Balancing) 14:00 🚗 Rachel’s Car Analogy for Legal Basis 16:00 ❌ When Legitimate Interest Doesn’t Apply 19:45 📩 NOYB’s Cease & Desist to a Major Tech Company 23:10 🤖 AI Training, Consent & the “Right to Be Forgotten” 27:20 🕵️♂️ Is Legitimate Interest a Catch-All for Big Tech? 30:00 💡 Should There Be a New Legal Basis for AI Innovation? 33:15 🔄 The Privacy Paradox & Balancing Rights 35:45 📝 Disclaimer____________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________
While this video's content is based on our own thoughts and professional opinions, it was also made possible through the consultation of the following resources:
1. What counts as legitimate interest?: https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/lawful-basis/legitimate-interests/what-is-the-legitimate-interests-basis/#what_counts
2. noyb sends 'cease and desist' letter over AI training. European Class Action as potential next step (14 May 2025): https://noyb.eu/en/noyb-sends-meta-cease-and-desist-letter-over-ai-training-european-class-action-potential-next-step
In this episode of Privacy Chats, we explore the growing global momentum behind comprehensive privacy regulations.
With over 140 countries having embraced laws inspired by the GDPR, Rachel and John zoom in on four frameworks in particular: GDPR (EU), LGPD (Brazil), APPI (Japan), and PIPA (South Korea) — highlighting how they align (and diverge) across key areas including:
Scope and extraterritorial reach
Lawful bases for processing
Data subject rights
Sensitive data definitions
DPO and DPIA requirements
Breach notification rules
Enforcement, sanctions, and international data transfers
Along the way, we analyze which countries are still lagging, where U.S. state laws fit into the picture, and how global organizations can navigate compliance across borders.
________________________________________
This episode was inspired by the following publications and resources:
European Commission: GDPR
https://commission.europa.eu/law/law-topic/data-protection_en
ANPD Brazil: LGPD
https://iapp.org/media/pdf/resource_center/Brazilian_General_Data_Protection_Law.pdf
South Korea PIPC: PIPA
https://www.pipc.go.kr/eng
Japan PPC: APPI
https://www.japaneselawtranslation.go.jp/en/laws/view/2616/en
On Episode 33 of Privacy Chats, we revisit the United States’ evolving approach to AI governance, breaking down the latest policy mandates and exploring what they mean for responsible AI procurement, innovation, and public trust. And for a bit of perspective, we rewind the clock—comparing today’s AI frontier to past regulatory turning points, like the rise of the FDA.
________________________________________
This episode was informed by the following publications:
- https://learn.g2.com/eu-ai-continent-action-plan
- https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/
- https://www.federalregister.gov/documents/2023/11/01/2023-24283/safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence
- https://www.theverge.com/2025/1/21/24348504/donald-trump-ai-safety-executive-order-rescind
This is a 3-part episode, so see below for the time-stamps discussing each topic:
🇪🇺 Europe’s AI Ambitions: 00:00-25:00
The EU is stepping up with new announcements aimed at accelerating its AI strategy. But is it too late to compete with the US and China, and how might this shape their positioning as key global privacy regulators? We unpack the optimism, the funding, and the political will behind Europe's AI push.
🧬 23andMe’s Identity Crisis: 25:00-40:50
Once a pioneer in consumer genomics, 23andMe is now navigating a very public privacy backlash and a Chapter 11 bankruptcy. We explore what went wrong—and what it signals about trust in data-driven health companies.
🧠 Chat Memory Is Here: 40:50-55:44
OpenAI just launched a game-changing “memory” feature that remembers your preferences and builds continuity across chats. Helpful or a little too close for comfort? We dive into the privacy implications and use cases.
________________________________________
This episode was informed by the following publications:
Europe’s AI Optimism:
European Commission's latest AI initiatives look to drive competitiveness, broader integration: https://iapp.org/news/a/european-commission-s-latest-ai-initiatives-look-to-drive-competitiveness-broader-integration
Shaping Europe’s leadership in artificial intelligence with the AI continent action plan: https://commission.europa.eu/topics/eu-competitiveness/ai-continent_en
GDPR Simplification (LinkedIn post written by Stephan Geering): https://www.linkedin.com/pulse/gdpr-simplification-wish-list-stephan-geering-lnbge?utm_source=chatgpt.com
23andMe:
Congress has questions about 23andMe bankruptcy: https://techcrunch.com/2025/04/19/congress-has-questions-about-23andme-bankruptcy/?utm_source=chatgpt.com
How 23andMe's bankruptcy led to a run on the gene bank: https://www.npr.org/2025/04/25/1247139353/23andme-data-genome-bankruptcy-privacy-customer-data?utm_source=chatgpt.com
What will happen to your 23andMe data — and can you delete it?: https://www.thetimes.com/us/news-today/article/what-happen-23andme-data-how-delete-tfjn5lfmx?utm_source=chatgpt.com&region=global
ChatGPT Memory Updates:
ChatGPT can now remember and reference all your previous chats (Ars Technica): https://arstechnica.com/ai/2025/04/chatgpt-can-now-remember-and-reference-all-your-previous-chats/
Memory FAQ - OpenAI: https://help.openai.com/en/articles/8590148-memory-faq
HIPAA is a critical piece of the nuanced privacy puzzle in the US, but it’s often misunderstood — frequently reduced to just a medical form or mistaken for a blanket privacy law. On Episode 31 of Privacy Chats with Rachel and John, we bring you a HIPAA “crash course”: who it’s for, what data is in scope, and the consequences of non-compliance.
And, just for fun, here are some silly HIPAA puns brought to you by ChatGPT:
"HIPAA-chondriac" – Someone who constantly worries about privacy breaches.
"HIPAA-ly ever after" – What you hope for after a successful compliance audit.
"HIPAA-crite" – Someone who preaches privacy but doesn’t follow the rules.
"HIPAA-critical situation" – When someone is taking compliance a little too seriously.
________________________________________
This episode was directly informed by the US Department of Health and Human Services’ dedicated HIPAA page and by publications from the HIPAA Journal.
On Episode 30 of Privacy Chats with Rachel and John, we take a look back at the eventfulness of 2025 thus far, including DeepSeek’s shake-up of the global AI tech sector and the continuing policy implications of the new US presidential administration’s rollback of previous AI safety mandates. Have AI acceleration priorities eclipsed AI safety priorities? If so, how long will it take for the greatest harm to be felt? Would greater enforcement pressure lead to more compliance?
This episode was informed by the following sources:
"Congressional Committee Kickstarts New Federal Privacy Law Dialogue"
"DeepSeek’s Popular AI App Is Explicitly Sending US Data to China"
"DeepSeek rushes to launch new AI model as China goes all in" - Reuters
"A view from DC: US House Republicans organize a privacy working group"
"A view from DC: The first few days of Trump’s AI and privacy agenda"
Jen Easterly On The Future of Cybersecurity and Her Agency's Survival
Foreign Hackers Are Using Google’s Gemini in Attacks on the US
In Episode 29 of Privacy Chats, Rachel and John cover the growing concerns over cybersecurity and privacy as they relate to Chinese-made consumer technology, and how numerous countries have responded to such risks. They discuss the ever-evolving position of the U.S. with respect to TikTok and the downstream effects of our political landscape on U.S.-based technology companies.
________________________________________
The episode was informed by the following resources & publications:
Decentralized social media is gaining traction, but is it really the future of online discourse?
Many users are begging for a paradigm shift in how social media platforms are governed today. This stems from growing concerns about mainstream, centralized platforms’ impact on privacy, democracy, mental health, and public discourse.
In episode 28 of Privacy Chats, we investigate the rise of platforms like BlueSky, exploring how they work, their privacy implications, and whether they truly offer an alternative to centralized giants like X and Threads. We succinctly break down the AT Protocol, content moderation models, and the broader challenges of balancing user control with accessibility.
Tune in for an insightful discussion and decide for yourself whether decentralized platforms truly represent a new paradigm for social media, or whether they’re destined to remain a trend.
This episode was informed by the following sources:
Replying to those ridiculous “hi, how are you?” texts sounds like innocent fun. But what if there’s something much more complex going on under the surface?
On episode 27 of Privacy Chats with John and Rachel, we investigate the ever-growing popularity of text-based scams, from "warming" phone numbers to "pig butchering" schemes (wild name, huh?), and “smishing” attempts disguised in everyday transactions, such as package tracking and road toll notices. We break down how scammers manipulate victims, the tactics used to gain trust, and how these scams have continued to evolve globally.
With practical security tips and insights from authoritative sources, our goal is to equip listeners with the knowledge to keep their friends and families safe from these increasingly sophisticated digital threats.
This episode was informed by the following publications & forums:
On Episode 26 of Privacy Chats with Rachel and John, we build on our prior episode uncovering gift card scams. This time, we focus on how scammers trick people into purchasing and sending gift card details under false pretenses, such as fake IRS threats, tech support scams, or impersonation schemes; what the cards are used for; why these scams are effective; and, most importantly, key prevention tips to help you and your family keep your wits about you in the new year!
........................................
Resources that inspired our episode:
Maryland Legislation ~ Consumer Protection - Retail Sales of Gift Cards (Gift Card Scams Prevention Act of 2024)
https://mgaleg.maryland.gov/mgawebsite/Legislation/Details/hb0896?ys=2024RS
https://mgaleg.maryland.gov/2024RS/fnotes/bil_0006/hb0896.pdf
Gift Card Exchange website
https://www.cardcash.com/sell-gift-cards/
Reddit Thread about Scammers extracting money from gift cards
Jim Browning YouTube
https://www.youtube.com/watch?v=h9Rk51WQC9A
U.S. News article about sites to sell gift cards
https://money.usnews.com/money/personal-finance/spending/articles/sites-to-sell-gift-cards-online
Gift cards are a fun way to spread holiday joy, removing the guesswork for the giver while ensuring the recipient gets exactly what they’re looking for… until you find that a scammer has interfered in the process!
On Episode 25 of Privacy Chats with Rachel and John, we break down the vulnerabilities and scams associated with gift cards, particularly those involving the theft or misuse of funds loaded onto gift cards, based on John’s own experiment evaluating the security of cards across different vendors.
Tune in to learn about the various security features (or lack thereof) on common gift cards, how scams often occur, and different prevention strategies to minimize risks for gift card givers and receivers.
People often wonder if their phone is listening to them when they see advertisements related to things they recently discussed but never searched for on their phone. Could this be true? Is your phone listening to you?
On Episode 24 of Privacy Chats with Rachel and John, we look at published findings of other people who have dived into this topic.
Artificial Intelligence continues to move the boundary of hypothetical technologies - such as brain-reading devices - closer and closer to reality. How close we are to that boundary today, and where that boundary sits in terms of becoming a threat to privacy and civil liberties, remains a subject of debate.
On Episode 23 of Privacy Chats, Rachel and John discuss the progress made in recent neurotechnology advancements as well as other affect-recognition technologies, including non-invasive video reconstruction using brain activity and brain fingerprinting (yes - you read that right!) What can, or should, we expect as far as effects on law enforcement and personalized advertising activities? Is it too early to tell? Tune in and join the conversation to find out!
This episode was inspired and informed by the following publications:
Cinematic Mindscapes: High-quality Video Reconstruction from Brain Activity (19 May 2023)
Brain fingerprinting: a comprehensive tutorial review of detection of concealed information with event-related brain potentials
Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993)
What will a second Trump term mean for US Privacy and AI Policy?
Former president Joe Biden set a record number of Executive Orders related to Artificial Intelligence and National Security into motion during his term, but there’s still plenty of work to be done. FTC Chair Lina Khan pursued legal action against US technology companies in an unprecedented way during Biden’s term and pushed the boundaries of the FTC’s enforcement role in the process. And although containing the harmful impacts of AI remains a bipartisan goal, the question of how far to go - considering the anticipated tradeoffs to innovation - is where the divide continues to exist.
How will the new administration impact the direction of Privacy, Security, and Artificial Intelligence policy, both on a domestic and international scale? On episode 22 of Privacy Chats with Rachel and John, Rachel talks about her 3 key predictions regarding how the Trump administration will likely respond to the activities set into motion under former President Joe Biden.
…..…..…..…..
This episode makes reference to the following Executive Orders instituted under former president Joe Biden:
(EO 14110) Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
This episode was informed by research involving the following publications:
NextGov.com: Trump promised to repeal Biden’s AI executive order — here’s what to expect next
IAPP News: A view from DC: What does a second Trump presidency mean for privacy, AI governance?
IAPP News: A view from DC: The beginning of the end of the free flow of data
Oversight.house.gov: Oversight Committee Releases Staff Report Finding FTC Chair Khan Abused Authority to Advance the Biden-Harris Administration’s Agenda
Security Infowatch: What can the security industry expect from a second Trump term?
Tech Press: Where US Tech Policy May Be Headed During a Second Trump Term
Wikipedia: Donald Trump TikTok controversy
Reason.com: https://reason.com/2024/04/24/another-illegal-power-grab-from-the-ftc/
White House.gov: Fact Sheet: Key AI Accomplishments in the Year Since the Biden-Harris Administration’s Landmark Executive Order
Doxing is “the action or process of searching for and publishing private or identifying information about a particular individual on the internet, typically with malicious intent.” - Oxford Languages
New technologies and faster processing of information are making it far easier to find and access information about us. AI will take this to the next level, allowing for rapid aggregation of disparate data.
On Episode 21 of Privacy Chats with Rachel and John, we look at new technologies that make it easier to identify people and access information about them, and why we might want to be more cautious about what information we allow to be out there for people to access and utilize.
GenAI (Generative Artificial Intelligence), and AI in general, continues to be all the rage, permeating nearly every business conversation involving automation, scalability, and improved insights in the pursuit of minimizing cost and maximizing revenue. Given that Privacy and AI are inseparable concepts at their core - what does this increased emphasis on AI mean for professionals subject to these technologies in their day-to-day roles?
On Episode 20 of Privacy Chats, Rachel and John reflect on the industrial revolution’s influence on surveillance, standardization, and automation; how these practices have influenced the economic imperatives of AI; and how AI will continue to challenge the concept of workers’ reasonable expectation of Privacy in the modern workplace.
Privacy Engineering is an essential function of mature Privacy Programs, joining aspects of software engineering, data protection, privacy compliance, and privacy risk management. On the latest episode of Privacy Chats, Rachel interviewed Jay Averitt, a Privacy Engineer at Microsoft, who shares his career journey, how he would describe Privacy Engineering at an “ELI5” level, and what in particular makes Privacy Engineering a challenging yet meaningful career. Jay regularly writes about the Privacy industry on LinkedIn, initiating insightful conversations regarding common challenges faced and crowdsourcing practical, real-world solutions with other Privacy professionals: https://www.linkedin.com/in/jay-averitt/.
Innovation is the spark that ignites the engine of progress, all of which is fueled by cooperation and a mutual interest in creating a better world. We tend to look at privacy from the perspective of what is, rather than from the perspective of what could be.
On Episode 20 of Privacy Chats with Rachel and John, we spin a new narrative on how data could be owned, managed, and distributed in a way that brings a new level of transparency and control to individuals and contributes to the greater benefit of humanity.
The AI tool Koe Recast claims that its voice-generating software requires just a few sentences of audio to replicate your voice. If that’s not suspicious enough, other options need as little as three seconds to capture and reproduce a voice to a convincing degree. Long gone are the days of mere Caller ID spoofing!
Scammers these days might just be using ChatGPT to help write a convincing story as well. For scammers, such AI impersonation tools are surely a small price to pay given the $2.7B that US consumers lost to imposter scams in 2023 alone. And although it’s been the tried-and-true method for decades, the phone call is no longer the scammer’s medium of choice. According to the FTC, the highest overall reported losses were caused by scammers on social media. We’ll have to wait and see if the FTC’s amendments to their Trade Regulation Rule meaningfully improve the situation in 2024.
In the meantime - you can tune into Privacy Chats with Rachel and John to learn more about the very real implications of high quality deep fakes found in our everyday lives. What’s cooler than out-scamming the scammer, anyway?