How do tech platforms develop clear content policies while balancing user freedom, regulatory requirements, and cultural contexts? What does it take to scale trust and safety efforts for billions of users in a rapidly changing digital landscape? Navigating these challenges requires foresight, transparency, and a deep understanding of user behavior.
In today’s episode of Click to Trust, we are joined by Cathryn Weems, Head of Content Policy at Character.AI, to take on the intricacies of building Trust and Safety policies. Cathryn shares her extensive experience shaping content policies at some of the world’s largest tech platforms, from crafting transparency reports to addressing complex government takedown requests. She offers unique insights into balancing global scalability with localized approaches and why clear, enforceable transparency reports are key to fostering trust.
In this episode, you’ll learn:
Jump into the conversation:
(00:00) Meet Cathryn Weems
(01:10) The evolution of Trust & Safety as a career path
(05:30) Tackling the complexities of content moderation at scale
(10:15) Crafting content policies for gray areas and new challenges
(14:40) Transparency reporting: Building trust through accountability
(20:05) Addressing government takedown requests and censorship concerns
(25:25) Balancing cultural context and global scalability in policy enforcement
(30:10) The impact of AI on content moderation and policy enforcement
(35:45) Cathryn’s journey as a female leader in Trust & Safety
(40:30) Fostering trust and improving safety on digital platforms
How can Trust and Safety professionals balance promoting online safety and mitigating real-world harms while also maintaining their own well-being?
In today’s episode of Click to Trust, we are joined by Heather Grunkemeier, Founder of Twinkle LLC and a seasoned Trust and Safety leader, who shares her personal journey through the highs and lows of working in this space. Heather discusses the mental health challenges faced by professionals in Trust and Safety and the importance of setting boundaries to prevent burnout. Drawing from her own experience, she provides invaluable advice on creating a sustainable work-life balance and the tools that have helped her along the way, including finding support in peers, prioritizing self-care, and asking the right questions in job interviews.
In this episode, you’ll learn:
Jump into the conversation:
(00:00) Introduction to Heather Grunkemeier
(02:00) Heather’s journey into Trust & Safety
(05:00) Personal burnout: The toll of overachieving in tech
(09:25) The emotional and mental impact of Trust & Safety roles
(12:40) How to balance boundaries and protect mental health in T&S
(16:00) The importance of boundary-setting for T&S professionals
(19:30) How companies can better support Trust & Safety teams
(23:45) Heather’s advice for new professionals in Trust & Safety
(28:15) Leading with compassion in Trust & Safety
(35:10) Heather’s consulting venture and advice for aspiring entrepreneurs
How can platforms safeguard vulnerable populations while meeting the needs of service providers? What roles do transparency and continuous education play in building trust and preventing incidents? Ensuring safety in caregiving platforms requires thoughtful strategies that go beyond the basics of vetting and monitoring.
In today’s episode of Click to Trust, we are joined by Jane Yu, Head of Trust and Safety at Papa, to explore the complexities of building trust in caregiving services. Jane shares her experiences developing safety protocols and fostering a community of trust across the gig economy. From revamping background check processes to launching innovative safety features like emergency response and ID verification, Jane provides a behind-the-scenes look at how safety is upheld for both caregivers and members alike.
Read Papa's Inaugural Transparency Report: https://resources.papa.com/transparency-report-2024
In this episode, you’ll learn:
Jump into the conversation:
Maintaining trust and safety online is a delicate balancing act.
What responsibility do platforms have to support their content moderators? What does it mean to be an ethical BPO? How can platforms promote user safety both on and offline?
In today’s episode of Click to Trust, Alice Hunsberger, VP of Trust and Safety at PartnerHero, dives deep into the evolving landscape of ethical content moderation and the often-overlooked challenges faced by content moderators and trust and safety professionals. From her early experiences on dating platforms like OkCupid and Grindr to leading trust and safety at PartnerHero, Alice shares valuable insights on balancing privacy, safety, and user expression on digital platforms.
In this episode, you’ll learn:
Jump into the conversation:
Keeping users safe is a complex task for all online platforms. Many try to enact content policies to protect us from harmful content, but are those policies enough? And just how enforceable are they anyway?
In this episode of Click to Trust, we examine the critical role content policies play in ensuring online safety. And to help us do that, we’ll hear from Sabrina (Pascoe) Puls, TrustLab’s Director of Trust and Safety Policy & Operations. She explains how content policies work behind the scenes to help protect users and platforms by preventing online harms like misinformation, hate speech, and more. Sabrina also reveals the overlooked challenges that come with developing and enforcing these rules.
And throughout the episode, we’ll question whether current safety measures are truly effective or if they unintentionally miss the mark, leaving both users and platforms vulnerable.
In this episode, you’ll learn:
Jump into the conversation:
(00:00) Introduction to Sabrina (Pascoe) Puls
(02:20) Differences Between Content Policies and Community Guidelines
(06:53) Common Pitfalls in Policy Creation
(09:19) Collaboration Between Policy and Engineering
(17:07) Automation vs. Human Moderation
(20:27) Convincing Leadership to Invest in Trust and Safety
Dating apps let users transform virtual interactions into real-world meetings. So, how effectively are these platforms addressing the myriad safety challenges?
In this episode of Double Click, we explore Hinge's latest initiative: Hidden Words, a user-driven moderation tool designed to empower daters by allowing them to filter out specific words, phrases, or emojis from their matches' first messages. But how effective is this new approach when it comes to increasing safety for users?
Benji Loney, TrustLab’s Chief Product Officer, explores the delicate balance between user empowerment and the risks of creating superficial safety measures. Additionally, Sabrina Pascoe, TrustLab’s Director of Customer Success and Vendor Operations, raises critical questions about the design and effectiveness of tools like Hidden Words, particularly in terms of their unintended consequences on reporting bad actors.
We’ll also hear from Jeff Dunn, Hinge’s VP of Trust and Safety, who provides a behind-the-scenes peek into the development and implementation of Hidden Words.
In this episode, you’ll learn:
Jump into the conversation:
[00:00] Introducing Hinge’s new feature
[00:58] What exactly is Hinge's Hidden Words feature?
[05:28] How does Hinge’s Hidden Words feature compare with Reddit's Automod product?
[09:31] The trouble with user-controlled moderation tools
[10:30] Hinge’s Jeff Dunn on the development of Hidden Words
In this first episode of Double Click, a Click to Trust mini-series, we delve into the uproar among online creators sparked by Adobe's updated terms of service. Allegations surfaced that Adobe might use user-generated content to train generative AI, leading to widespread concern about privacy and intellectual property rights. As social media ignited with screenshots and reactions, Adobe responded with clarifications and adjustments to mitigate the fallout.
Host Carmo Braga da Costa navigates through the nuances with expert insights and industry perspectives. From the initial confusion to Adobe's reassurances and subsequent legal challenges, we explore the broader implications for user trust in tech giants and the evolving landscape of digital rights.
Join us as we unpack the complexities of terms of service agreements in the era of AI and consumer privacy, questioning how companies balance innovation with ethical responsibilities. Will Adobe regain the trust of its disillusioned users, or has the damage been irreparably done?
In this episode, you’ll learn:
Jump into the conversation:
[06:49] Adobe subscription model causes backlash and fear.
[07:48] Adobe reassures users over GenAI concerns.
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we draw our series of episodes about election misinformation to a close with an examination from TrustLab CEO Tom Siegel. From the need for collaboration between platforms to combat misinformation to the impact of algorithmic amplification on the spread of misinformation to the role of AI in both exacerbating and addressing the problem, several key themes have emerged over the last six episodes. As several key elections draw closer, Tom reflects on these themes and informs us on how to stay safe (and vigilant) in our online lives.
Highlights:
Jump into the conversation:
[05:37] How echo chambers amplify misinformation.
[09:11] How AI impacts misinformation.
[12:26] What role do advertisers play in curbing misinformation online?
[21:47] Why media literacy is a crucial safeguard.
Media Literacy Resources:
Understanding AI and Deepfakes:
General Resources on Staying Informed:
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we speak with TrustLab’s own Xiaolin Zhuo about the European Union’s Code of Practice on Disinformation. The Code of Practice is a first-of-its-kind initiative that measures the prevalence and sources of misinformation across major social platforms. In our conversation, Xiaolin shared her team’s findings on online misinformation and the impact the Code of Practice can have on slowing down the spread of misinformation online.
Highlights:
Jump Into the Conversation:
[05:27] TrustLab’s study shows multiple metrics with unexpected platform patterns.
[11:50] Data helps combat misinformation by giving users an understanding of where information is coming from.
[16:47] Addressing misinformation in the EU requires education and collaboration between platforms and users.
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we’ll hear from computational sociologist Tom Davidson about how misinformation and social media have changed citizen engagement online. One of the many results is the growth of populist political parties fueled by online spaces where misinformation can run rampant. Tom explains the differences in engagement between populist and non-populist parties on platforms like X and Facebook.
Highlights:
Jump Into the Conversation:
[06:12] Populists use social media for direct connection.
[08:25] Hate speech and misinformation fuel populist engagement.
[17:28] Facebook is widely used in Europe, but Twitter is used selectively.
[29:17] Populists are targeting big tech and AI tools.
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we’ll hear from Karishma Shah, Program Manager of News Integrity Partnerships at Meta. In her interview with Carmo Braga da Costa, Karishma provides a behind-the-scenes look at how organizations like Meta are partnering with independent fact-checking organizations to fight against misinformation. Karishma explains the full scope of the challenge presented by AI-generated content, and expresses a need for collaboration among tech companies to address misinformation and protect users.
Highlights:
Jump Into the Conversation:
[05:45] Facebook's fact-checking program partners with organizations globally.
[14:51] How to support fact-checkers in facing online threats.
[17:50] AI-generated content and the potential risk for elections.
[21:44] User-generated platforms must address the misinformation challenge.
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we’ll hear from Katie Harbath, an expert in misinformation and election interference as well as the Chief Global Affairs Officer at Duco. In her interview with Tom Siegel, Katie shares her thoughts on the balance between freedom of speech and user safety and how the approaches taken by social media companies so far are not enough to safeguard their users. Katie’s insights highlight the role that individual responsibility plays in combating misinformation, and that consumers simply cannot wait for organizations to tackle the problem of misinformation for us.
Highlights:
Jump Into the Conversation:
[09:38] Misinformation caused by websites pivoting to claim 5G causes COVID.
[19:22] Algorithms designed to maximize inflammatory content consumption.
[24:45] What happened when Hertz Rent-a-Car inadvertently placed an ad on a misinformation website.
[31:26] The constant cat-and-mouse game with the accelerating pace of AI development.
Resources:
Anchor Change with Katie Harbath (newsletter)
Impossible Tradeoffs with Katie Harbath (podcast)
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this episode of Click to Trust, we’ll hear from Steven Brill, Co-CEO at NewsGuard, a journalistic organization whose mission is to counter misinformation online by rating the reliability and credibility of news and information. Since 2018, NewsGuard has been fighting an uphill battle against misinformation by collecting, updating, and deploying more than 6.9 million data points on more than 35,000 news and information sources.
2024 is a busy year for democracy! More countries will hold elections this year than at any point in the next two decades. So perhaps more than ever, it’s critical that we take a look at the growing amount of misinformation that threatens to influence or subvert these elections.
In this first episode in an ongoing series about misinformation, we speak with Tom Siegel to get a primer on what misinformation is, how it impacts our elections, and what we can do to mitigate those effects. Spoilers: Misinformation isn’t going away anytime soon.
On February 17, 2024, the Digital Services Act will go into effect, altering the way online platforms are held accountable for the content they host. Some see this legislative change as a necessary measure for safeguarding our online lives. Others see it as a potential hindrance to freedom of speech that has the unfortunate consequence of creating a less open internet.
But we can’t talk about the DSA without addressing the artificially generated elephant in the room. What impact will increasing online regulation have on artificial intelligence? In this episode of Click to Trust, we’ll start to answer that question with the help of Scot Pansing, principal of The Human Side of Technology. And as we close out our Digital Services Act investigation, we’ll check in with Tom Siegel for a look back at what we’ve learned and where we go from here with online regulation.
On February 17, 2024, the Digital Services Act will go into effect, altering the way online platforms are held accountable for the content they host. Some see this legislative change as a necessary measure for safeguarding our online lives. Others see it as a potential hindrance to freedom of speech that has the unfortunate consequence of creating a less open internet.
To better understand the impacts that the DSA could have, in this episode of Click to Trust we’re examining regulations that are already in place. We’ll hear from Ofcom’s Online Safety Senior Manager, Sophie Parker, about how the Online Safety Act (legislation similar to the DSA) is already changing the online landscape across the UK. And with the trend toward online regulation going global, we’ll check in with Australian eSafety Commissioner Julie Inman-Grant about how her department balances the criticism that regulations stifle free speech with the reality that something must be done to keep the vulnerable safe online.
On February 17, 2024, the Digital Services Act will go into effect, altering the way online platforms are held accountable for the content they host. Some see this legislative change as a necessary measure for safeguarding our online lives. Others see it as a potential hindrance to freedom of speech that has the unfortunate consequence of creating a less open internet.
In this first episode of Click to Trust’s debut three-episode story, hosts Carmo Braga da Costa (Head of Content at TrustLab) and Tom Siegel (Co-Founder and CEO of TrustLab) interview journalist and Everything in Moderation founder Ben Whitelaw and TrustLab’s own Benji Loney to discover what legislation like the Digital Services Act will mean for the future of the internet, and how the change it brings will affect more than just the big platforms like Facebook and X.
Online platforms are plagued by harmful content, from hate speech to illegal activities; there seems to be a news headline every week where someone’s well-being has been impacted. No wonder we’ve got trust issues. Click to Trust delves into the intricate world of safeguarding online spaces.
Guided by the wisdom of Tom Siegel, CEO and Co-Founder of TrustLab and previously VP of Trust & Safety at Google, the show covers the challenges of monitoring harmful content, combating digital threats, and empowering you to navigate the web with trust. Click to Trust is your go-to resource for promoting a safer and healthier digital environment for all.