When your AI agent books a rental car, it needs your driver's license, credit card, calendar access and permission to message your contacts, creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal.
At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why the "bigger is better" narrative serves hyperscaler monopolies rather than the evidence.
They discuss how AI companions deploy manipulation psychology, understood since the 1960s ELIZA chatbot, against minors; why "open source AI" became marketing arbitrage exploiting software-community goodwill; and what sovereign AI actually requires beyond anxiety signifiers: democratic governance, trusted local data, and answers to "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions, creating an existential privacy backdoor
* AI companions weaponize known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits software community goodwill without delivering benefits
* Sovereign AI requires democratic governance, not just geopolitical anxiety signaling
GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation and co-founder of the AI Now Institute—one of the most trusted voices in AI ethics, transparency and accountability. Her decade of work has profoundly shaped ethical AI frameworks, bringing impact from academia to industry.
At Google, Meredith was a core organizer of the 2018 Google Walkout, in which over 20,000 employees protested military AI work (Project Maven), surveillance, and the company's handling of sexual misconduct, pressure that led Google not to renew the Maven contract and to the departure of implicated executives.
As AI Now Institute co-founder, her research cuts through AI hype, grounding discussions on what truly matters: power concentration, labour exploitation in AI pipelines, and protecting fundamental rights including privacy and rule of law.
Her work exposes corporate capture, debunks "bigger is better" myths, reveals sustainability costs, and provides foundational open source research.
Meredith has testified before the US Congress and leads Signal, one of the most trusted privacy-preserving messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, evaluation criteria manipulation, and how benchmark gaming serves hyperscaler interests over real-world effectiveness.
Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking research examines AI datasets, showing that larger datasets contain higher rates of hateful content and pornography, debunking the assumption that scale dissipates problems.
Her work on benchmarks and measurement demonstrates that purpose-built smaller models often outperform larger models in real-world contexts with appropriate contextual data.
Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: Contact through ADAPT Centre
* Follow their research and writing on AI accountability
MORE INFORMATION
You can learn more about the Sea-Scan project and other cutting-edge research at Trinity College Dublin's ADAPT Centre here: www.adaptcentre.ie/
Adapt Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT's groundbreaking AI and data analytics research visit www.adaptcentre.ie/
KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
How AI is Exposing the Hidden Dangers of Gambling Ads
As AI becomes a bigger part of our world it can be utilised to power research and studies in ways that were never possible before.
Today we hear about new research that has used AI to expose the harms of gambling marketing in sport. Through analysis of sporting programmes, social media consumption and focus groups with young people, they discovered the shocking reach these companies have into our everyday lives.
Our expert guests are calling for better regulation and a step away from the idea of individual responsibility. Aphra Kerr is Adjunct Professor of Sociology at Maynooth University and Professor of Information and Communication Studies at University College Dublin. Dr Paul Kitchin is a senior lecturer in the School of Sport and Exercise Science at Ulster University in Belfast.
THINGS WE SPOKE ABOUT
● Using AI to gather concrete data on gambling trends
● The rise in gambling exposure due to social media and deregulation
● Analysis through focus groups and media consumption
● Social impacts and considerations away from the individual
● Putting pressure on sporting bodies and gambling companies
GUEST DETAILS
Dr. Aphra Kerr is a Full Professor of Information and Communication Studies at University College Dublin and Senior Adviser at the UCD Centre for Digital Policy. She is Adjunct Professor of Sociology at Maynooth University, a Co-PI at the ADAPT Centre, scientific lead of the Transparent Digital Governance strand, and co-lead of the Autonomy and Responsibility challenge. Her ADAPT research focuses on the ethics and values underpinning the design and governance of AI, AI-related public policy, and social expectations of AI. She is also working on projects related to algorithmic and AI literacy, young people's engagement with media and gaming, histories of creative computing, and media concentration and power. Aphra has over twenty years' experience researching digital content and technology, with a focus on digital media and digital games.
Paul Kitchin is a Senior Lecturer in the School of Sport and Exercise Science. His PhD investigated the effects of organisational change on managers, staff, and youth participants in para-sport. He is interested in how wider health and social outcomes are developed through sport; relevant topics include disability, sport and media, marketing, gambling, and youth. He is a Senior Fellow of Advance HE, and his teaching and supervision focus on leadership, management, and justice in and through sport organisations.
MORE INFORMATION
All-Island report finds young people exposed to high levels of gambling marketing across sport and media: https://www.adaptcentre.ie/news-and-events/all-island-report-finds-young-people-exposed-to-high-levels-of-gambling-marketing-across-sport-and-media/
Adapt Radio is produced by DustPod.io for the Adapt Centre
For more information about ADAPT visit www.adaptcentre.ie/