When your AI agent books a rental car, it needs your driver's license, your credit card, access to your calendar, and permission to message your contacts, creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal.
At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why the "bigger is better" paradigm is sustained by hyperscaler monopoly interests, not by evidence.
The conversation also covers how AI companions deploy manipulation psychology, documented since the 1960s ELIZA experiments, against minors; why "open source AI" became marketing arbitrage exploiting the goodwill of the software community; and what sovereign AI actually requires beyond anxiety signifiers: democratic governance, trusted local data, and an answer to "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions, creating an existential backdoor threat to privacy (sketched in code below)
* AI companions weaponise known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits the software community's goodwill without delivering its benefits
* Sovereign AI requires democratic governance, not just geopolitical anxiety signalling
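To make the permissions point concrete, here is a minimal Python sketch of the access an OS-level agent would plausibly have to request just for the rental-car errand described above. The scope names, classes and task structure are invented for illustration and do not quote any real agent framework's API.

```python
# Hypothetical sketch (no real agent framework's API): the permission
# surface an OS-level agent needs just to book a rental car.
from dataclasses import dataclass


@dataclass
class PermissionRequest:
    scope: str   # what the agent wants to touch (invented scope names)
    reason: str  # the task step that justifies the grant


book_rental_car = [
    PermissionRequest("documents.drivers_license", "identity check at pickup"),
    PermissionRequest("payments.credit_card", "pay the booking deposit"),
    PermissionRequest("calendar.read_write", "find and block the travel dates"),
    PermissionRequest("messages.send", "text contacts the pickup plan"),
]

# The "fundamental backdoor" Whittaker describes: each grant is broad,
# OS-level access that cuts across app boundaries, including
# end-to-end encrypted apps like Signal.
for req in book_rental_car:
    print(f"grant {req.scope:<28} because: {req.reason}")
```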
GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation, co-founder of the AI Now Institute, and one of the most trusted voices in AI ethics, transparency and accountability. A decade of her work has profoundly shaped ethical AI frameworks, with impact spanning academia and industry.
At Google, Meredith was a core organiser of the 2018 Google Walkout, in which over 20,000 employees protested military AI work (Project Maven), surveillance, and the company's handling of sexual misconduct; that pressure contributed to Google's decision not to renew the Maven contract and to the departure of implicated executives.
As AI Now Institute co-founder, her research cuts through AI hype, grounding the discussion in what truly matters: power concentration, labour exploitation in AI pipelines, and the protection of fundamental rights, including privacy and the rule of law.
Her work exposes corporate capture, debunks "bigger is better" myths, reveals the sustainability costs of large-scale AI, and makes foundational research openly available.
Meredith has testified before the US Congress and leads Signal, one of the most trusted privacy-preserving messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, in how evaluation criteria can be manipulated, and in how benchmark gaming serves hyperscaler interests over real-world effectiveness.
Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking research examines AI datasets, uncovering that larger datasets carry higher rates of hateful content and pornography, debunking the assumption that scale dilutes such problems.
Her work on benchmarks and measurement demonstrates that purpose-built smaller models, given appropriate contextual data, often outperform larger models in real-world settings.
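As a toy illustration of the audit logic behind that finding, the sketch below measures the rate of flagged samples as a synthetic corpus grows. The vocabulary, the stand-in "slur" tokens and the keyword heuristic are all placeholders, not the classifiers or corpora used in Birhane's actual audits; the point is simply that when harmful content is a roughly constant fraction of the source, scaling up multiplies its absolute volume without lowering its rate.

```python
# Toy dataset audit: does the *rate* of flagged content fall as the
# corpus grows? (Placeholder data and heuristic, not a real audit.)
import random

FLAGGED_TERMS = {"slur_a", "slur_b"}  # stand-ins for a real hate lexicon


def is_flagged(sample: str) -> bool:
    return any(token in FLAGGED_TERMS for token in sample.split())


def flagged_rate(corpus: list[str]) -> float:
    return sum(is_flagged(s) for s in corpus) / len(corpus)


random.seed(0)
# 120 benign tokens + 2 flagged ones: harm is a small, fixed fraction.
vocab = ["the", "a", "photo", "of", "person", "dog"] * 20 + list(FLAGGED_TERMS)

for size in (1_000, 10_000, 100_000):
    corpus = [" ".join(random.choices(vocab, k=8)) for _ in range(size)]
    rate = flagged_rate(corpus)
    # The rate stays roughly constant while the absolute count grows 100x.
    print(f"n={size:>7}: flagged rate {rate:.3f} (~{int(rate * size)} samples)")
```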
Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: Contact through ADAPT Centre
* Follow their research and writing on AI accountability
MORE INFORMATION
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT's groundbreaking AI and data analytics research visit www.adaptcentre.ie/
KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
The Irish language is an important part of Ireland's culture, but it is a minority language and, like many ‘at-risk’ languages around the world, it needs to be protected.
Today we're talking about how AI can help to boost the Irish language and the importance of diverse data collection in building robust translation systems. We also hear how researchers are using natural language processing and other tools to help maintain the richness of the language and make it more accessible and available to those who use it.
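As a minimal illustration of the kind of machine-translation tooling under discussion (a sketch, not the eSTÓR project's own pipeline), the snippet below translates English into Irish, assuming the publicly available Helsinki-NLP/opus-mt-en-ga checkpoint on Hugging Face.

```python
# Minimal English -> Irish machine translation sketch, assuming the
# Helsinki-NLP/opus-mt-en-ga OPUS-MT checkpoint is available.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ga")

sentences = [
    "The Irish language is an important part of Ireland's culture.",
    "Minority languages need diverse, high-quality training data.",
]
for sentence in sentences:
    irish = translator(sentence)[0]["translation_text"]
    print(f"{sentence}\n -> {irish}\n")
```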
Our experts today are passionate about protecting minority languages and are working with the ADAPT Centre on technology to improve machine translation of the Irish language. They are postdoctoral researcher Dr Abigail Walsh and eSTÓR research assistant Gráinne Caulfield, both from Dublin City University.
THINGS WE SPOKE ABOUT
● Protecting minority languages with AI
● The limitations of data collection and the need for more diversity
● Using Natural Language Processing to collect the complexities of a language
● AI’s role in encouraging more use of an at-risk language
● Getting social media platforms on board to make language more accessible
GUEST DETAILS
Abigail Walsh is a PhD student at the ADAPT Centre in Dublin City University. Her research focuses on improving NLP for the Irish language, in particular the treatment and automatic processing of Multiword Expressions (MWEs). Abigail’s interests include Irish language technology, MWEs, NLP for low-resource languages, linguistic analysis, data processing, Machine Translation, and Machine Learning.
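As a toy illustration of the MWE-identification problem her research addresses, here is a greedy longest-match lookup over a tiny invented lexicon; real Irish MWE resources are far richer, and real matching has to cope with inflection and discontiguous expressions.

```python
# Toy multiword-expression (MWE) identification by greedy longest match.
# The two-entry lexicon is invented for illustration only.
MWE_LEXICON = {
    ("cúpla", "focal"),  # "a few words (of Irish)"
    ("ar", "bís"),       # "excited, on tenterhooks"
}
MAX_LEN = max(len(mwe) for mwe in MWE_LEXICON)


def find_mwes(tokens: list[str]) -> list[tuple[int, int]]:
    """Return (start, end) spans of the longest lexicon MWEs in tokens."""
    spans, i = [], 0
    while i < len(tokens):
        for length in range(min(MAX_LEN, len(tokens) - i), 1, -1):
            if tuple(tokens[i:i + length]) in MWE_LEXICON:
                spans.append((i, i + length))
                i += length - 1  # skip past the matched expression
                break
        i += 1
    return spans


tokens = ["bhí", "sí", "ar", "bís"]  # "she was excited"
print(find_mwes(tokens))  # [(2, 4)]
```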
Gráinne Caulfield is a recent graduate in Irish and French from Trinity College Dublin, currently working as a Research Assistant on the eSTÓR project. In this role she carries out the project’s outreach activities, including site visits to relevant stakeholders, managing its social media platforms and writing its newsletter, as well as translation and data processing duties.
MORE INFORMATION
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT visit www.adaptcentre.ie/
KEYWORDS
#irish #language #data #translation #machinetranslation #technology #ai