When your AI agent books a rental car, it needs your driver's license, your credit card, access to your calendar, and permission to message your contacts, creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal.
At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why "bigger is better" serves hyperscaler monopolies, not the evidence.
They explore how AI companions turn manipulation psychology, known since the 1960s chatbot ELIZA, against minors; why "open source AI" became marketing arbitrage exploiting the goodwill of the software community; and what sovereign AI actually requires beyond anxiety signifiers: democratic governance, trusted local data, and an answer to the question "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions creating existential privacy backdoor threats
* AI companions weaponise known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits software community goodwill without delivering benefits
* Sovereign AI requires democratic governance, not just geopolitical anxiety signalling
GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation, co-founder of the AI Now Institute, and one of the most trusted voices in AI ethics, transparency and accountability. Over more than a decade, her work has shaped ethical AI frameworks and carried its impact from academia into industry.
At Google, Meredith was a core organiser of the 2018 Google Walkout, in which over 20,000 employees protested military AI work (Project Maven), surveillance, and the company's handling of sexual misconduct. The pressure contributed to Google declining to renew its military contract and to the departure of implicated executives.
As co-founder of the AI Now Institute, she produces research that cuts through AI hype, grounding the discussion in what truly matters: the concentration of power, labour exploitation in AI pipelines, and the protection of fundamental rights, including privacy and the rule of law.
Her work exposes corporate capture, debunks the "bigger is better" myth, reveals sustainability costs, and provides foundational research on "open source" AI.
Meredith has testified before the US Congress and leads Signal, one of the most trusted privacy-friendly messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, the manipulation of evaluation criteria, and how benchmark gaming serves hyperscaler interests over real-world effectiveness.
Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking audits of AI datasets show that larger datasets contain higher rates of hateful content and pornography, debunking the assumption that scale dissipates these problems.
Her work on benchmarks and measurement demonstrates that smaller, purpose-built models with appropriate contextual data often outperform larger models in real-world settings.
Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: Contact through ADAPT Centre
* Follow their research and writing on AI accountability
MORE INFORMATION
You can learn more about this and other cutting-edge research at Trinity College Dublin's ADAPT Centre here: www.adaptcentre.ie/
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT's groundbreaking AI and data analytics research, visit www.adaptcentre.ie/
KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
As AI continues to shape our society, how can we make sure that it doesn't harm minority groups or exacerbate inequalities?
Today we hear from two experts who are part of a brand-new research group working to keep AI accountable. The AI Accountability Lab focuses on critical issues across broad topics, from the examination of opaque technological ecologies to audits of specific models and training datasets. We hear how researchers are trying to join the dots between evidence and policy, and how better AI awareness is key to preventing harm.
Our guests today are leading the way in AI accountability. Dr Abeba Birhane is a cognitive scientist and Research Fellow at the ADAPT Research Centre in Ireland, and Dr Roel Dobbe is an Assistant Professor in Technology, Policy & Management at Delft University of Technology, focusing on Sociotechnical AI Systems.
THINGS WE SPOKE ABOUT
● The AI Accountability Lab
● How AI has potential to entrench inequality in society
● Joining the dots between AI research and policies
● Addressing misrepresented concepts in AI training models
● Why AI can be inaccurate even with perfect data models
GUEST DETAILS
Dr Abeba Birhane is a cognitive scientist researching human behaviour, social systems, and responsible and ethical Artificial Intelligence (AI). Abeba recently finished her PhD, in which she explored the challenges and pitfalls of automating human behaviour through critical examination of existing computational models and audits of large-scale datasets. Abeba is currently a Senior Fellow in Trustworthy AI at the Mozilla Foundation. She is also an Adjunct Lecturer/Assistant Professor at the School of Computer Science and Statistics at Trinity College Dublin, Ireland.
https://abebabirhane.com/
Dr Roel Dobbe is an Assistant Professor in Technology, Policy & Management at Delft University of Technology, focusing on Sociotechnical AI Systems. He received an MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018), where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute and New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions, focusing on issues related to safety, sustainability and justice. Roel's system-theoretic lens makes it possible to address the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis, engineering design and governance, with the aim of empowering domain experts and affected communities. His results have informed various policy initiatives, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in the Netherlands.
https://www.tudelft.nl/staff/r.i.j.dobbe/
MORE INFORMATION
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT, visit www.adaptcentre.ie/
KEYWORDS
#AI #accountability #transparency #public #policies #data #auditing #bias