When your AI agent books a rental car, it needs your driver's license, credit card, calendar access and permission to message your contacts—creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal.
At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why "bigger is better" serves hyperscaler monopolies rather than the evidence.
They discuss how AI companions weaponise manipulation psychology known since the days of Eliza against minors, why "open source AI" became marketing arbitrage exploiting the goodwill of the software community, and what sovereign AI actually requires beyond anxiety signifiers—including democratic governance, trusted local data, and an answer to the question "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions, creating existential privacy backdoor threats
* AI companions weaponise known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits software community goodwill without delivering its benefits
* Sovereign AI requires democratic governance, not just geopolitical anxiety signalling
GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation and co-founder of the AI Now Institute—one of the most trusted voices in AI ethics, transparency and accountability. Her decade of work has profoundly shaped ethical AI frameworks, with impact spanning academia and industry.
At Google, Meredith was a core organiser of the 2018 Google Walkout, in which over 20,000 employees protested military AI use (Project Maven), surveillance, and sexual misconduct—pressure that led Google to discontinue its military contract and oust implicated VPs.
As AI Now Institute co-founder, her research cuts through AI hype, grounding discussions in what truly matters: power concentration, labour exploitation in AI pipelines, and protecting fundamental rights, including privacy and the rule of law.
Her work exposes corporate capture, debunks "bigger is better" myths, reveals sustainability costs, and provides foundational research on "open source" AI.
Meredith has testified before the US Congress and leads Signal—one of the most trusted privacy-friendly messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, the manipulation of evaluation criteria, and how benchmark gaming serves hyperscaler interests over real-world effectiveness.
Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking research examines AI datasets, uncovering that larger datasets contain higher rates of hateful content and pornography—debunking the assumption that scale dissipates these problems.
Her work on benchmarks and measurement demonstrates that smaller, purpose-built models with appropriate contextual data often outperform larger models in real-world contexts.
Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: Contact through ADAPT Centre
* Follow their research and writing on AI accountability
MORE INFORMATION
You can learn more about this and other cutting-edge research at Trinity College Dublin's ADAPT Centre here: www.adaptcentre.ie/
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT's groundbreaking AI and data analytics research visit www.adaptcentre.ie/
KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
You cannot move an inch these days without encountering takes on the future of AI and technology, and concerns about how it may impact our lives for better or worse.
Today we're talking about new ways to approach the ethics of AI and digital technologies, and the research being done to address the questions and dilemmas these new technologies raise.
Our experts today are from the ADAPT Centre and are researching alternative methods of ethical and critical thinking in the design of digital technologies and AI.
They are Dr Marguerite Barry, Assistant Professor at the School of Information and Communication Studies at University College Dublin (UCD), and Dr Paul O'Neill, postdoctoral researcher at the same school.
THINGS WE SPOKE ABOUT
● Critical thinking and ethical design in AI
● Incorporating ethics at the research stage
● The importance of public engagement on future uses of digital technologies
● Beta Festival: Using art to encourage engagement and critical thinking
● Interdisciplinary, multidisciplinary and transdisciplinary research
GUEST DETAILS
Dr Marguerite Barry is Assistant Professor at the School of Information and Communication Studies at University College Dublin (UCD). Her research area is human-computer interaction (HCI) and digital media communication studies, with a focus on ethical design and development in policy and practice. She is a funded investigator with ADAPT on the Transparent Data Governance strand, where she is working on the Autonomy & Responsibility challenge. This involves interdisciplinary projects to support multi-stakeholder engagement in AI technologies from design to deployment.
Dr Paul O'Neill is an artist and researcher whose practice and research are concerned with the implications of our collective dependency on networked technologies and infrastructures. Paul is a postdoctoral research fellow at the ADAPT Centre at University College Dublin, where he focuses on the ethics and design of artificial intelligence systems. He is also a co-curator of the Dublin Art and Technology Association (D.A.T.A.).
MORE INFORMATION
ADAPT Radio is produced by DustPod.io for the ADAPT Centre
For more information about ADAPT visit www.adaptcentre.ie/
KEYWORDS
#technology #ethics #data #research #ai #art