ADAPT Radio
The ADAPT Centre
80 episodes
2 days ago
When your AI agent books a rental car, it needs your driver's license, credit card, calendar access and permission to message your contacts, creating what Meredith Whittaker calls a "fundamental backdoor" that threatens apps like Signal. At ADAPT ADVANCE 2025, Signal Foundation President and AI Now Institute co-founder Meredith Whittaker joined Dr Abeba Birhane for a fireside chat dissecting why "bigger is better" serves hyperscaler monopolies rather than the evidence. The conversation covers how AI companions weaponise the manipulation psychology of the 1970s Eliza chatbot against minors, why "open source AI" became marketing arbitrage exploiting the goodwill of the software community, and what sovereign AI actually requires beyond anxiety signifiers: democratic governance, trusted local data, and an answer to "who owns the deployment infrastructure?"
THINGS WE SPOKE ABOUT
* "Bigger is better" AI myth protects hyperscaler monopolies, not users
* Agentic AI demands sweeping permissions, creating existential privacy backdoor threats
* AI companions weaponize known psychological manipulation tactics against vulnerable minors
* "Open source AI" exploits software community goodwill without delivering its benefits
* Sovereign AI requires democratic governance beyond today's geopolitical anxiety signalling

GUEST DETAILS
Meredith Whittaker is President of the Signal Foundation and co-founder of the AI Now Institute, and one of the most trusted voices in AI ethics, transparency and accountability. Her decade of work has profoundly shaped ethical AI frameworks, with impact spanning academia and industry. At Google, Meredith was a core organizer of the 2018 Google Walkouts, in which over 20,000 employees protested military AI use (Project Maven), surveillance, and sexual misconduct, pushing Google to discontinue its military contract and oust implicated VPs. As AI Now Institute co-founder, her research cuts through AI hype, grounding discussions in what truly matters: power concentration, labour exploitation in AI pipelines, and protecting fundamental rights including privacy and the rule of law. Her work exposes corporate capture, debunks "bigger is better" myths, reveals sustainability costs, and provides foundational open source research. Meredith has testified before the US Congress and leads Signal, one of the most trusted privacy-friendly messaging apps. Her background building large-scale network measurement systems at Google gives her unique expertise in data quality, evaluation criteria manipulation, and how benchmark gaming serves hyperscaler interests over real-world effectiveness.

Dr Abeba Birhane is founder and director of the AI Accountability Lab at Trinity College Dublin. Her groundbreaking research examines AI datasets, uncovering how larger datasets contain higher rates of hateful content and pornography and debunking the assumption that "bigger dissipates problems". Her work on benchmarks and measurement demonstrates that purpose-built smaller models, given appropriate contextual data, often outperform larger models in real-world contexts.

Connect with the guests:
* Signal Foundation: signal.org
* AI Now Institute: ainowinstitute.org
* AI Accountability Lab: contact through the ADAPT Centre
* Follow their research and writing on AI accountability

MORE INFORMATION
You can learn more about the Sea-Scan project and other cutting-edge research at Trinity College Dublin's ADAPT Centre here: www.adaptcentre.ie/
ADAPT Radio is produced by DustPod.io for the ADAPT Centre. For more information about ADAPT's groundbreaking AI and data analytics research visit www.adaptcentre.ie/

KEYWORDS
#TrustedAI #AIaccountability #AIprivacy #AIgovernance #MeredithWhittaker
Technology
AI Accountability (Keynote)
ADAPT Radio
45 minutes 14 seconds
1 year ago
AI is developing at such a rapid pace that we can get caught up in its potential capabilities and role in our future. However, there are still a lot of issues to iron out. ADAPT recently hosted the Annual Scientific Conference 2024 in Dublin, and today we're hearing one of the keynote speakers, Abeba Birhane. We learn about the potential dangers of large-scale datasets, such as AI hallucinations and the reinforcement of societal biases and negative stereotypes. She has also been exploring strategies for both incremental improvements and guiding broader structural changes in AI. Our expert guest is Senior Advisor for AI Accountability at Mozilla, Adjunct Professor at Trinity College Dublin and a new ADAPT member.

THINGS WE SPOKE ABOUT
● How rumours of autonomous AI distract from real issues
● Hallucinations creating factually incorrect information
● AI ownership putting power in the hands of the few
● Data issues with collection, copyright and biases
● Creating standards for the safe use and development of AI

GUEST DETAILS
Abeba Birhane is a cognitive scientist, currently a Senior Advisor in AI Accountability at the Mozilla Foundation and an Adjunct Assistant Professor in the School of Computer Science and Statistics at Trinity College Dublin (working with Trinity's Complex Software Lab). She researches human behaviour, social systems, and responsible and ethical artificial intelligence, and was recently appointed to the UN's Advisory Body on AI. Abeba works at the intersection of complex adaptive systems, machine learning, algorithmic bias, and critical race studies. In her present work, Abeba examines the challenges and pitfalls of computational models and datasets from a conceptual, empirical, and critical perspective. Abeba Birhane holds a PhD in cognitive science from the School of Computer Science at UCD and Lero, the Irish Software Research Centre. Her interdisciplinary research focused on the dynamic and reciprocal relationship between ubiquitous technologies, personhood, and society. Specifically, she explored how ubiquitous technologies constitute and shape what it means to be a person through the lenses of embodied cognitive science, complexity science, and critical data studies. Her work with Vinay Prabhu uncovered that large-scale image datasets commonly used to develop AI systems, including ImageNet and 80 Million Tiny Images, carried racist and misogynistic labels and offensive images. She has been recognised by VentureBeat as a top innovator in computer vision.

MORE INFORMATION
ADAPT Radio is produced by DustPod.io for the ADAPT Centre. For more information about ADAPT visit www.adaptcentre.ie/

QUOTES
Generative AI has been around for quite some time, but the introduction of DALL-E back in April 2022 can be noted as one of the landmarks where generative AI really exploded into the public space. - Abeba Birhane
These hypothetical AI concerns, existential concerns, about the very idea of this attempt to build AGI have neither scientific nor engineering principles. Again, a lot of it is just hype, marketing, and PR, that is just really dominating the entire field. - Abeba Birhane
The results are really worrying: over 50% of the output from these models was inaccurate, 40% harmful and incomplete. - Abeba Birhane
There is no such thing as fully autonomous AI, we will always need people, humans, in the loop. - Abeba Birhane

KEYWORDS
#ai #data #audit #research #chatgpt #ethicalai