Always Listening: Can I trust my AI Assistant?
SAIS Research Project
4 episodes
9 months ago
How do voice AI assistants work? What happens to my data? Are AI assistants secure? These are common questions and concerns among users of voice AI assistants. Whether you use Alexa, Siri, Google Assistant or another AI assistant, the technology underpinning these systems is similar, and it shares many of the same security and privacy protocols, as well as concerns. “Always Listening - Can I trust my AI Assistant?” is a podcast series produced to explore these questions and to build a better understanding of how voice AI assistants work. Created with the academic research project Secure AI aSsistants (SAIS), it offers insight into the latest developments in technology security and discusses the technology, the privacy issues and the future of the industry for a non-scientific audience. Highlights include an explanation of the AI assistant ecosystem, a look at Amazon Alexa as an example, and a review of the security measures in place as well as areas to be aware of. SAIS is a cross-disciplinary research project between King’s College London and Imperial College London, working with non-academic industry partners and policy experts, including Microsoft and Securys, whom you will hear from in this podcast. To find out more about SAIS, visit https://secure-ai-assistants.github.io/, tweet us @SecureAI_SAIS, or find us on LinkedIn: SAIS project.
Technology, Education, Science
Episode 4: Entering the Age of Datafication, Inferences and AI
21 minutes
2 years ago
In this episode of Always Listening we go beyond our discussion of how AI assistants work to understand how the current digital landscape is shaping us as a society. Our lives are becoming increasingly transparent because of the information collected about us, but the way that information is used remains opaque. In this interview with SAIS researcher Dr Mark Cote, we discuss AI assistants from a humanities perspective, from big data inferring what you want before you want it, to the business case for why personalisation is what users want. We also discuss the tension between the ethics of inferences and their market value.

Listen to learn about:
- Where data comes from and how inferences are made.
- How data accumulation is affecting every area of our lives.
- How the accumulation of data over time creates distinct new market opportunities.
- The disproportionate impact of inferences made about groups of people.
- How far data protection laws go in protecting us from inferences drawn from big data.

We would love to hear your thoughts and comments on this podcast episode! Tweet us @SecureAI_SAIS, connect with us on LinkedIn: SAIS project, or visit us at https://secure-ai-assistants.github.io