How do voice AI assistants work? What happens to my data? Are AI assistants secure? These are common questions and concerns for users of voice AI assistants.
Whether you use Alexa, Siri, Google Assistant or another AI assistant, the technology underpinning these systems is similar, and they share many of the same security and privacy protocols, as well as many of the same concerns.
“Always Listening - Can I trust my AI Assistant?” is a podcast series produced to explore these questions and to build a better understanding of how voice AI assistants work. Created with the academic research project Secure AI aSsistants (SAIS), it offers an insight into the latest developments in technology security and discusses the technology, the privacy issues and the future of the industry for a non-scientific audience.
Highlights include an explanation of the AI assistant ecosystem, a look at Amazon Alexa as an example, and a discussion of the security measures in place as well as areas to be aware of.
SAIS is a cross-disciplinary research project between King’s College London and Imperial College London, working with non-academic industry partners and policy experts, including Microsoft and Securys, whom you will hear from in this podcast.
If you would like to find out more about SAIS, visit https://secure-ai-assistants.github.io/
Tweet us @SecureAI_SAIS
Find us on LinkedIn: SAIS project
The relationships we build with our voice AI assistants have been observed with great interest by the scientific community. These little machines have names; we talk to them like humans and trust them to carry out tasks for us. At the same time, there are inherent limits to what they can do, and we may decide not to trust them in certain situations: there are times, for example, when they have shared false information.
In this episode we discuss how accidental misinformation and purposeful disinformation can be spread via an AI assistant. We look at the psychology behind the relationships we build with our assistants, and find out about the SAIS exhibition that examines this through soundscape and installation.
Subjects in this episode:
How false information can be spread by AI assistants
The difference between misinformation and disinformation
The psychology behind our relationships with voice AI Assistants
The future of AI assistants according to industry experts, including Microsoft and Securys
The one-of-a-kind exhibition about our relationship with AI, by the SAIS project and Cellule Studio
AI: Who’s looking after me? is a free exhibition and events series at Science Gallery London that explores artificial intelligence and its impact on our lives. The SAIS team have worked with Cellule Studio to create an evolving soundscape inviting us to question our relationship with AI assistants, how and where we use our voices, and the value we place on them. The exhibition opens in June 2023; you can find out more on the SAIS website.
Contact the show producers, Helix Data Innovation, at sais-comms@kcl.ac.uk