The Centre for Secure Information Technologies (CSIT)
9 episodes
1 week ago
Developed by The Centre for Secure Information Technologies (CSIT), Trustworthy AI Chronicles is a brand-new podcast dedicated to exploring the cutting edge of AI security, ethics, and innovation.
Hosted by Dr. Ihsen Alouani, a leading expert in AI trustworthiness, this series brings together voices from industry, academia, and government to uncover the risks, share breakthroughs, and spotlight the people shaping the future of secure and responsible AI.
Episode 3 | Yanjing Li, Assistant Professor, University of Chicago
Trustworthy AI Chronicles
39 minutes 31 seconds
7 months ago
Join Yanjing Li, Assistant Professor at the University of Chicago, and Dr. Ihsen Alouani for the third episode of Trustworthy AI Chronicles. Yanjing Li is at the forefront of research in robust and resilient computing systems.