Home
Categories
EXPLORE
True Crime
Comedy
Business
Society & Culture
History
Sports
Health & Fitness
About Us
Contact Us
Copyright
© 2024 PodJoint
00:00 / 00:00
Sign in

or

Don't have an account?
Sign up
Forgot password
https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/00/52/b4/0052b42a-3c05-5512-476d-7794d9459b8c/mza_6609636775031998107.jpg/600x600bb.jpg
Mind the Machine
Florencio Cano Gabarda
10 episodes
5 days ago
Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.
Show more...
Technology
RSS
All content for Mind the Machine is the property of Florencio Cano Gabarda and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.
Show more...
Technology
Episodes (10/10)
Mind the Machine
LLM code generation security

Welcome everyone to this tenth episode of Mind the Machine, a podcast about AI security and safety. I’m Florencio Cano. Today we are going to talk about the security risks and security controls of LLM code generators.

Show more...
7 months ago
9 minutes 59 seconds

Mind the Machine
What are AI models made of? Can they contain malware?

Today I’ll talk about a technical topic related to the composition of LLMs. Are LLMs only data (weights) or do they contain code? If they contain code, can this code contain malware? And one additional question, if they have code, can they have vulnerabilities like heap overflows? In this episode I analyze what we exactly download when we download a model with Ollama or with the Hugging Face API.


References

  • https://ollama.com/
  • https://huggingface.co
  • https://docs.vllm.ai/en/latest/
  • https://github.com/kserve/kserve
  • https://huggingface.co/docs/transformers/en/index
  • https://ollama.com/library
  • https://huggingface.co/ibm-granite/granite-3.1-2b-instruct
  • https://huggingface.co/microsoft/Phi-3.5-mini-instruct/tree/main
  • https://www.usenix.org/conference/usenixsecurity23/presentation/christou
  • https://arxiv.org/abs/2307.05642





Show more...
7 months ago
42 minutes 47 seconds

Mind the Machine
AI security track at RootedCon 2025

Welcome everyone to this eight episode of Mind the Machine, a podcast about AI security and safety. I’m Florencio Cano. Today I’ll talk about my attendance to RootedCon 2025. RootedCon is the biggest cybersecurity congress in Spain and one of the biggest in Europe. This year it had a specific AI security track organized by Fernando Rubio and I had the pleasure to attend and be a speaker. Let’s talk about the presentations I was able to enjoy there.


References

https://rootedcon.com/

https://smolagents.org/

https://www.crewai.com/

https://www.twitch.tv/claudeplayspokemon


Show more...
8 months ago
12 minutes 57 seconds

Mind the Machine
AI Applied to Cybersecurity

In this episode we talk about the different ways companies are using AI, and specially LLMs, to improve their cybersecurity processes. We will talk about information gathering, protection, detection and response and what are known applications of AI in each of these areas.

During this episode I mention multiple references that I'm sharing here:

IntelEX: A LLM-driven Attack-level Threat Intelligence Extraction Framework https://arxiv.org/abs/2412.10872 

Comparison of Static Application Security Testing Tools and Large Language Models for Repo-level Vulnerability Detection https://arxiv.org/abs/2407.16235 

Leveling Up Fuzzing: Finding more vulnerabilities with AI https://security.googleblog.com/2024/11/leveling-up-fuzzing-finding-more.html 

RedFlag https://github.com/Addepar/RedFlag

LLMSecConfig: An LLM-Based Approach for Fixing Software Container Misconfigurations https://arxiv.org/abs/2502.02009

AI and LLM Models to Analyze and Identify Сybersecurity Incidents https://ceur-ws.org/Vol-3746/Short_6.pdf

GenDFIR: Advancing Cyber Incident Timeline Analysis Through Retrieval Augmented Generation and Large Language Models https://arxiv.org/abs/2409.02572

Show more...
8 months ago
9 minutes

Mind the Machine
How cybercriminals are leveraging AI

In this episode we talk about how cybercriminals are using AI to improve their operations. For example, for creating phising emails, fake voice and fake video. Also to create disinformation and fake news. We also discuss what we can do as a society to reduce these risks and what can we do in our organizations to protect ourselves against these threats.

Show more...
10 months ago
9 minutes 8 seconds

Mind the Machine
Agentic AI Security

In this episode of Mind the Machine, host Florencio Cano talks about the concept of agentic AI, exploring what makes AI systems capable of autonomously performing tasks and the unique security challenges they present.

While agentic AI can revolutionize industries, robust security measures are essential to manage the security risks.

Two of the risks mentioned in the podcast are the risk of AI agents that interact with the operating systems and those that generate code.

References mentioned in this episode:

Security Runners article about RCE on Anthropic's Computer Use: https://www.securityrunners.io/post/beyond-rce-autonomous-code-execution-in-agentic-ai

Anthropic's Computer Use: https://docs.anthropic.com/en/docs/build-with-claude/computer-use

Sandboxing Agentic AI Workflows with WebAssembly: https://developer.nvidia.com/blog/sandboxing-agentic-ai-workflows-with-webassembly

Episode about Prompt Injection https://open.spotify.com/episode/0ZH9Q2PQXojnpb8UI2jhuS?si=bfx-QIlnT8eDUrl2a_zM-w

Show more...
10 months ago
15 minutes 1 second

Mind the Machine
AI Pentesting

In this episode we talk about AI Pentesting. We talk about the difference with traditional cybersecurity pentesting. We also talk about benefits and drawbacks of manual and AI automatic pentesting. In the case of AI automatic pentesting, we mention some open source tools to perform it. These are some URLs related to topic mentioned in the episode:

Breaking Down Adversarial Machine Learning Attacks Through Red Team Challenges https://boschko.ca/adversarial-ml/

Dreadnode’s Crucible CTF platform https://crucible.dreadnode.io/

PyRIT https://github.com/Azure/PyRIT

Garak https://github.com/NVIDIA/garak

Project Moonshot https://github.com/aiverify-foundation/moonshot

Show more...
10 months ago
23 minutes 46 seconds

Mind the Machine
Top 10 Security Architecture Patterns for LLM applications

In this episode, we talk about ten very important security architecture patterns to protect LLM applications.

Open source guardrails software mentioned during the episode:

  • TrustyAI
  • Llama Guard
  • Nemo Guardrails

Open source model evaluation frameworks mentioned:

  • lm-evaluation-harness
  • Project Moonshot
  • Giskard
Show more...
11 months ago
19 minutes 51 seconds

Mind the Machine
Prompt injection

In today's podcast, we will talk about what is prompt injection. We will talk about techniques to exploit it and security controls to reduce the risk of it happening.

Show more...
11 months ago
19 minutes 17 seconds

Mind the Machine
Presentation

In this first episode of Mind the Machine I introduce the podcast and myself, Florencio Cano. The podcast will be about AI security and safety. We will talk about security for AI and also about AI for security. I hope you enjoy it!

Please, don't hesitate on contacting me directly by sending me an email to florencio.cano@gmail.com or by contacting me at LinkedIn or Mastodon.

Show more...
1 year ago
21 minutes 40 seconds

Mind the Machine
Join Florencio Cano Gabarda in Mind the Machine, where we dive into the critical intersection of AI security and safety. Explore how to protect AI systems from cyber threats, use AI to enhance IT security, and tackle the ethical challenges of AI safety—covering issues like ethics, bias, and trustworthiness. Tune in to navigate the complexities of building secure and safe AI.