🚀 This week on the Tech ONTAP Podcast: We’re talking all things StorageGRID 12.0 — and this release is a game changer for AI and data-driven workloads.
We are joined by Vishnu Vardhan and Morgan Mears to unpack how StorageGRID is evolving into an AI-ready object storage powerhouse.
Here’s what’s new in StorageGRID 12.0 👇
💾 Bucket Branches – Think Git for data. Instantly create space-efficient dataset copies for AI/ML training, testing, and version control (see the sketch below).
⚡ Caching Load Balancers – Bring data closer to compute with up to 1.2 PB of local cache for high-performance AI pipelines.
📈 Scalability & Smarter Metadata – Handle hundreds of billions of objects with ease.
☁️ Flexible Deployment – Run StorageGRID your way: appliance, container, or virtualized.
Whether you’re building data lakes, training large language models, or managing global datasets, StorageGRID 12.0 helps you scale with performance, simplicity, and efficiency.
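For listeners who want a concrete mental model of the "Git for data" idea, here is a minimal sketch that approximates a dataset branch with plain S3 calls via boto3 against an S3-compatible endpoint. This is not StorageGRID's Bucket Branches API; the endpoint, bucket, and prefixes are hypothetical placeholders, and a real branch would presumably be space-efficient at the metadata layer rather than a full server-side copy.

```python
# Conceptual sketch only: approximating a "dataset branch" with plain S3 calls.
# The endpoint, bucket name, and prefixes below are hypothetical placeholders,
# not StorageGRID-specific API.
import boto3

s3 = boto3.client("s3", endpoint_url="https://storagegrid.example.com")  # hypothetical endpoint

SRC_BUCKET = "training-data"            # hypothetical source bucket
SRC_PREFIX = "datasets/v1/"             # dataset to branch
DST_PREFIX = "datasets/v1-experiment/"  # the "branch"

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=SRC_BUCKET, Prefix=SRC_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        new_key = DST_PREFIX + key[len(SRC_PREFIX):]
        # Server-side copy: no data flows through the client, but the grid still
        # stores a second full copy. A native branch avoids that duplication.
        s3.copy_object(
            Bucket=SRC_BUCKET,
            Key=new_key,
            CopySource={"Bucket": SRC_BUCKET, "Key": key},
        )
```

The appeal of a native branch is getting the isolation of the new prefix without paying for the data twice or waiting for the copy loop above.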
We’re diving deep into AI at scale with NetApp experts Bobby Oomen and David Arnette. From NVIDIA SuperPod (think AI factories powering massive LLM training) to FlexPod solutions that bring inference into everyday enterprise workloads, we unpack what’s happening at the cutting edge of AI infrastructure.
You’ll hear how NetApp and NVIDIA are collaborating to solve one of AI’s biggest challenges—data management—with tools like SnapMirror, FlexCache, and FlexClone. We also explore why inference is becoming just as important as training (if not more so), and what that shift means for enterprises looking to integrate AI into their operations.
Whether you’re curious about NVIDIA Cloud Partner (NCP) offerings, KV cache innovations, or how Cisco and NetApp are pushing FlexPod into the AI era, this episode is packed with insights you won’t want to miss.
Tune in to learn how enterprises can scale AI securely, efficiently, and flexibly—with NetApp at the core.
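Since the episode touches on KV cache innovations on the inference side, here is a minimal, framework-free sketch of what a KV cache does during autoregressive decoding. It illustrates the general technique only, not any NetApp or NVIDIA implementation; the dimensions and projection matrices are arbitrary placeholders.

```python
# Minimal sketch of KV caching during autoregressive decoding (single head).
# Illustrative only: all sizes and matrices are arbitrary placeholders.
import numpy as np

d_model = 64
rng = np.random.default_rng(0)

# Fixed (pretend) projection matrices for one attention head.
W_q = rng.standard_normal((d_model, d_model))
W_k = rng.standard_normal((d_model, d_model))
W_v = rng.standard_normal((d_model, d_model))

k_cache = []  # keys for every token seen so far
v_cache = []  # values for every token seen so far

def decode_step(x_t):
    """One decode step: project only the new token, reuse cached K/V for the rest."""
    q = x_t @ W_q
    k_cache.append(x_t @ W_k)   # only the new token's K and V are computed
    v_cache.append(x_t @ W_v)
    K = np.stack(k_cache)       # shape (t, d_model)
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d_model)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V          # attention output for the new token

# Each step costs O(t) attention work instead of recomputing K/V for the whole prefix.
for _ in range(5):
    token_embedding = rng.standard_normal(d_model)
    out = decode_step(token_embedding)
```

The cache is exactly the per-token state that inference infrastructure has to keep hot, which is why it shows up as a data-management problem and not just a compute one.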