In this podcast episode, I’m talking with Vadym Kazulkin, AWS Serverless Hero and Principal Cloud Architect.
He’s been part of the AWS community for years, speaking at events, testing new features early, and helping the community grow.
We talked about:
If you're building in the cloud or just trying to make smart decisions in a fast-moving space, this episode has a lot for you.
Connect with Vadym via LinkedIn: https://www.linkedin.com/in/vadymkazulkin/
In this episode, I’m joined by Christina Stathopoulos, a former Googler who now works independently as a data & AI evangelist, trainer, and advisor.
We talked about her path from data engineering into data science, and eventually into teaching and strategic consulting. We also covered:
and much more!
All in all: A thoughtful, honest conversation full of insights on AI, data culture, and the human side of technology.
Get in touch with Christina via LinkedIn:
In this episode, I’m talking with Tom Schamberger from the German consultancy msg. He leads their cloud data platform team and has a super interesting background: he started coding Java at 12, co-founded startups, and now helps big companies design scalable data platforms.
We talk about:
If you're curious about consulting, data platforms, or just want to hear what a data engineer's job looks like behind the scenes, dive right into it!
In this episode, I sit down with Mehdi Ouazza - data tinkerer, indie hacker, and content creator - who's always up to something interesting in the world of data and AI.
We started with DuckDB but quickly veered off into much more exciting territory: side projects, voice-to-SQL with actual quacks, the power of local models, and why WebGPU might be one of the most underrated browser technologies today.
We also talked about how we teach and learn data engineering in 2025: the importance of fun, interactivity, and why we both dream of creating a data engineering game that’s part "Among Us" and part serious skills training.
Mehdi shares what tools he's using, where he sees GenAI actually helping—not replacing—engineers, and how he's building courses and meetups that inspire creativity in technical work.
Perfect for data folks who like to experiment, educators looking for inspiration, or anyone wondering how far a fun idea can go with the right mix of curiosity and tooling.
In this podcast episode, I’m joined by Simon Späti, long-time BI and data engineering expert turned full-time technical writer and author of the living book Data Engineering Design Patterns.
We talk about:
Simon also shares practical advice for building your own public knowledge base, and why Markdown and simplicity still win in the long run.
Whether you're into tools, systems, or lifelong learning, this one’s a thoughtful deep dive.
***
About Simon Späti:
Simon is a Data Engineer and Technical Author with 20+ years of experience in the data field. He's the author of the Data Engineering Blog (ssp.sh), curator of the Data Engineering Vault (vault.ssp.sh), and currently writing a book about Data Engineering Design Patterns (dedp.online). Simon keeps up with open-source data engineering technologies and enjoys sharing his knowledge with the community.
In this episode, I’m joined by Johannes Koch, Principal Engineer and AWS DevTools Hero, to talk about the real DevOps mindset, the evolution of developer experience, and how community work changed his career.
Johannes shares how starting in QA and support gave him a unique edge in understanding users, why building a proper CI/CD pipeline should come before writing code, and how the AWS Community Builders program helped him grow into his current role.
We also dive into:
Whether you're starting out in DevOps or deep into cloud architecture, Johannes' insights are packed with value for data and AI professionals.
Links:
In this episode, I sit down with Deepak Goyal, the founder of AzureLib, to talk all things data engineering, cloud platforms, and how to teach the next generation of engineers.
We explore why Azure became his go-to, how real-world projects beat theory every time, and why tools like ChatGPT are great assistants, but no substitute for structured learning and solid fundamentals.
Follow Deepak on LinkedIn: https://www.linkedin.com/in/deepak-goyal-93805a17/
Check out AzureLib: azurelib.com
Learn Data Engineering with me: learndataengineering.com
In this episode of the Plumbers of Data Science podcast, I dive into the challenges of recruiting today, from overwhelming job application volumes to reaching out directly to recruiters.
I’m testing new strategies to make the process smoother for everyone involved, focusing on fresh job listings and fostering connections with hiring managers who need skilled engineers. My goal? To secure five job placements in Germany by year’s end!
Have thoughts on today’s job market, or tried the Easy Apply feature yourself? Drop a comment below—I’d love to hear your experience!
In this Hero Talk episode, I had the pleasure of chatting with Ben Rogojan, better known as the "Seattle Data Guy." Ben is a data engineer, YouTuber, and freelancer with a background at Facebook. He's become a go-to expert on freelancing for engineers, particularly in the data space.
We dive into Ben's journey from being a full-time engineer to making the switch to freelancing, how he built his own business, and the unique challenges freelancers face in this space.
We also explore how to break into freelancing, the value of specializing in a specific skill, and practical tips on landing your first freelance clients.
In this episode of the Plumbers of Data Science podcast, I'm sharing some exciting updates about the future of Learn Data Engineering and a big new service we’re launching—recruiting!
I explain how this new offering will help engineers find their next career move while connecting companies with top talent. Tune in to hear more about how the Academy, Coaching, and now recruiting fit together into one ecosystem designed to support your career growth.
Let me know your thoughts in the comments—are you excited about this new direction?
In this episode of the Plumbers of Data Science podcast, I’m sharing my thoughts on why data modeling isn’t as complicated as people make it out to be. You hear about courses and tutorials that stretch for hours—but is it really that hard?
I’ll break down the two main things you need to focus on when modeling data and explain why, once you’ve got those down, the rest falls into place.
In this Hero Talk episode, I talk with Mezue, a seasoned Data Engineer with expertise in Azure Databricks Data Engineering. We cover his journey from Electrical Engineering to Data Engineering and discuss the key skills, like Python, SQL, and Spark, that are essential in the field.

Mezue also shares his experience running an Azure Databricks bootcamp and offers advice on how to break into Data Engineering, especially in Cloud environments. We also touch on the challenges of finding junior roles and how to stand out by working on practical projects.
In this Hero Talk episode, I chat with Susan Walsh, the “Classification Guru,” known for her expertise in cleaning and classifying messy data.
We dive into her unexpected journey into the data world, starting with a spend analytics job, and how that led to her founding her own business focused on dirty data. Susan shares the unique challenges businesses face with poor data quality, explaining why 99.9% of data problems are actually people problems.
We also explore practical ways to deal with these issues, such as finding those "crappy" data cleaning jobs to gain experience, and the importance of consistent data maintenance to prevent future headaches. From addressing dirty CRM systems to battling fraud, Susan’s stories highlight how critical clean data is for business success.
In this Hero Talk episode, I sit down with Paolo Lulli, an experienced Data Engineer, to explore some of the core challenges and decisions in API development and data management. We dive deep into the debate between serverless infrastructure versus traditional servers, discussing the pros and cons of both approaches, particularly in the context of scalability, cost, and maintenance.
Paolo also shares his hands-on experience with time series databases, explaining their advantages in handling massive amounts of data from IoT devices. We delve into vendor lock-in issues, highlighting how relying too heavily on cloud providers like AWS or Azure can impact long-term flexibility.
In this episode of the Plumbers of Data Science podcast, I’m diving into why testing can be so challenging for data engineers. The inspiration for this topic actually came from one of my recent Coaching sessions, where the question of test-driven development (TDD) came up during a Q&A. It stuck with me, so I thought it would be a great topic to dive deeper into.
I’ll explain the key benefits of TDD, like improved code quality and easier refactoring, and why, despite its advantages, it’s not always widely adopted—especially in fast-paced environments where time constraints dominate. We’ll also talk about the specific challenges data engineers face with TDD, such as handling large, unpredictable data, integrating with external systems, and adapting to ever-changing data.
In this Hero Talk episode, we dive deep into the fascinating world of synthetic data, a critical tool for development, testing, and training Machine Learning models. Joining me is Mario Scriminaci, Chief Product Officer at Mostly AI, who shares his expertise on how synthetic data can revolutionize the way we handle sensitive information, particularly in the context of privacy regulations like GDPR and CCPA.
We discuss the real-world applications of synthetic data, how it differs from traditional mock data, and its potential to drive innovation in AI and ML development. Mario also introduces Mostly AI's cutting-edge tools, highlighting how they make it easier than ever to generate realistic, privacy-safe datasets.
In this episode of the Plumbers of Data Science podcast, I’m diving into the debate between bootcamps and coaching programs, especially for those looking to advance in Data Engineering.
I’ll break down the pros and cons of each approach, from the structured, intensive nature of bootcamps to the personalized, flexible support of coaching, and share insights to help you choose the right path for your career. I’ll also discuss the experiences of my current coaching students and what I’m focusing on to help them achieve their goals.
In this episode of the Plumbers of Data Science podcast, I’m diving into what truly matters when building data platforms and pipelines.
As engineers, it’s easy to get caught up in the latest tools, but real success starts with understanding your data sources and defining clear goals. I’ll walk you through the key questions to ask, from data retention to processing speeds and user needs.
In this episode, we explore the essentials of learning and mastering Apache Spark. Joining me is Philip, an experienced Spark developer and educator, who shares his expert roadmap for becoming proficient in Spark. We discuss why Spark is a crucial tool for data engineers, how to set it up effectively, and the best approaches to start your Spark journey.
Philip also highlights the importance of understanding Spark's internals, deploying real-world applications, and optimizing performance. He walks us through his six-part roadmap, focusing on hands-on practice and building confidence through real-world projects. We also touch on key topics like the Scala vs. Python debate, Spark's role in machine learning, and how it compares to emerging tools like Beam.
In this Hero Talk episode, we explore the crucial topic of data observability, a field that has become essential for Data Engineers dealing with complex data pipelines. I am joined by my special guest Ryan Yackel from DataBand, who shares his insights and expertise on the subject.
Ryan delves into the concept of data observability and its significance for Data Engineers, addressing common challenges faced in monitoring and maintaining data pipelines. He explains how DataBand helps in monitoring and improving data reliability, ensuring that data flows smoothly from source to destination.