Disseminate: The Computer Science Research Podcast
Jack Waudby
87 episodes
3 days ago

This podcast features interviews with Computer Science researchers. Hosted by Dr. Jack Waudby, each episode highlights the problem(s) the guest tackled, the solutions they developed, and how their findings can be applied in practice. Aimed at industry practitioners, researchers, and students, the podcast seeks to further narrow the gap between research and practice and to make awesome Computer Science research more accessible. We have two types of episode: (i) Cutting Edge (red/blue logo), where we talk to researchers about their latest work, and (ii) High Impact (gold/silver logo), where we talk to researchers about their influential work.


You can support the show through Buy Me a Coffee. A donation of $3 will help us keep making awesome Computer Science research podcasts for you.


Hosted on Acast. See acast.com/privacy for more information.

Categories: Education, Technology, News, Tech News

Episodes (showing 20 of 87)
Parachute: Rethinking Query Execution and Bidirectional Information Flow in DuckDB - with Mihail Stoian

In this episode of the DuckDB in Research series, host Jack Waudby sits down with Mihail Stoian, PhD student at the Data Systems Lab, University of Technology Nuremberg, to unpack the cutting-edge ideas behind Parachute, a new approach to robust query processing and bidirectional information passing in modern analytical databases.


We explore how Parachute bridges theory and practice, combining concepts from instance-optimal algorithms and semi-join filtering to boost performance in DuckDB, the in-process analytical SQL engine that’s reshaping how research meets real-world data systems.
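For readers unfamiliar with the underlying technique, here is a minimal, illustrative sketch of the semi-join filtering idea the episode builds on. It is not Parachute itself, and the table names are made up; it just shows, via DuckDB's Python API, how a selective predicate on one join input can prune the other input before the join runs.

```python
import duckdb

# Illustrative sketch only, not Parachute itself: the classic semi-join
# filtering idea that Parachute extends to two-way information flow.
# A selective predicate on `customers` prunes `orders` before the join.
con = duckdb.connect()
con.execute("CREATE TABLE customers(cust_id INT, country TEXT)")
con.execute("CREATE TABLE orders(o_id INT, cust_id INT, total DOUBLE)")
con.execute("INSERT INTO customers VALUES (1, 'NL'), (2, 'DE')")
con.execute("INSERT INTO orders VALUES (10, 1, 5.0), (11, 2, 7.5)")

rows = con.execute("""
    SELECT o.o_id, o.total
    FROM orders o
    WHERE o.cust_id IN (               -- semi-join filter on the probe side
        SELECT cust_id FROM customers WHERE country = 'NL'
    )
""").fetchall()
print(rows)  # [(10, 5.0)]
```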


Mihail discusses:

  • How Parachute extends semi-join filtering for two-way information flow
  • The challenges of implementing research ideas inside DuckDB
  • Practical performance gains on TPC-H and CEB workloads
  • The future of adaptive query processing and research-driven system design


Whether you're a database researcher, systems engineer, or curious practitioner, this deep-dive reveals how academic innovation continues to shape modern data infrastructure.


Links:

  • Parachute: Single-Pass Bi-Directional Information Passing VLDB 2025 Paper
  • Mihail's homepage
  • Parachute's GitHub repo

Hosted on Acast. See acast.com/privacy for more information.

3 days ago
36 minutes 34 seconds

Anarchy in the Database: Abigale Kim on DuckDB and DBMS Extensibility

In this episode of the DuckDB in Research series, host Jack Waudby talks with Abigale Kim, PhD student at the University of Wisconsin–Madison and author of the VLDB 2025 paper “Anarchy in the Database: A Survey and Evaluation of DBMS Extensibility”. They explore how database extensibility is reshaping modern data systems — and why DuckDB is emerging as the gold standard for safe, flexible, and high-performance extensions. Abigale shares the inside story of her research, the surprises uncovered when testing Postgres and DuckDB extensions, and what’s next for extensibility and composable database design.


This episode is perfect for researchers, practitioners, and students interested in databases, systems design, and the interplay between academia and industry innovation.


Highlights:

  • What “extensibility” really means in a DBMS
  • How DuckDB compares to Postgres, MySQL, and Redis
  • The rise of GPU-accelerated DuckDB extensions
  • Why bridging research and engineering matters for the future of databases


Links:

  • Anarchy in the Database: A Survey and Evaluation of Database Management System Extensibility VLDB 2025
  • Rethinking Analytical Processing in the GPU Era


You can find Abigale at:

  • X
  • Bluesky
  • Personal site

Hosted on Acast. See acast.com/privacy for more information.

1 week ago
46 minutes 24 seconds

Recursive CTEs, Trampolines, and Teaching Databases with DuckDB - with Prof. Torsten Grust

In this episode of the DuckDB in Research series, host Dr Jack Waudby talks with Professor Torsten Grust from the University of Tübingen. Torsten is one of the pioneers behind DuckDB’s implementation of recursive CTEs.


In the episode they unpack:

  • The power of recursive CTEs and how they turn SQL into a full-fledged programming language.
  • The story behind adding recursion to DuckDB, including the USING KEY feature and the trampoline and TTL extensions emerging from Torsten’s lab.
  • How these ideas are transforming research, teaching, and even DuckDB’s internal architecture.
  • Why DuckDB makes databases exciting again — from classroom to cutting-edge systems research.

If you’re into data systems, query processing, or bridging research and practice, this episode is for you.
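As a small taste of the topic, here is a plain WITH RECURSIVE query run through DuckDB's Python API. It is the standard recursive-CTE form rather than the USING KEY variant discussed in the episode; see the linked posts for that syntax.

```python
import duckdb

# A plain WITH RECURSIVE query in DuckDB: the "loop" that makes SQL a
# full-fledged programming language. (The USING KEY variant discussed in
# the episode builds on this; see the linked posts for its syntax.)
rows = duckdb.sql("""
    WITH RECURSIVE fib(n, a, b) AS (
        SELECT 1, 0::BIGINT, 1::BIGINT
        UNION ALL
        SELECT n + 1, b, a + b FROM fib WHERE n < 10
    )
    SELECT n, a AS fib_n FROM fib ORDER BY n
""").fetchall()
print(rows)  # [(1, 0), (2, 1), (3, 1), ..., (10, 34)]
```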


Links:

  • USING KEY in Recursive CTEs
  • How DuckDB is USING KEY to Unlock Recursive Query Performance
  • Trampoline-Style Queries for SQL
  • U Tübingen Advent of code
  • A Fix for the Fixation on Fixpoints
  • One WITH RECURSIVE is Worth Many GOTOs
  • Torsten's homepage
  • Torsten's X

Hosted on Acast. See acast.com/privacy for more information.

2 weeks ago
51 minutes 5 seconds

DuckDB in Research S2 Coming Soon!

Hey folks! The DuckDB in Research series is back for S2!


In this season we chat with:

  • Torsten Grust: Recursive CTEs
  • Abigale Kim: Anarchy in the Database
  • Mihail Stoian: Parachute: Single-Pass Bi-Directional Information Passing
  • Paul Gross: Adaptive Factorization Using Linear-Chained Hash Tables


Whether you're a researcher, engineer, or just curious about the intersection of databases and innovation, we are sure you will love this series.



Hosted on Acast. See acast.com/privacy for more information.

2 weeks ago
2 minutes 6 seconds

Rohan Padhye & Ao Li | Fray: An Efficient General-Purpose Concurrency JVM Testing Platform | #66

In this episode of Disseminate: The Computer Science Research Podcast, guest host Bogdan Stoica sits down with Ao Li and Rohan Padhye (Carnegie Mellon University) to discuss their OOPSLA 2025 paper: "Fray: An Efficient General-Purpose Concurrency Testing Platform for the JVM".


We dive into:

  • Why concurrency bugs remain so hard to catch -- even in "well-tested" Java projects.
  • The design of Fray, a new concurrency testing platform that outperforms prior tools like JPF and rr.
  • Real-world bugs discovered in Apache Kafka, Lucene, and Google Guava.
  • The gap between academic research and industrial practice, and how Fray bridges it.
  • What’s next for concurrency testing: debugging tools, distributed systems, and beyond.


If you’re a Java developer, systems researcher, or just curious about how to make software more reliable, this conversation is packed with insights on the future of software testing.
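Fray itself targets the JVM, so the following is only a language-agnostic, hypothetical illustration in Python of why such bugs are hard to catch: a lost-update race whose outcome depends entirely on how the scheduler interleaves the threads.

```python
import threading

# Hypothetical illustration (unrelated to Fray's JVM tooling): a lost-update
# race. The read-modify-write below is not atomic, so whether updates are
# lost depends on the thread interleaving chosen by the scheduler.
counter = 0

def work(iterations):
    global counter
    for _ in range(iterations):
        current = counter       # read
        counter = current + 1   # write -- another thread may have written in between

threads = [threading.Thread(target=work, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # rarely 800000, and the exact value changes from run to run
```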


Links & Resources:

  • The Fray paper (OOPSLA 2025)
  • Fray on GitHub
  • Ao Li’s research
  • Rohan Padhye’s research


Don’t forget to like, subscribe, and hit the 🔔 to stay updated on the latest episodes about cutting-edge computer science research.


#Java #Concurrency #SoftwareTesting #Fray #OOPSLA2025 #Programming #Debugging #JVM #ComputerScience #ResearchPodcast


Hosted on Acast. See acast.com/privacy for more information.

3 weeks ago
58 minutes 45 seconds

Shrey Tiwari | It's About Time: A Study of Date and Time Bugs in Python Software | #65

In this episode, Bogdan Stoica, Postdoctoral Research Associate in the SysNet group at the University of Illinois Urbana-Champaign (UIUC), steps in to guest host. Bogdan sits down with Shrey Tiwari, a PhD student in the Software and Societal Systems Department at Carnegie Mellon University and a member of the PASTA Lab, advised by Prof. Rohan Padhye. Together, they dive into Shrey’s award-winning research on date and time bugs in open-source Python software, exploring why these issues are so deceptively tricky and how they continue to affect systems we rely on every day.


The conversation traces Shrey’s journey from industry to research, including formative experiences at Citrix and Microsoft Research, and how those shaped his passion for software reliability. Shrey and Bogdan discuss the surprising complexity of date and time handling, the methodology behind Shrey’s empirical study, and the practical lessons developers can take away to build more robust systems. Along the way, they highlight broader questions about testing, bug detection, and the future role of AI in ensuring software correctness. This episode is a must-listen for anyone interested in debugging, reliability, and the hidden challenges that underpin modern software.
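To give a flavour of how deceptive this class of bug can be, here is a small hypothetical example (not one taken from the study): timedelta arithmetic on zone-aware datetimes is wall-clock arithmetic, so a daylight-saving transition silently eats an hour.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

# Hypothetical example, not taken from the paper: "add 24 hours" across a
# US daylight-saving transition. timedelta arithmetic on aware datetimes is
# wall-clock arithmetic, so only 23 real hours elapse.
tz = ZoneInfo("America/New_York")
before = datetime(2024, 3, 9, 12, 0, tzinfo=tz)   # noon, the day before DST starts
after = before + timedelta(hours=24)              # intended: 24 elapsed hours

print(after)           # 2024-03-10 12:00:00-04:00  (same wall-clock time)
print(after - before)  # 23:00:00 -- one hour of real time has gone missing
```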


Links:

  • It’s About Time: An Empirical Study of Date and Time Bugs in Open-Source Python Software 🏆 ACM SIGSOFT Distinguished Paper Award
  • Shrey's homepage

Hosted on Acast. See acast.com/privacy for more information.

1 month ago
1 hour 5 minutes 29 seconds

Lessons Learned from Five Years of Artifact Evaluations at EuroSys | #64

In this episode we are joined by Thaleia Doudali, Miguel Matos, and Anjo Vahldiek-Oberwagner to delve into five years of experience managing artifact evaluation at the EuroSys conference. They explain the goals and mechanics of artifact evaluation, a voluntary process that encourages reproducibility and reusability in computer systems research by assessing the supporting code, data, and documentation of accepted papers. The conversation outlines the three-tiered badge system, the multi-phase review process, and the importance of open-source practices. The guests present data showing increasing participation, sustained artifact availability, and varying levels of community engagement, underscoring the growing relevance of artifacts in validating and extending research.


The discussion also highlights recurring challenges such as tight timelines between paper acceptance and camera-ready deadlines, disparities in expectations between main program and artifact committees, difficulties with specialized hardware requirements, and lack of institutional continuity among evaluators. To address these, the guests propose early artifact preparation, stronger integration across committees, formalization of evaluation guidelines, and possibly making artifact submission mandatory. They advocate for broader standardization across CS subfields and suggest introducing a “Test of Time” award for artifacts. Looking to the future, they envision a more scalable, consistent, and impactful artifact evaluation process—but caution that continued growth in paper volume will demand innovation to maintain quality and reviewer sustainability.


Links:

  • Lessons Learned from Five Years of Artifact Evaluations at EuroSys [DOI]
  • Thaleia's Homepage
  • Anjo's Homepage
  • Miguel's Homepage

Hosted on Acast. See acast.com/privacy for more information.

3 months ago
43 minutes 48 seconds

Dominik Winterer | Validating SMT Solvers for Correctness and Performance via Grammar-based Enumeration | #63

In this episode of the Disseminate podcast, Dominik Winterer discusses his research on SMT (Satisfiability Modulo Theories) solvers and his recent OOPSLA paper titled "Validating SMT Solvers for Correctness and Performance via Grammar-Based Enumeration". Dominik shares his academic journey from the University of Freiburg to ETH Zurich, and now to a lectureship at the University of Manchester. He introduces ET, a tool he developed for exhaustive grammar-based testing of SMT solvers. Unlike traditional fuzzers that use random input generation, ET systematically enumerates small, syntactically valid inputs using context-free grammars to expose bugs more effectively. This approach simplifies bug triage and has revealed over 100 bugs—many of them soundness and performance-related—with a striking number having already been fixed. Dominik emphasizes the tool’s surprising ability to identify deep bugs using minimal input and track solver evolution over time, highlighting ET's potential for integration into CI pipelines.
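To make the core idea concrete, here is a tiny, self-contained toy sketch of grammar-based enumeration (an illustration, not the ET tool): every sentence derivable from a small context-free grammar up to a bounded depth is generated, rather than sampled at random as a fuzzer would.

```python
from itertools import product

# Toy sketch of grammar-based enumeration (not the ET tool itself): generate
# every sentence derivable from a tiny context-free grammar up to a bounded
# derivation depth, instead of sampling inputs at random.
GRAMMAR = {
    "expr": [["term"], ["(", "expr", "op", "expr", ")"]],
    "term": [["x"], ["0"], ["1"]],
    "op":   [["+"], ["*"]],
}

def enumerate_sentences(symbol, depth):
    if symbol not in GRAMMAR:          # terminal symbol: emit it as-is
        return [symbol]
    if depth == 0:                     # depth budget exhausted
        return []
    sentences = []
    for production in GRAMMAR[symbol]:
        expansions = [enumerate_sentences(s, depth - 1) for s in production]
        if all(expansions):            # every part of the production is derivable
            sentences += ["".join(parts) for parts in product(*expansions)]
    return sentences

small_inputs = enumerate_sentences("expr", 3)
print(len(small_inputs), small_inputs[:5])  # 21 sentences: 'x', '0', '1', '(x+x)', ...
```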


The conversation then expands into broader reflections on formal methods and the future of software reliability. Dominik advocates for a new discipline—Formal Methods Engineering—to bridge the gap between software engineering and formal verification tools. He stresses the importance of building trustworthy verification tools since the reliability of software increasingly depends on them. Dominik also discusses adapting ET to other domains, such as JavaScript engines, and suggests that grammar-based enumeration can be applied widely to any system with a context-free grammar. Addressing the rise of AI, he envisions validation portfolios that integrate formal methods into LLM-based tooling, offering certified assessments of model outputs. He closes with a call for the community to embrace pragmatic, systematic, and scalable approaches to formal methods to ensure these tools can live up to their promises in real-world development settings.


Links:

  • Dominik's Homepage
  • Validating SMT Solvers for Correctness and Performance via Grammar-Based Enumeration

Hosted on Acast. See acast.com/privacy for more information.

3 months ago
43 minutes 38 seconds

Haralampos Gavriilidis | Fast and Scalable Data Transfer across Data Systems | #62

In this episode of Disseminate, we welcome Harry Gavriilidis back to the podcast to explore his latest research on fast and scalable data transfer across systems, soon to be presented at SIGMOD 2025. Building on his work with XDB, Harry introduces XDBC, a novel data transfer framework designed to balance performance and generalizability. They dive into the challenges of moving data across heterogeneous environments—ranging from cloud systems to IoT devices—and critique the limitations of current generic methods like JDBC and specialized point-to-point connectors.


Harry walks us through the architecture of XDBC, which modularizes the data transfer pipeline into configurable stages like reading, serialization, compression, and networking. The episode highlights how this architecture adapts to varying performance constraints and introduces a cost-based optimizer to automate tuning for different environments. We also touch on future directions, including dynamic reconfiguration, fault tolerance, and learning-based optimizations. If you're interested in systems, performance engineering, or database interoperability, this episode is a must-listen.
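The details are in the SIGMOD paper, but the architectural idea is easy to sketch: a transfer pipeline decomposed into small, swappable stages. The sketch below is hypothetical (it is not XDBC's API); it only illustrates the shape of such a pipeline and why an optimizer might enable or skip individual stages.

```python
import json
import zlib

# Hypothetical sketch, not XDBC's API: a data-transfer pipeline broken into
# swappable stages (read -> serialize -> compress -> ship). An optimizer
# like XDBC's would decide which stages to enable and how to tune them.
def read_stage(source_rows):
    yield from source_rows                      # e.g. scan the source system

def serialize_stage(rows):
    for row in rows:
        yield json.dumps(row).encode()          # row -> bytes

def compress_stage(chunks):
    for chunk in chunks:
        yield zlib.compress(chunk)              # trade CPU for network volume

def ship_stage(chunks):
    return sum(len(c) for c in chunks)          # stand-in for a socket write

rows = [{"id": i, "value": i * 2} for i in range(1_000)]
bytes_shipped = ship_stage(compress_stage(serialize_stage(read_stage(rows))))
print("bytes shipped:", bytes_shipped)
```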


Hosted on Acast. See acast.com/privacy for more information.

4 months ago
56 minutes 46 seconds

Haralampos Gavriilidis | SheetReader: Efficient spreadsheet parsing

In this episode of the DuckDB in Research series, Harry Gavriilidis (PhD student at TU Berlin) joins us to discuss SheetReader — a high-performance spreadsheet parser that dramatically outpaces traditional tools in both speed and memory efficiency. By taking advantage of the standardized structure of spreadsheet files and bypassing generic XML parsers, SheetReader delivers fast and lightweight parsing, even on large files. Now available as a DuckDB extension, it enables users to query spreadsheets directly with SQL and integrate them seamlessly into broader analytical workflows.


Harry shares insights into the development process, performance benchmarks, and the surprisingly complex world of spreadsheet parsing. He also discusses community feedback, feature requests (like detecting multiple tables or parsing colored rows), and future plans — including tighter integration with DuckDB and support for Arrow. The conversation wraps up with a look at Harry’s broader research on composable database systems and data interoperability, highlighting how tools like DuckDB are reshaping modern data analysis.
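If you want to try it, SheetReader is distributed as a DuckDB community extension. The sketch below assumes the extension is published under the name sheetreader and exposes a table function of the same name, and the file name is made up; treat it as a sketch and check the extension's README for the exact call and options.

```python
import duckdb

# Assumed usage of the SheetReader community extension: the extension name
# and the sheetreader(...) table function are assumptions taken from its
# documentation -- consult the README for the exact, current interface.
con = duckdb.connect()
con.execute("INSTALL sheetreader FROM community")
con.execute("LOAD sheetreader")

df = con.execute("""
    SELECT *
    FROM sheetreader('sales.xlsx')   -- hypothetical spreadsheet file
    LIMIT 10
""").df()
print(df)
```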


Hosted on Acast. See acast.com/privacy for more information.

6 months ago
40 minutes 53 seconds

Arjen P. de Vries | faiss: An extension for vector data & search

In this episode of the DuckDB in Research series, we’re joined by Arjen de Vries, Professor of Data Science at Radboud University. Arjen dives into his team’s development of a DuckDB extension for FAISS, a library originally developed at Facebook for efficient similarity search and vector operations.
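As background for listeners who have not used FAISS before, here is what plain FAISS similarity search looks like from Python. This uses the faiss library directly with random data; it is not the DuckDB extension discussed in the episode.

```python
import numpy as np
import faiss  # pip install faiss-cpu

# Plain FAISS usage as background, not the DuckDB extension itself:
# exact (brute-force) L2 nearest-neighbour search over random embeddings.
dim = 64
database = np.random.random((10_000, dim)).astype("float32")
queries = np.random.random((5, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)                    # exact L2 index
index.add(database)                               # index the stored vectors
distances, neighbours = index.search(queries, 4)  # 4 nearest per query
print(neighbours)
```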


We explore the growing importance of embeddings and dense retrieval in modern information retrieval systems, and how DuckDB’s zero-copy architecture and tight integration with the Python ecosystem make it a compelling choice for managing large-scale vector data. Arjen shares insights into the technical challenges and architectural decisions behind the extension, comparisons with DuckDB’s native VSS (vector search) solution, and the broader vision of integrating vector search more deeply into relational databases.


Along the way, we also touch on DuckDB's extension ecosystem, its potential for future research, and why tools like this are reshaping how we build and query modern AI-enabled systems.


Hosted on Acast. See acast.com/privacy for more information.

6 months ago
46 minutes 14 seconds

David Justen | POLAR: Adaptive and non-invasive join order selection via plans of least resistance

In this episode, we sit down with David Justen to discuss his work on POLAR: Adaptive and Non-invasive Join Order Selection via Plans of Least Resistance, which was implemented in DuckDB. David shares his journey in the database space, insights into performance optimization, and the challenges of working with modern analytical workloads. We dive into the intricacies of query compilation, vectorized execution, and how DuckDB is shaping the future of in-memory databases. Tune in for a deep dive into database internals, industry trends, and what’s next for high-performance data processing!


Links:

  • VLDB 2024 Paper
  • David's Homepage

Hosted on Acast. See acast.com/privacy for more information.

7 months ago
51 minutes 8 seconds

Daniël ten Wolde | DuckPGQ: A graph extension supporting SQL/PGQ

In this episode, we sit down with Daniël ten Wolde, a PhD researcher at CWI’s Database Architectures Group, to explore DuckPGQ—an extension to DuckDB that brings powerful graph querying capabilities to relational databases. Daniel shares his journey into database research, the motivations behind DuckPGQ, and how it simplifies working with graph data. We also dive into the technical challenges of implementing SQL Property Graph Queries (SQL PGQ) in DuckDB, discuss performance benchmarks, and explore the future of DuckPGQ in graph analytics and machine learning. Tune in to learn how this cutting-edge extension is bridging the gap between research and industry!


Links:

  • DuckPGQ homepage
  • Community extension
  • Daniel's homepage

Hosted on Acast. See acast.com/privacy for more information.

7 months ago
48 minutes 38 seconds

Till Döhmen | DuckDQ: A Python library for data quality checks in ML pipelines

In this episode we kick off our DuckDB in Research series with Till Döhmen, a software engineer at MotherDuck, where he leads AI efforts. Till shares insights into DuckDQ, a Python library designed for efficient data quality validation in machine learning pipelines, leveraging DuckDB’s high-performance querying capabilities.


We discuss the challenges of ensuring data integrity in ML workflows, the inefficiencies of existing solutions, and how DuckDQ provides a lightweight, drop-in replacement that seamlessly integrates with scikit-learn. Till also reflects on his research journey, the impact of DuckDB’s optimizations, and the future potential of data quality tooling. Plus, we explore how AI tools like ChatGPT are reshaping research and productivity. Tune in for a deep dive into the intersection of databases, machine learning, and data validation!
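DuckDQ's own API is documented in the paper and repository linked below. Purely to illustrate the underlying idea (quality checks compiled into a single aggregation pass that DuckDB evaluates over in-memory data), here is a hypothetical, hand-written version of such a check; it is not the DuckDQ interface.

```python
import duckdb
import pandas as pd

# Hypothetical illustration, not the DuckDQ API: the underlying idea of
# compiling several data-quality checks into a single aggregation query
# that DuckDB evaluates in one pass over an in-memory DataFrame.
df = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "amount": [10.0, 5.5, None, 7.25],
})

checks = duckdb.sql("""
    SELECT
        count(*) = count(DISTINCT id)                  AS id_is_unique,
        count(*) FILTER (WHERE amount IS NULL) = 0     AS amount_is_complete,
        min(amount) >= 0                               AS amount_is_non_negative
    FROM df
""").fetchone()
print(checks)  # (True, False, True) -> the completeness check fails
```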


Resources:

  • GitHub
  • Paper
  • Slides
  • Till's Homepage
  • datasketches extension (released by a DuckDB community member 2 weeks after we recorded!)

Hosted on Acast. See acast.com/privacy for more information.

7 months ago
58 minutes 12 seconds

Disseminate x DuckDB Coming Soon...

Hey folks!


We have been collaborating with everyone's favourite in-process SQL OLAP database management system, DuckDB, to bring you a new podcast series: the DuckDB in Research series!


At Disseminate our mission is to bridge the gap between research and industry by exploring research that has a real-world impact. DuckDB embodies this synergy—decades of research underpin its design, and now it’s making waves in the research community as a platform for others to build on. That is exactly what this series will focus on!


Join us as we kick off the series with:

📌 Daniel ten Wolde – DuckPGQ, a graph workload extension for DuckDB supporting SQL/PGQ

📌 David Justen – POLAR: Adaptive, non-invasive join order selection

📌 Till Döhmen – DuckDQ: A Python library for data quality checks in ML pipelines

📌 Arjen de Vries – FAISS extension for vector similarity search in DuckDB

📌 Harry Gavriilidis – SheetReader: Efficient spreadsheet parsing


Whether you're a researcher, engineer, or just curious about the intersection of databases and innovation, we are sure you will love this series.


Subscribe now and stay tuned for our first episode! 🚀


Hosted on Acast. See acast.com/privacy for more information.

8 months ago
2 minutes 40 seconds

High Impact in Databases with... Anastasia Ailamaki

In this High Impact in Databases episode we talk to Anastasia Ailamaki.


Anastasia is a Professor of Computer and Communication Sciences at the École Polytechnique Fédérale de Lausanne (EPFL). Tune in to hear Anastasia's story!


The podcast is proudly sponsored by Pometry the developers behind Raphtory, the open source temporal graph analytics engine for Python and Rust.


You can find Anastasia on:

  • Homepage
  • Google Scholar
  • LinkedIn



Hosted on Acast. See acast.com/privacy for more information.

8 months ago
46 minutes 17 seconds

Anastasiia Kozar | Fault Tolerance Placement in the Internet of Things | #61

In this episode, we chat with Anastasiia Kozar about her research on fault tolerance in resource-constrained environments. As IoT applications leverage sensors, edge devices, and cloud infrastructure, ensuring system reliability at the edge poses unique challenges. Unlike the cloud, edge devices operate without persistent backups or high availability standards, leading to increased vulnerability to failures. Anastasiia explains how traditional methods fall short, as they fail to align resource allocation with fault tolerance needs, often resulting in system underperformance.


To address this, Anastasiia introduces a novel resource-aware approach that combines operator placement and fault tolerance into a unified process. By optimizing where and how data is backed up, her solution significantly improves system reliability, especially for low-end edge devices with limited resources. The result? Up to a tenfold increase in throughput compared to existing methods. Tune in to learn more!


Links:

  • Fault Tolerance Placement in the Internet of Things [SIGMOD'24]
  • The NebulaStream Platform: Data and Application Management for the Internet of Things [CIDR'20]
  • nebula.stream




Hosted on Acast. See acast.com/privacy for more information.

10 months ago
49 minutes 2 seconds

Liana Patel | ACORN: Performant and Predicate-Agnostic Hybrid Search | #60

In this episode, we chat with Liana Patel to discuss ACORN, a groundbreaking method for hybrid search in applications using mixed-modality data. As more systems require simultaneous access to embedded images, text, video, and structured data, traditional search methods struggle to maintain efficiency and flexibility. Liana explains how ACORN, leveraging Hierarchical Navigable Small Worlds (HNSW), enables efficient, predicate-agnostic searches by introducing innovative predicate subgraph traversal. This allows ACORN to outperform existing methods significantly, supporting complex query semantics and achieving 2–1,000 times higher throughput on diverse datasets. Tune in to learn more!


Links:

  • ACORN: Performant and Predicate-Agnostic Search Over Vector Embeddings and Structured Data [SIGMOD'24]
  • Liana's LinkedIn
  • Liana's X


Hosted on Acast. See acast.com/privacy for more information.

11 months ago
52 minutes 49 seconds

High Impact in Databases with... David Maier

In this High Impact episode we talk to David Maier.


David is the Maseeh Professor Emeritus of Emerging Technologies at Portland State University. Tune in to hear David's story and learn about some of his most impactful work.


The podcast is proudly sponsored by Pometry the developers behind Raphtory, the open source temporal graph analytics engine for Python and Rust.


You can find David on:

  • Homepage
  • Google Scholar


Hosted on Acast. See acast.com/privacy for more information.

12 months ago
1 hour 2 minutes 24 seconds

Raunak Shah | R2D2: Reducing Redundancy and Duplication in Data Lakes | #59

In this episode, Raunak Shah joins us to discuss the critical issue of data redundancy in enterprise data lakes, which can lead to soaring storage and maintenance costs. Raunak highlights how large-scale data environments, ranging from terabytes to petabytes, often contain duplicate and redundant datasets that are difficult to manage. He introduces the concept of "dataset containment" and explains its significance in identifying and reducing redundancy at the table level in these massive data lakes—an area where there has been little prior work.


Raunak then dives into the details of R2D2, a novel three-step hierarchical pipeline designed to efficiently tackle dataset containment. By utilizing schema containment graphs, statistical min-max pruning, and content-level pruning, R2D2 progressively reduces the search space to pinpoint redundant data. Raunak also discusses how the system, implemented on platforms like Azure Databricks and AWS, offers significant improvements over existing methods, processing TB-scale data lakes in just a few hours with high accuracy. He concludes with a discussion on how R2D2 optimally balances storage savings and performance by identifying datasets that can be deleted and reconstructed on demand, providing valuable insights for enterprises aiming to streamline their data management strategies.
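To give a flavour of one of those steps, here is a hypothetical sketch (not the R2D2 code) of min-max pruning for containment: if any shared column of a candidate table has values outside the corresponding [min, max] range of the other table, containment is impossible and the expensive content-level check can be skipped.

```python
import pandas as pd

# Hypothetical sketch of the min-max pruning step (not the R2D2 code):
# a cheap statistics-only test that can rule out "A is contained in B"
# before any expensive content-level comparison is attempted.
def minmax_rules_out_containment(a: pd.DataFrame, b: pd.DataFrame, shared_cols) -> bool:
    for col in shared_cols:
        if a[col].min() < b[col].min() or a[col].max() > b[col].max():
            return True   # some value of A falls outside B's range: prune this pair
    return False          # inconclusive: fall through to content-level checks

a = pd.DataFrame({"price": [5, 7, 9]})
b = pd.DataFrame({"price": [1, 5, 7, 9, 12]})
print(minmax_rules_out_containment(a, b, ["price"]))  # False -> A may be contained in B
```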


Materials:

  • SIGMOD'24 Paper - R2D2: Reducing Redundancy and Duplication in Data Lakes
  • ICDE'24 - Towards Optimizing Storage Costs in the Cloud




Hosted on Acast. See acast.com/privacy for more information.

1 year ago
31 minutes 9 seconds