Distributed Data Management (WT 2018/19) - tele-TASK
Prof. Dr. Felix Naumann, Dr. Thorsten Papenbrock
26 episodes
15 hours ago
The free lunch is over! Computer systems up until the turn of the century became constantly faster without any particular effort, simply because the hardware they ran on increased its clock speed with every new release. That trend has ended, and today's CPUs stall at around 3 GHz. The size of modern computer systems in terms of contained transistors (cores in CPUs/GPUs, CPUs/GPUs in compute nodes, compute nodes in clusters), however, still increases constantly. This has caused a paradigm shift in writing software: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel in order to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically isolated compute nodes, is the most extreme form of parallelization.

Big Data Analytics is a multi-million-dollar market that grows constantly. Data, and the ability to control and use it, is among the most valuable assets of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they should answer, data analytics, i.e., the ability to extract any kind of information from the data, becomes increasingly difficult. Since data analytics systems cannot hope for their hardware to get faster to cope with performance problems, they need to embrace software techniques that let their performance scale with the still-increasing number of processing elements.

In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We discuss theoretical concepts (data models, encoding, replication, ...) as well as some of their practical implementations (Akka, MapReduce, Spark, ...). Since workload distribution is a concept that is useful for many applications, we focus in particular on data analytics.
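To give a flavor of the programming model behind MapReduce and Spark mentioned above, here is a minimal word-count sketch of the map/shuffle/reduce pattern in plain Python. This is purely illustrative (not course material): in a real framework the map and reduce calls would run in parallel on different workers, and the shuffle would move data across the network.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (word, 1) pairs from each document independently.
    # Each document could be processed by a separate worker.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate each key's values into a final count.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the free lunch is over", "the lunch was free"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["the"])   # 2
print(counts["free"])  # 2
```

Because each map call touches only its own document and each reduce call only its own key, both phases parallelize trivially; only the shuffle requires coordination.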
Courses
Education
RSS
All content for Distributed Data Management (WT 2018/19) - tele-TASK is the property of Prof. Dr. Felix Naumann, Dr. Thorsten Papenbrock and is served directly from their servers with no modification, redirects, or rehosting. The podcast is not affiliated with or endorsed by Podjoint in any way.
Episodes (20/26)
Lecture Summary (6 years ago, 1 hour 32 minutes 33 seconds)
Distributed Query Optimization (1) (6 years ago, 1 hour 26 minutes 3 seconds)
Distributed Query Optimization (2) (6 years ago, 1 hour 23 minutes 23 seconds)
Processing Streams (6 years ago, 1 hour 33 minutes 58 seconds)
Stream Processing (6 years ago, 1 hour 27 minutes 31 seconds)
Transactions (6 years ago, 1 hour 29 minutes 57 seconds)
Consistency and Consensus (6 years ago, 1 hour 30 minutes 2 seconds)
Distributed Systems (6 years ago, 1 hour 33 minutes 19 seconds)
Spark - Hands On (6 years ago, 1 hour 28 minutes 41 seconds)
Apache Spark (6 years ago, 1 hour 29 minutes 38 seconds)
Beyond MapReduce (6 years ago, 1 hour 29 minutes 41 seconds)
Distributed File Systems and MapReduce (6 years ago, 1 hour 27 minutes 15 seconds)
Batch Processing (6 years ago, 1 hour 29 minutes 20 seconds)
Partitioning (6 years ago, 1 hour 19 minutes 6 seconds)
Replication (6 years ago, 1 hour 26 minutes 56 seconds)
Storage and Retrieval (6 years ago, 1 hour 27 minutes 1 second)
Data Models and Query Languages (6 years ago, 1 hour 24 minutes 5 seconds)
Patterns (6 years ago, 1 hour 29 minutes)
Akka Actor-Programming Part 2 (6 years ago, 1 hour 29 minutes 57 seconds)
Akka Actor-Programming Hands-on (6 years ago, 1 hour 28 minutes 43 seconds)