Distributed Data Management (WT 2019/20) - tele-TASK
Dr. Thorsten Papenbrock
27 episodes
17 hours ago
The free lunch is over! Computer systems up until the turn of the century became steadily faster without any particular effort, simply because the hardware they were running on increased its clock speed with every new release. That trend has ended, and today's CPU clock rates stall at around 3 GHz. The size of modern computer systems in terms of processing elements (cores per CPU/GPU, CPUs/GPUs per compute node, compute nodes per cluster), however, still grows steadily. This has caused a paradigm shift in software development: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically separated compute nodes, is the most extreme form of parallelization.

Big data analytics is a multi-million-dollar market that keeps growing. Data, and the ability to control and use it, is the most valuable asset of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they are expected to answer, data analytics, i.e., extracting information of any kind from the data, becomes increasingly difficult. Since data analytics systems cannot hope for their hardware to get faster to cope with performance problems, they need to embrace software techniques that let their performance scale with the still-growing number of processing elements.

In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We discuss theoretical concepts (data models, encoding, replication, ...) as well as some of their practical implementations (Akka, MapReduce, Spark, ...). Since workload distribution is a concept that is useful for many applications, we focus in particular on data analytics.
Courses
Education
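To make the data-parallel style described in the course blurb concrete, here is a minimal word-count sketch using PySpark's RDD API. It is not taken from the lecture material; the local master setting and the file name input.txt are assumptions for illustration only. The same map/reduce pattern underlies the MapReduce and Spark batch-processing sessions listed below.

# Minimal PySpark word-count sketch (illustrative only, not from the lecture).
# Assumes a local PySpark installation; "input.txt" is a hypothetical file.
from pyspark import SparkContext

sc = SparkContext("local[*]", "word-count-sketch")  # "local[*]": use all local cores

counts = (
    sc.textFile("input.txt")                  # split the file into partitions
      .flatMap(lambda line: line.split())     # map phase: emit individual words
      .map(lambda word: (word, 1))            # key each word with a count of 1
      .reduceByKey(lambda a, b: a + b)        # reduce phase: sum counts per word
)

for word, n in counts.take(10):               # fetch a small sample of results
    print(word, n)

sc.stop()

Because each partition is processed independently, the same program can scale from a single multi-core machine to a cluster by changing only the master URL, which is the kind of scaling with processing elements the lecture is about.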
Episodes (20/27)
Exam Preparation (5 years ago, 1 hour 31 minutes 1 second)
Distributed DBMSs & Distributed Query Optimization (5 years ago, 1 hour 29 minutes 20 seconds)
Distributed DBMSs (5 years ago, 1 hour 27 minutes 24 seconds)
Stream Processing & Distributed DBMSs (5 years ago, 1 hour 28 minutes 22 seconds)
Stream Processing (5 years ago, 1 hour 26 minutes 2 seconds)
Spark Batch Processing & Stream Processing (5 years ago, 1 hour 27 minutes 6 seconds)
Spark Batch Processing 2 (5 years ago, 1 hour 30 minutes 5 seconds)
Spark Batch Processing (5 years ago, 1 hour 28 minutes 12 seconds)
Exercise Evaluation Assignment 1-3 (5 years ago, 1 hour 25 minutes 53 seconds)
Beyond MapReduce (5 years ago, 1 hour 29 minutes 4 seconds)
Batch Processing: Distributed File Systems and MapReduce (5 years ago, 1 hour 31 minutes 30 seconds)
Transactions & Batch Processing (5 years ago, 1 hour 11 minutes 9 seconds)
Consistency and Consensus & Transactions (5 years ago, 1 hour 26 minutes 42 seconds)
Distributed Systems & Consistency and Consensus (5 years ago, 1 hour 29 minutes 52 seconds)
Distributed Systems (5 years ago, 1 hour 32 minutes 13 seconds)
Replication & Partitioning (5 years ago, 1 hour 28 minutes 37 seconds)
Replication 2 (5 years ago, 1 hour 24 minutes 21 seconds)
Storage and Retrieval & Replication (5 years ago, 1 hour 20 minutes 49 seconds)
The Graph Data Model (5 years ago, 1 hour 29 minutes 5 seconds)
Data Models and Query Languages (5 years ago, 1 hour 28 minutes 29 seconds)