Distributed Data Analytics (WT 2017/18) - tele-TASK
Dr. Thorsten Papenbrock
14 episodes
1 hour ago
The free lunch is over! Until the turn of the century, computer systems became constantly faster without any particular effort, simply because the hardware they ran on increased its clock speed with every new release. This trend has changed: today's CPUs stall at around 3 GHz. The size of modern computer systems in terms of processing elements (cores in CPUs/GPUs, CPUs/GPUs in compute nodes, compute nodes in clusters), however, still increases constantly. This caused a paradigm shift in writing software: instead of optimizing code for a single thread, applications now need to solve their tasks in parallel to achieve noticeable performance gains. Distributed computing, i.e., the distribution of work across (potentially) physically isolated compute nodes, is the most extreme form of parallelization.

Big Data Analytics is a multi-million-dollar market that grows constantly. Data, and the ability to control and use it, is among the most valuable assets of today's computer systems. Because data volumes grow rapidly, and with them the complexity of the questions they should answer, data analytics, i.e., extracting any kind of information from the data, becomes increasingly difficult. Since data analytics systems can no longer hope for faster hardware to solve their performance problems, they need to embrace software techniques that let their performance scale with the still-increasing number of processing elements.

In this lecture, we take a look at various technologies involved in building distributed, data-intensive systems. We discuss theoretical concepts (data models, encoding, replication, ...) as well as some of their practical implementations (Akka, MapReduce, Spark, ...). Since workload distribution is useful for many applications, we focus in particular on data analytics.
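To give a flavor of the MapReduce paradigm mentioned above, here is a minimal, self-contained sketch of a parallel word count in plain Python. It is an illustration only, not material from the lecture: the map phase runs across worker processes via `multiprocessing.Pool`, and the shuffle and reduce phases are written out by hand.

```python
from collections import defaultdict
from multiprocessing import Pool

def map_phase(line):
    # Map: emit a (word, 1) pair for every word in one input line
    return [(word, 1) for word in line.split()]

def word_count(lines):
    # Run the map phase in parallel across worker processes
    with Pool(processes=2) as pool:
        mapped = pool.map(map_phase, lines)
    # Shuffle: group all emitted counts by their key (the word)
    groups = defaultdict(list)
    for pairs in mapped:
        for word, count in pairs:
            groups[word].append(count)
    # Reduce: sum the grouped counts per word
    return {word: sum(counts) for word, counts in groups.items()}

if __name__ == "__main__":
    lines = ["the free lunch is over", "the lunch is free"]
    print(word_count(lines))
```

In a real framework such as Hadoop MapReduce or Spark, the shuffle step and the distribution of work across physically separate compute nodes are handled by the system; only the map and reduce functions are user code.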
Courses
Education
Episodes (14/14)
Course Summary
7 years ago
1 hour 2 minutes 3 seconds

Stream Processing
7 years ago
1 hour 34 minutes 45 seconds

Spark Batch Processing
7 years ago
49 minutes 14 seconds

Batch Processing
7 years ago
1 hour 39 minutes 8 seconds

Consistency and Consensus
7 years ago
1 hour 35 minutes 38 seconds

Distributed Systems
7 years ago
1 hour 34 minutes 50 seconds

Partitioning & Transactions
7 years ago
1 hour 31 minutes 12 seconds

Replication
7 years ago
1 hour 23 minutes 21 seconds

Akka Actor Programming
7 years ago
1 hour 23 minutes 26 seconds

Formats for Encoding Data & Models of Dataflow
7 years ago
1 hour 32 minutes 38 seconds

Storage and Retrieval
8 years ago
1 hour 6 minutes 8 seconds

The Document Data Model & The Graph Data Model
8 years ago
1 hour 32 minutes

Foundations & Data Models and Query Languages
8 years ago
1 hour 31 minutes 23 seconds

Introduction & Foundations
8 years ago
1 hour 29 minutes 32 seconds
