Deep Learning Course ID:662
Prof. Dr. Andreas Maier
13 episodes
4 months ago

Deep Learning (DL) has attracted much interest in a wide range of applications such as image recognition, speech recognition, and artificial intelligence, both from academia and industry. This lecture introduces the core elements of neural networks and deep learning. It comprises:

  • (multilayer) perceptron, backpropagation, fully connected neural networks

  • loss functions and optimization strategies

  • convolutional neural networks (CNNs)

  • activation functions

  • regularization strategies

  • common practices for training and evaluating neural networks

  • visualization of networks and results

  • common architectures, such as LeNet, AlexNet, VGG, GoogLeNet

  • recurrent neural networks (RNN, TBPTT, LSTM, GRU)

  • deep reinforcement learning

  • unsupervised learning (autoencoder, RBM, DBM, VAE)

  • generative adversarial networks (GANs)

  • weakly supervised learning

  • applications of deep learning (segmentation, object detection, speech recognition, ...)

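The topics above are only named, not explained. As a rough illustration of the first few (the multilayer perceptron, backpropagation, a fully connected network, a loss function, and plain gradient-descent optimization), the following is a minimal NumPy sketch. It is an assumption made for illustration only, not code from the lecture; the network size, learning rate, and XOR dataset are chosen purely for the example.

  # Minimal fully connected 2-4-1 perceptron trained with backpropagation on XOR.
  # Illustrative sketch only; not taken from the course material.
  import numpy as np

  rng = np.random.default_rng(0)

  # XOR inputs and targets
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([[0], [1], [1], [0]], dtype=float)

  # Weights and biases of the two fully connected layers
  W1 = rng.normal(size=(2, 4))
  b1 = np.zeros((1, 4))
  W2 = rng.normal(size=(4, 1))
  b2 = np.zeros((1, 1))

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  lr = 1.0
  for step in range(5000):
      # forward pass
      h = sigmoid(X @ W1 + b1)      # hidden activations
      out = sigmoid(h @ W2 + b2)    # network output

      # mean squared error loss (one of many possible loss functions)
      loss = np.mean((out - y) ** 2)

      # backward pass: chain rule through the sigmoid and linear layers
      d_out = 2.0 * (out - y) / len(X) * out * (1.0 - out)
      dW2 = h.T @ d_out
      db2 = d_out.sum(axis=0, keepdims=True)
      d_h = (d_out @ W2.T) * h * (1.0 - h)
      dW1 = X.T @ d_h
      db1 = d_h.sum(axis=0, keepdims=True)

      # plain gradient-descent update
      W1 -= lr * dW1; b1 -= lr * db1
      W2 -= lr * dW2; b2 -= lr * db2

  print("final loss:", loss)
  print("predictions:", out.round(3).ravel())

Run as a script, this should print a loss near zero and outputs close to [0, 1, 1, 0], although convergence can depend on the random initialization. The later topics in the list (CNNs, regularization, recurrent networks, GANs, and so on) build on this same forward/backward principle.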
Education

Episodes (13/13)
  • 13 - Deep Learning (7 years ago, 1 hour 25 minutes 13 seconds)

  • 12 - Deep Learning (7 years ago, 1 hour 18 minutes 55 seconds)

  • 11 - Deep Learning (7 years ago, 1 hour 14 minutes 26 seconds)

  • 10 - Deep Learning (7 years ago, 1 hour 11 minutes 28 seconds)

  • 9 - Deep Learning (7 years ago, 1 hour 12 minutes 51 seconds)

  • 8 - Deep Learning (7 years ago, 55 minutes 50 seconds)

  • 7 - Deep Learning (7 years ago, 1 hour 7 minutes 27 seconds)

  • 6 - Deep Learning (7 years ago, 56 minutes 56 seconds)

  • 5 - Deep Learning (7 years ago, 1 hour 4 minutes 36 seconds)

  • 4 - Deep Learning (7 years ago, 1 hour 19 minutes 17 seconds)

  • 3 - Deep Learning (7 years ago, 1 hour 7 minutes 21 seconds)

  • 2 - Deep Learning (7 years ago, 1 hour 28 minutes 39 seconds)

  • 1 - Deep Learning (7 years ago, 1 hour 33 minutes 15 seconds)
