Dueling Deep Q Learning is Simple in PyTorch

Let's code a dueling deep Q-learning agent to beat the lunar lander environment. Dueling deep Q-learning is pretty cool in that it splits the Q network into a value function and an advantage function. This unique twist on the algorithm provides a significant improvement in convergence speed on the lunar lander environment from the OpenAI Gym.

Dueling deep networks can also be incorporated into double deep Q-learning, or really any variant of deep Q-learning, since the only real change is splitting the Q function into a value stream and an advantage stream.
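As a sketch of that split (layer sizes and class names here are illustrative, not the exact code from the video), the network computes a shared feature representation, branches into a scalar value head and a per-action advantage head, and recombines them with the mean-subtraction trick from the dueling architecture paper:

```python
import torch
import torch.nn as nn

class DuelingDeepQNetwork(nn.Module):
    """Minimal dueling Q-network: shared trunk, then separate
    value and advantage heads, recombined into Q-values."""
    def __init__(self, input_dims, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(input_dims, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)              # V(s): scalar state value
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a): per-action advantage

    def forward(self, state):
        features = self.trunk(state)
        v = self.value(features)
        a = self.advantage(features)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); subtracting the mean
        # makes the V/A decomposition identifiable.
        return v + a - a.mean(dim=1, keepdim=True)

# Lunar lander has an 8-dimensional observation and 4 discrete actions.
net = DuelingDeepQNetwork(input_dims=8, n_actions=4)
q = net(torch.zeros(32, 8))  # batch of 32 states -> (32, 4) tensor of Q-values
```

The rest of the agent (replay buffer, epsilon-greedy action selection, target network updates) is unchanged from vanilla or double deep Q-learning; only the network's forward pass differs.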

#DuelingDeepQLearning #PyTorch #OpenAIGym

Learn how to turn deep reinforcement learning papers into code:

Get instant access to all my courses, including the new Prioritized Experience Replay course, with my subscription service. $29 a month gives you instant access to 42 hours of instructional content plus access to future updates, added monthly.


Discounts available for Udemy students (enrolled longer than 30 days). Just send an email to [email protected]

https://www.neuralnet.ai/courses

Or, pickup my Udemy courses here:

Deep Q Learning:
https://www.udemy.com/course/deep-q-l...

Actor Critic Methods:
https://www.udemy.com/course/actor-cr...

Curiosity Driven Deep Reinforcement Learning:
https://www.udemy.com/course/curiosit...

Natural Language Processing from First Principles:
https://www.udemy.com/course/natural-...

Reinforcement Learning Fundamentals:
https://www.manning.com/livevideo/rei...

Here are some books / courses I recommend (affiliate links):
Grokking Deep Learning in Motion: https://bit.ly/3fXHy8W
Grokking Deep Learning: https://bit.ly/3yJ14gT
Grokking Deep Reinforcement Learning: https://bit.ly/2VNAXql

Come hang out on Discord here:
  / discord  

Need personalized tutoring? Help on a programming project? Shoot me an email! [email protected]

Website: https://www.neuralnet.ai
Github: https://github.com/philtabor
Twitter: @mlwithphil
