
Learner reviews and feedback for Sample-based Learning Methods by University of Alberta

17 ratings
7 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies through trial-and-error interaction with the environment, that is, from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal difference updates to radically accelerate learning.

By the end of this course you will be able to:

- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
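To make the kind of algorithm the description refers to concrete, here is a minimal sketch of tabular Q-learning, one of the TD control methods listed above. The environment (a toy deterministic chain where moving right eventually reaches a rewarding goal state) and all hyperparameters are illustrative choices, not taken from the course materials.

```python
import numpy as np

# Toy deterministic chain: states 0..4, actions 0 (left) / 1 (right);
# reaching state 4 yields reward 1 and ends the episode.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(state, action):
    next_state = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy behavior policy
            if rng.random() < epsilon:
                action = int(rng.integers(N_ACTIONS))
            else:
                action = int(np.argmax(Q[state]))
            next_state, reward, done = step(state, action)
            # Off-policy TD target: bootstrap from the max over next actions
            target = reward + gamma * np.max(Q[next_state]) * (not done)
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q

Q = q_learning()
# After training, the greedy policy should move right in every non-terminal state.
print(np.argmax(Q, axis=1)[:GOAL])
```

The key line is the TD target: unlike Monte Carlo, which waits for the full episode return, Q-learning updates each state-action value immediately from the sampled reward plus a bootstrapped estimate of the next state's value.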

Top Reviews


1 - 9 of 9 reviews for Sample-based Learning Methods

by Manuel V d S

Sep 11, 2019

Course was amazing until I reached the final assignment. What a terrible way to grade the notebook part. Also, nobody around in the forums to help... I would still recommend this to anyone interested, unless you have no intention of doing the weekly readings.

by Yanick P

Sep 17, 2019

Course material for the next course is not available, but Coursera still charged me $$

by Stewart A

Sep 03, 2019

Great course! Lots of hands-on RL algorithms. I'm looking forward to the next course in the specialization.

by LuSheng Y

Sep 10, 2019

Very good.

by Ashish S

Sep 16, 2019

A good course with proper Mathematical insights

by Luiz C

Sep 13, 2019

Great Course. Every aspect top notch

by Alejandro D

Sep 19, 2019

Excellent content and delivery.

by Sodagreenmario

Sep 18, 2019

Great course, but there are still some little bugs that can be fixed in notebook assignments.

by Neil S

Sep 12, 2019

This is THE course to go with Sutton & Barto's Reinforcement Learning: An Introduction.

It's great to be able to repeat the examples from the book and end up writing code that outputs the same diagrams, e.g. Dyna-Q comparisons for planning. The notebooks strike a good balance between hand-holding for new topics and letting you make your own mistakes and learn from them.

I would rate five stars, but decided to drop one for now as there are still some glitches in the coding of Notebook assignments, requiring work-arounds communicated in the course forums. I hope these will be worked on and the course materials polished to perfection in future.