
Learner reviews and feedback for Probabilistic Graphical Models 1: Representation by Stanford University

1,316 ratings
293 reviews

About the Course

Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems. This course is the first in a sequence of three. It describes the two basic PGM representations: Bayesian Networks, which rely on a directed graph; and Markov networks, which use an undirected graph. The course discusses both the theoretical properties of these representations as well as their use in practice. The (highly recommended) honors track contains several hands-on assignments on how to represent some real-world problems. The course also presents some important extensions beyond the basic PGM representation, which allow more complex models to be encoded compactly.
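As a minimal sketch of the directed-graph idea described above (not taken from the course materials; the network, variable names, and probabilities here are invented for illustration), a two-node Bayesian network factorizes the joint distribution along its edges via the chain rule:

```python
# A tiny Bayesian network: Rain -> WetGrass.
# The directed graph licenses the factorization
#   P(rain, wet) = P(rain) * P(wet | rain)

# Prior over the parent node.
P_rain = {True: 0.2, False: 0.8}

# Conditional probability table P(wet | rain),
# indexed first by the parent's value, then the child's.
P_wet_given_rain = {
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.1, False: 0.9},
}

def joint(rain: bool, wet: bool) -> float:
    """Joint probability via the chain-rule factorization."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Marginalize out Rain to get P(wet = True).
p_wet = sum(joint(r, True) for r in (True, False))
```

The same product-of-local-factors pattern is what lets PGMs represent joint distributions over many variables compactly: each node only stores a table conditioned on its parents rather than a table over all variables.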

Top Reviews

Jul 12, 2017

Prof. Koller did a great job communicating difficult material in an accessible manner. Thanks to her for starting Coursera and offering this advanced course so that we can all learn...Kudos!!

Oct 22, 2017

The course was deep, and well-taught. This is not a spoon-feeding course like some others. The only downsides were some "mechanical" problems (e.g. code submission didn't work for me).

Filter by:

251 - 275 of 286 Reviews for Probabilistic Graphical Models 1: Representation

by Ian M C

Dec 26, 2018

The writing on the slides is hard to read.

by Soumyadipta D

Jul 16, 2019

Lectures are too fast; otherwise great.

by Sunsik K

Jul 31, 2018

Broad introduction to general issues

by Tianyi X

Feb 20, 2018

Lacks a top-down overview of PGMs.

by Sunil

Sep 12, 2017

Great intro to probabilistic models

by Nikesh B

Nov 6, 2016


by Tianqi Y

Jun 19, 2019

Too hard.

by Yashwanth M

Jan 5, 2020


by Ricardo A M C

Jan 9, 2021


by Max B

Dec 19, 2020

This review is for the whole Specialization, not just course 1. The lectures & subject matter are fascinating, but the course itself has some serious limitations:

1) Two of the most common example problems the instructor uses are image segmentation & speech recognition, both of which have been completely superseded thanks to neural networks (CNNs for the former, RNNs for the latter). The course was written in 2011 or 2012, and the lectures haven't been updated since.

2) The textbook is extremely useful, but they do not provide a PDF, though it is easy to find via Google. The professor does not give explicit "readings"; you just have to find them on your own.

3) The Discussion Forums are effectively dead, nobody involved with the construction of the course has gone through them in 4 or 5 years, and most learner comments are several years old as well. In other words, you're on your own as far as figuring things out.

4) Quizzes & exams have no partial credit, often have "gotcha" questions, and enforce time delays between attempts (1 hour for quizzes, 24 hours for exams).

5) By far the biggest problem, however, is the programming assignments: they must be done in Matlab/Octave. I've taken many other courses outside this Specialization, so I say with confidence that the lion's share of the learning occurs in solving programming assignments. In the 3rd course especially, the programming assignments are exactly the same ones assigned to students taking the course in real life at Stanford, where it was assumed that students would work together in groups to solve them. They are not of a reasonable difficulty level, from a pedagogical standpoint, for a distributed, asynchronous, online course.

All of these problems ultimately stem from the fact that this was among the first courses on Coursera (Daphne Koller is one of the founders of Coursera), before they really understood how to properly convert a university course into an online course. Unfortunately, whereas Koller's colleague Andrew Ng has put in a lot of work updating his Coursera courses, Daphne appears to have abandoned this one (to be fair, she is very busy running companies doing fascinating work).

I recommend Andrew Ng's Deep Learning Specialization and University of Alberta's Reinforcement Learning Specialization for learning ML content, though the former can be quite hand-holdy at times.

Good luck,


by Paul C

Oct 31, 2016

I found plenty of useful information in this course overall, but lectures often spent too much time dwelling on the details of simpler concepts, while more complex areas, and sometimes critical information that was later built upon, were only touched on briefly or sometimes skipped entirely. I missed a sense of continuity as we skipped from model to model, with a minimum of time spent on how the models complement each other and on their relative strengths and weaknesses in application.

The way data structures were defined in the code was particularly difficult to deal with, and the coding exercises all suffered as a result. It ended up taking way too much time to figure out how to decode the data and trace the logic around it. This meant that grasping concepts and learning from the questions came in a distant second priority to debugging.

Dr Koller mentioned that the material is aimed at postgraduates. I felt that the level of content covered here could just as easily be grasped by most undergraduates in technical disciplines if it had been delivered in a more structured manner, with clearer progression across models (conceptually and mathematically) and better code examples. When delivering in this format, allowances need to be made for the fact that tutorial sessions do not exist and the possibilities for informal Q&A are limited, so any gaps become very difficult for students to fill in themselves.

Despite the above shortcomings I'm glad I did the course and I would still recommend it to someone interested in graphical models as it does cover the basics well enough to make a decent start. I'm not sure whether or not I'd recommend the programming exercises as they are a significant time sink but at the same time, without spending time attacking the programming problems the concepts are not likely to gel based on the video and quizzes alone.

by Nicholas E

Oct 29, 2016

The course was very interesting and thought-provoking. I found the introduction to probabilistic graphical models (PGMs) and their properties struck a nice balance between intuition and formalism. The discussions highlighted exciting aspects of their power in simplifying complex problems involving uncertainty. However, I still do not feel I could propose convincing PGMs for real-world problems. There are examples in the course, but they are far removed from being concrete applications. I would have preferred an in-depth analysis of an application of PGMs from the literature over the lengthy programming assignments. I am an experienced programmer with over 5 years of experience in many languages, including MATLAB/Octave, and I sometimes found it uninspiring to solve toy problems: not because of any difficulty in using the programming language, but because after completing an assignment I felt I had not really learnt much more than I would have from just watching the lectures. That said, if you are interested in getting experience with MATLAB/Octave, the programming assignments are good practice. I qualify this by stating that I have not yet completed the next two courses on PGMs; this course may present an essential foundation for the upcoming courses, and in any case it provoked my interest in learning more about them.

by Mahendra K

Oct 4, 2017

The course is highly theoretical. It would have been great if it were paced well and driven by real-world examples. I am not saying that there are no examples, but it would have been better if the concepts were introduced via real-world examples instead of first presenting the concept and then its applications.

What would have been even better is if Python were an option for the programming assignments. Octave can't be used in an industry setting where the amount of data is really large. Both Python and Octave should have been options so that students can decide for themselves.

by John E M

Mar 31, 2018

Lectures were OK, and the quizzes and exams appropriately difficult. But the labs were pretty difficult, especially lab 4, which I ended up surrendering on. This means I didn't do the accompanying quiz and gave up on the possibility of honors recognition as well.

While the labs don't have to be as hand-holding as the DeepLearning class by Coursera, it would be nice to get more help, and maybe not have errors reported for the parts I haven't tackled yet when submitting (as the DeepLearning and MachineLearning courses figured out how to do).

by Kervin P

Jan 5, 2017

This is an amazing course, taught by an extremely talented and accomplished professor. I believe it's a must for anyone in AI/ML or statistical inference. The problem is that you're essentially on your own the entire course. There isn't any community or TA help to speak of. And the project is done in MATLAB, so you end up wrestling with MATLAB or Octave instead of actually doing and learning. I still recommend the course, but that's only because the material is so extremely important.

by Daniel S

Dec 11, 2019

Prof. Koller is exceptional. However, the focus of the course is on the theory and less on applications, unless one chooses to complete the Honors section of the course. I personally did not have the time to learn a new language's syntax to attempt the Honors section, which is a shame. I do hope that this course is updated so that R/Python replaces Octave/MATLAB, because that would give professional analysts more opportunity to explore the Honors content. Thanks!

by Volodymyr D

Apr 11, 2020

A useful course on a great subject, but poorly explained and supported. It was quite hard for me to grasp the implicit ideas and the Honors assignments. I ended up skipping the Honors assignments since they're explained really, really poorly, and I spent most of my time trying to figure out what I was required to do. The forums are inactive and no mentors reply to the posts. I don't recommend taking this course if you don't have someone to guide and help you.

by Sami J

Apr 22, 2020

Material is interesting but needs updating. Programming assignments have been marked as "Honors Assignments", which is a thinly veiled attempt to shirk responsibility for fixing bugs and providing student support. Quiz questions are vaguely worded. Overall the course is challenging, but only sometimes for the right reasons.

by Shen C

Jul 14, 2020

This course is a very difficult one and takes a lot of time and effort. The forum is really useful (I wouldn't have passed without it), though that is partly because there is little help from the lecturer and instructors. I would appreciate more help.

by Vladimir R

Jan 12, 2021

Great topic, the professor is a top expert in the field, but the grading interface badly needs an upgrade. It is not acceptable for students to have to manually hack JSON submissions just to get around grader errors.

by Christos G

Mar 9, 2018

Quite difficult, with not much help in the discussion forums; some assignments had insufficient supporting material and explanations. Challenging overall; I thought about abandoning it at least 3-4 times.

by Siavash R

Aug 10, 2017

For me this was a difficult course not because of the material, but because of the teaching style. I don't think Dr. Koller is a very good teacher.

by Roman G

Nov 4, 2016

The audio is VERY VERY poor.

That makes it very hard to understand what Prof. Koller is trying to impart to us.

I often lost track.

by Xingjian Z

Nov 2, 2017

Fun topic, but the mentor's explanations are somewhat vague and the material is sometimes outdated and misleading.

by Ujjval P

Dec 13, 2016

Concepts covered in the quizzes and assignments are not covered well in the lecture videos; this could be much better.