
Learner reviews and feedback for Machine Learning: Clustering & Retrieval by the University of Washington

4.6
stars
2,008 ratings
340 reviews

About the Course

Case Study: Finding Similar Documents

A reader is interested in a specific news article and you want to find similar articles to recommend. What is the right notion of similarity? Moreover, what if there are millions of other documents? Each time you want to retrieve a new document, do you need to search through all other documents? How do you group similar documents together? How do you discover new, emerging topics that the documents cover?

In this third case study, finding similar documents, you will examine similarity-based algorithms for retrieval. You will also examine structured representations for describing the documents in the corpus, including clustering and mixed membership models, such as latent Dirichlet allocation (LDA). You will implement expectation maximization (EM) to learn the document clusterings, and see how to scale the methods using MapReduce.

Learning Outcomes: By the end of this course, you will be able to:
- Create a document retrieval system using k-nearest neighbors.
- Identify various similarity metrics for text data.
- Reduce computations in k-nearest neighbor search by using KD-trees.
- Produce approximate nearest neighbors using locality sensitive hashing.
- Compare and contrast supervised and unsupervised learning tasks.
- Cluster documents by topic using k-means.
- Describe how to parallelize k-means using MapReduce.
- Examine probabilistic clustering approaches using mixture models.
- Fit a mixture of Gaussians model using expectation maximization (EM).
- Perform mixed membership modeling using latent Dirichlet allocation (LDA).
- Describe the steps of a Gibbs sampler and how to use its output to draw inferences.
- Compare and contrast initialization techniques for non-convex optimization objectives.
- Implement these techniques in Python.
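To make the first learning outcome concrete, here is a minimal sketch of a k-nearest-neighbor document retrieval system built on TF-IDF features. It is only an illustrative sketch under stated assumptions, not the course's actual assignment: the toy corpus, the query, and the use of scikit-learn's TfidfVectorizer and NearestNeighbors are choices made for this example.

    # Minimal sketch of k-nearest-neighbor document retrieval with TF-IDF features.
    # Assumes scikit-learn is installed; the corpus and query are made-up placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.neighbors import NearestNeighbors

    corpus = [
        "The stadium hosted the championship final last night.",
        "The central bank raised interest rates again this quarter.",
        "A late goal decided the championship match.",
        "Researchers announced a new clustering algorithm for text.",
    ]

    # Represent each document as a TF-IDF vector.
    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vectors = vectorizer.fit_transform(corpus)

    # Index the corpus for nearest-neighbor search under cosine distance.
    index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(doc_vectors)

    # Retrieve the two documents most similar to a query article.
    query_vector = vectorizer.transform(["Fans celebrated after the final championship goal."])
    distances, neighbor_ids = index.kneighbors(query_vector)
    for dist, doc_id in zip(distances[0], neighbor_ids[0]):
        print(f"distance={dist:.3f}  document: {corpus[doc_id]}")

This version scans every document for each query; the course goes on to cover KD-trees and locality sensitive hashing as ways to avoid that exhaustive search when the corpus contains millions of documents.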

Top reviews

BK

Aug 25, 2016

Excellent material! It would be nice, however, to mention some reading material, books or articles, for those interested in the details and the theories behind the concepts presented in the course.

JM

Jan 17, 2017

Excellent course, well thought out lectures and problem sets. The programming assignments offer an appropriate amount of guidance that allows the students to work through the material on their own.

Filter by:

226 - 250 of 328 reviews for Machine Learning: Clustering & Retrieval

by 邓松

Jan 04, 2017

very helpful

by Jiancheng

Oct 27, 2016

Great intro!

by Thuong D H

Sep 23, 2016

Good course!

by Karundeep Y

Sep 18, 2016

Best Course.

by Pradeep N

Feb 22, 2017

"super one,

by clark.bourne

Jan 09, 2017

Rich and practical content, with complete materials.

by VITTE

Nov 11, 2018

Excellent.

by Gautam R

Oct 08, 2016

Awesome :)

by Subhadip P

Aug 05, 2020

excellent

by Alan B

Jul 03, 2020

Excellent

by RISHABH T

Nov 12, 2017

excellent

by Iñigo C S

Aug 08, 2016

Amazing.

by Mr. J

May 23, 2020

Superb.

by Bingyan C

Dec 27, 2016

great.

by Cuiqing L

Nov 05, 2016

great!

by Job W

Jul 23, 2016

Great!

by Frank

Nov 23, 2016

Fantastic!

by Pavithra M

May 24, 2020

nil

by Alexandre

Oct 23, 2016

ok

by Nagendra K M R

Nov 11, 2018

G

by Suneel M

May 09, 2018

E

by Lalithmohan S

Mar 26, 2018

V

by Ruchi S

Jan 24, 2018

E

by Kevin C N

Mar 26, 2017

E

by Asifur R M

Mar 19, 2017

For me, this was the toughest of the first four courses in this specialization (now that the last two are cancelled, these are the only four courses in the specialization). I'm satisfied with what I gained in the process of completing these four courses. While I've forgotten most of the details, especially those in the earlier courses, I now have a clearer picture of the lay of the land and am reasonably confident that I can use some of these concepts in my work. I also recognize that learning of this kind is a life-long process. My plan next is to go through [https://www.amazon.com/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370], which, based on my reading of the first chapter, promises to be an excellent way to review and clarify the concepts taught in these courses.

What I liked most about the courses in this specialization is the good use of visualization to explain challenging concepts and the use of programming exercises to connect abstract discussions with real-world data. What I'd have liked to have more of is exercises that serve as building blocks -- short and simple exercises (programming or otherwise) that progressively build one's understanding of a concept before tackling real-world data problems. edX does a good job in this respect.

My greatest difficulty was in keeping the matrix notations straight. I don't have any linear algebra background beyond some matrix mathematics at the high school level. That hasn't been much of a problem in the earlier three courses, but in this one I really started to feel the need to gain some fluency in linear algebra. [There's an excellent course on the subject at edX: https://courses.edx.org/courses/course-v1%3AUTAustinX%2BUT.5.05x%2B1T2017/ and I'm currently working through it.]

Regardless of what various machine learning courses mention as prerequisites, I think students would benefit from first developing a strong foundation in programming (in this case Python), calculus, probability, and linear algebra. That doesn't mean one needs to know these subjects at an advanced level (of course, the more the better), but rather that the foundational concepts should be absolutely clear. I'm hoping this course at Coursera will be helpful in this regard: https://www.coursera.org/learn/datasciencemathskills/