A truly great course: it focuses on the details you need, at a good pace, building up the foundations needed before relying more heavily on libraries and abstractions (which I assume will follow).
A neatly organized course introducing students to the basics of processing text data, learning word embeddings and, most importantly, how to interpret them. Great job!!
by sukanya n•
The Viterbi algorithm could be explained better, and Week 4 seemed very rushed, with lots of details just glossed over. Compared to previous weeks, the Week 4 assignment seemed pretty easy.
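For context, the Viterbi recursion this review refers to can be sketched in a few lines of NumPy. The toy transition and emission matrices below are made up for illustration; they are not from the course materials.

```python
import numpy as np

def viterbi(pi, A, B, obs):
    """Most likely hidden-state path for an observation sequence.
    pi: initial state probs (S,), A: transitions (S, S),
    B: emissions (S, V), obs: list of observation indices."""
    S, T = A.shape[0], len(obs)
    logp = np.log(pi) + np.log(B[:, obs[0]])   # best log-prob ending in each state
    back = np.zeros((T, S), dtype=int)         # backpointers
    for t in range(1, T):
        # scores[i, j] = best path ending in i, then transition i -> j, then emit obs[t]
        scores = logp[:, None] + np.log(A) + np.log(B[:, obs[t]])[None, :]
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0)
    path = [int(logp.argmax())]                # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tiny made-up example: 2 hidden states, 3 observation symbols.
pi = np.array([0.6, 0.4])
A  = np.array([[0.7, 0.3], [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, [0, 1, 2]))  # → [0, 0, 1]
```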
by Oleh S•
This course is more mature than the first one. The materials are very interesting and provide nice intuition for the probabilistic models. One can study the basics of auto-correction, Markov and Hidden Markov Models, N-gram models, and the very important Word2Vec approach, which is an essential part of modern deep learning algorithms. I really enjoyed this part.
However, there are some minor suggestions:
1. Lectures could be longer; this would allow more depth in the materials on both the math and the code side. I know this is a simplified version of a real academic course, but to increase its quality you should consider longer lectures;
2. Programming assignments are not balanced and there are still some minor ambiguities. For instance, the first and HMM assignments are tough, whereas the last one is a piece of cake.
3. The course could be enhanced with an additional part dedicated to probability theory, maybe a few more lectures.
I recommend this course to everyone interested in NLP. Note that you should read and study the additional resources to reinforce your knowledge; this is just the starting point for a good researcher. Keep going, guys!
by Dan C•
Lots of quality-control issues. Using paid customers as proofreaders is tacky.
Homework is too easy. The answers are pretty much given to us.
by Gabriel T P C•
The lessons are shallow, the exercises too repetitive.
by Zhendong W•
A great course indeed! However, it would be even nicer to have the lecture videos at a slower pace, and maybe to go through the examples in more detail. Sometimes the jump from theory to examples felt too quick.
by Mark M•
This second course, like the first, feels like a first- or second-year university course. Sometimes the explanations are weak or missing. There was no explanation of why the Viterbi algorithm works, and no explanation of how to decide which embedding extraction method (W1 columns, W2 rows, or the average of the two) to use. There seemed to be little or no TA support. Many people were posting questions and not receiving answers from TAs. I posted the mistakes I identified in the course content, but I don't think anyone is going to act on this. It would have been good if the last exercise were repeated in TensorFlow. It would also have been good to actually use the embeddings for something in the last exercise. From the PCA graph, the embeddings looked pretty poor.
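On the embedding-extraction point this review raises, the three options can be sketched as below. W1 and W2 here are random stand-in matrices with assumed shapes, not the course's trained weights; the point is only how each option produces one vector per vocabulary word.

```python
import numpy as np

V, N = 5, 3                      # toy vocab size and embedding dimension
rng = np.random.default_rng(0)
W1 = rng.standard_normal((N, V)) # input-to-hidden weights (one column per word)
W2 = rng.standard_normal((V, N)) # hidden-to-output weights (one row per word)

emb_from_W1 = W1.T               # option 1: use the columns of W1
emb_from_W2 = W2                 # option 2: use the rows of W2
emb_avg = (W1.T + W2) / 2        # option 3: average the two

print(emb_avg.shape)             # (V, N): one N-dimensional vector per word
```

All three end up with shape (V, N); which one works best in practice is exactly the kind of question the review says the course leaves open.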
by Manik S•
Although the content is great, the teaching is lacking relative to how Andrew teaches in the Deep Learning specialization. More interactive teaching with a pen tablet would be more engaging; the whole course feels like a recital of the slides. And the instructor's voice is a little irritating to listen to over longer durations. Otherwise, the course provides a lot of learning if you can bear it.
by Kabakov B•
It is the worst course on deeplearning.ai ever. It is too simple for those who already took the DL specialization and too difficult for newcomers. The "lectures" are too superficial and you will barely understand a thing. But the tasks are huge: a lot of spaghetti code with several levels of nested IFs, with constructions like A[i][j:k][l+1]. You will spend your time writing bad implementations of things that have been implemented 100K times, and for lack of theory this will not give you any insight into how they actually work. And nobody will teach you to use standard tools on simple, understandable examples.
by Greg D•
The lecture videos are slow and shallow, with little focus on building intuition. The same goes for the assignments: instead of relying on existing libraries (which are popular for a reason), they painfully go through implementing things in detail, which doesn't really help you in any way later on.
I 100% recommend saving your time and money (and the sanity otherwise wasted on meticulously hand-rolling things) and doing something else instead.
by François D•
Great teacher, good pace in lectures and assignments. There are of course some redundancies with respect to the previous specializations, but it's nice to feel that you understand the content a bit better every time. I didn't find the forums (internal & Slack) very useful; they could be better structured. Can't wait for the next 2 courses.
by Manzoor A•
Excellent! I know this course is just the beginning of my NLP journey, but I couldn't ask for more than this. The ungraded labs are very useful for practicing before applying the material to the assignment. I am giving 5 stars because there are only 5.
by Sohail Z•
Brilliant course!!!! I love every aspect of it. I am really grateful to the deeplearning.ai team for such amazing courses. They are easy to digest and provide sufficient math knowledge to understand the models.
by Alan K F G•
Professor Younes really makes it easier for me to follow the lectures and stay focused. The structure of the course helped me a lot: it had me constantly reviewing the same concepts as I went further, in order to learn new things.
by Saurabh K•
I had a wonderful experience. Try not to look at the hints; solve things yourself. It is an excellent course for getting in-depth knowledge of how the black boxes work. Happy learning.
by Kritika M•
This course is great. Actually the NLP specialization so far has been really good. The lectures are short and interesting and you get a good grasp on the concepts.
by Andrei N•
A great course in the very spirit of Andrew Ng's original ML course, with lots of details and explanations of fundamental approaches and techniques.
by Minh T H L•
Thanks for sharing your knowledge. I was happy throughout the course, and I also left a couple of pieces of feedback for minor improvements. All the best.
by Ajay D•
The course was very insightful about the latest enhancements in the field of NLP. The exercises were designed to be very hands-on, and I loved that. However, I felt a bit unsatisfied, as I didn't see any large dataset in action; maybe my expectation was wrong. I was also wondering if I could get to see some more applications of these language models and word embeddings in the course.
by Kravchenko D•
Nice course, but the assignments are less practical than in the first course of this specialization. The last assignment was implementing word-embedding generation with your own neural network. The whole process of writing your own neural network is nice, except that the resulting word embeddings look very bad and ungrouped on the plot, and the text in the notebook says: "You can see that woman and queen are next to each other. However, we have to be careful with the interpretation of this projected word vectors", without any explanation of what's wrong with the results. So I think the last assignment should be reworked by the reviewers to produce illustrative results at the end, not just "Word embeddings at the end are bad. Bye-bye, see you in the next course."
by John A J•
It was a good course introducing autocorrect, autocomplete, and creating your own word embeddings. However, I feel the instructor focused too much on implementation details. The concepts of why the pioneers were able to formulate these solutions, the train of thought behind the different algorithms, get lost. Although it taught me a little bit of implementation, for me the implementation is just the cherry on top, as these things can easily be googled. It would have a better impact if it also taught the concepts and thinking behind these algorithms, so that I could reuse the underlying ideas. Overall, it is a good course to get started.
by Simon P•
Simply, it's not great.
The assignments are long and complex, with insufficient checks to debug against when there's an error. The theory is poorly explained in both the videos and the labs. They clearly do not know who this course is aimed at. Is it software engineers who want to better understand NLP? In that case they may find the assignments easy but the content lacking. Is it people with a basic understanding of NLP who want to take it further? They will not get that, given that the concepts are only briefly discussed. Is it a general introduction to NLP? Then the coding aspect is pitched too high: you have to be familiar with all the little Python tricks the authors know and think the same way they do. This leads to a frustrating experience.
Hovering the cursor over the names of contributors in the discussion forums makes it clear that most of the people who start this course never finish it. This level of attrition reflects poorly on the course creators.
by Dave J•
This course gradually ramps up the sophistication and interest from the first course in the NLP specialization.
Week 1: Autocorrect and Minimum Edit Distance is OK, nothing to write home about but gives you a sense of how a basic autocorrect mechanism works and introduces dynamic programming.
Week 2: Part of Speech Tagging explains Hidden Markov Models and the Viterbi algorithm pretty well. More of a sense here of learning something that will be a useful foundation.
Week 3: Autocomplete and Language Models explains what a language model is and builds a basic N-gram language model for autocompleting a sentence. Again, good foundations.
Week 4: Word embeddings with neural networks was for me the most interesting part of the specialization so far. The amount of lecture & lab content is considerably higher than in the previous weeks (which is a good thing in my view). The pros and cons of different ways of representing words as vectors are discussed, then different ways of generating word embeddings, from research papers dating from 2013 to 2018. The rest of the week focuses on implementing the continuous bag-of-words (CBOW) model for learning word embeddings with a shallow neural network. The whole process, from data preparation to building & training the network and extracting the embeddings, is explained & implemented in Python with NumPy, which is quite satisfying.
I found that the labs and assignments worked flawlessly. They are largely paint-by-numbers, though; I would have liked to have been challenged and made to think more. The teaching is pretty good, though there's room for improvement. It tends to focus a little narrowly on the specific topic being covered and has the feel of reading a script. What I would like to see is more stepping back: thinking about and explaining the larger context of how the topic fits into current NLP and the student's learning journey, then engaging with the learner on this basis. I did feel this course was a little better than course 1 in that regard. Overall 4.5 stars, but as there are no half stars, I'm going to let Week 4 tip it up to 5.
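To give a flavor of the Week 4 content described above, here is a minimal CBOW training loop in NumPy: averaged one-hot context vectors, a ReLU hidden layer, softmax output, and manual backpropagation, with the embeddings extracted as the average of the two weight matrices. The corpus, window size, and hyperparameters are toy choices for illustration, not the course's actual assignment.

```python
import numpy as np

corpus = "i like learning nlp i like learning math".split()
vocab = sorted(set(corpus))
word2idx = {w: i for i, w in enumerate(vocab)}
V, N, C, lr = len(vocab), 4, 2, 0.05   # vocab size, emb dim, half-window, learning rate

rng = np.random.default_rng(1)
W1 = rng.standard_normal((N, V)) * 0.1
W2 = rng.standard_normal((V, N)) * 0.1
b1, b2 = np.zeros((N, 1)), np.zeros((V, 1))

def one_hot(i):
    x = np.zeros((V, 1)); x[i] = 1.0; return x

for _ in range(50):                               # a few epochs over the toy corpus
    for t in range(C, len(corpus) - C):
        context = corpus[t - C:t] + corpus[t + 1:t + C + 1]
        x = np.mean([one_hot(word2idx[w]) for w in context], axis=0)
        y = one_hot(word2idx[corpus[t]])          # predict the center word
        h = np.maximum(0, W1 @ x + b1)            # ReLU hidden layer
        z = W2 @ h + b2
        p = np.exp(z - z.max()); p /= p.sum()     # softmax
        dz = p - y                                # softmax + cross-entropy gradient
        dW2, db2 = dz @ h.T, dz
        dh = (W2.T @ dz) * (h > 0)                # backprop through ReLU
        dW1, db1 = dh @ x.T, dh
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

embeddings = (W1.T + W2) / 2                      # one N-dim vector per vocab word
print(embeddings.shape)
```

On a corpus this small the embeddings mean nothing, of course; the point is only the shape of the pipeline, from data preparation through training to extraction.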
by Yuri C•
I very much enjoyed this second course in the NLP specialization! I must say, once again the balance between mathematical formalism and hands-on coding is just on point! This is not easy to achieve. I also quite enjoyed the infographics about the word embedding model developed during the course. I have been reading blog posts and papers about the technique for some time now, and I have not seen a better explanation than the one in this course, chapeau! Nevertheless, there are also points of improvement to consider. One of my main concerns is that at the end of some assignments there is very little discussion of the validity and usefulness of what we get, even though a lot is put forward in the motivation. For example, while building the autocomplete, a lot of time was dedicated to motivating why it is useful and why one should learn it, but at the very end of the week, when we finally build one with Twitter data, there is very little analysis of the results. This is a bit frustrating. Of course, one cannot build very useful models in an assignment in a Jupyter notebook; nevertheless, I am positive that you can find a good balance here too between analyzing the model's outputs and asking whether we actually achieved the goal we set at the beginning, and if not, why not. Clearly, assignments are not research papers, but a bit more careful treatment on that end would let this course achieve its full potential. Keep up the good work!
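For readers unfamiliar with the autocomplete model discussed in this review, a bigram version can be sketched in a few lines; the corpus here is a made-up toy, not the course's Twitter data, and real N-gram models add smoothing and start/end tokens on top of this.

```python
from collections import Counter, defaultdict

corpus = "i am happy because i am learning nlp i am happy".split()

# Count bigram successors: bigrams[w1][w2] = count of (w1, w2) pairs.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def autocomplete(word):
    """Most likely next word under the raw bigram counts."""
    nxt = bigrams[word]
    return nxt.most_common(1)[0][0] if nxt else None

print(autocomplete("i"))   # → 'am'
print(autocomplete("am"))  # → 'happy'
```

The "analysis of results" the review asks for would start here: inspecting the learned counts, checking perplexity on held-out text, and seeing where raw counts fail (unseen bigrams), which is what motivates smoothing.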
by Leena P•
I enjoyed Younes's teaching style and the specialization's structure of asking quiz questions in between the lectures. The ungraded programming notebooks also give grounding and hints, while allowing the graded work to be challenging and not completely obvious. Thanks to the whole Coursera team for sharing such deep knowledge so universally and easily. Sharing this knowledge with all who seek it is, I think, what gives AI hope of staying relevant and not getting lost in hype.