
Learner Reviews & Feedback for Natural Language Processing with Sequence Models by DeepLearning.AI

4.5 stars (1,094 ratings)

About the Course

In Course 3 of the Natural Language Processing Specialization, you will: a) Train a neural network with GloVe word embeddings to perform sentiment analysis of tweets, b) Generate synthetic Shakespeare text using a Gated Recurrent Unit (GRU) language model, c) Train a recurrent neural network to perform named entity recognition (NER) using LSTMs with linear layers, and d) Use so-called ‘Siamese’ LSTM models to compare questions in a corpus and identify those that are worded differently but have the same meaning. By the end of this Specialization, you will have designed NLP applications that perform question-answering and sentiment analysis, created tools to translate languages and summarize text, and even built a chatbot! This Specialization is designed and taught by two experts in NLP, machine learning, and deep learning. Younes Bensouda Mourri is an Instructor of AI at Stanford University who also helped build the Deep Learning Specialization. Łukasz Kaiser is a Staff Research Scientist at Google Brain and a co-author of TensorFlow, the Tensor2Tensor and Trax libraries, and the Transformer paper.
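To make the course description concrete, here is a minimal sketch of what the ‘Siamese’ LSTM in objective (d) might look like in Trax, the framework the assignments use. This is illustrative only; the vocabulary size, dimensions, and layer choices are assumptions, not the course's actual notebook code.

```python
from trax import layers as tl
from trax.fastmath import numpy as fastnp

def siamese_model(vocab_size=10000, d_model=128):
    """One branch: token ids -> embedding -> LSTM -> mean pool -> unit vector."""
    def normalize(x):
        # L2-normalize so the dot product of two outputs is a cosine similarity.
        return x / fastnp.sqrt(fastnp.sum(x * x, axis=-1, keepdims=True))

    branch = tl.Serial(
        tl.Embedding(vocab_size=vocab_size, d_feature=d_model),
        tl.LSTM(n_units=d_model),
        tl.Mean(axis=1),                # average over the time dimension
        tl.Fn('Normalize', normalize),  # unit-length question vector
    )
    # Passing the same branch object twice shares its weights across
    # both inputs, which is what makes the network "Siamese".
    return tl.Parallel(branch, branch)
```

Two questions fed through the model come out as unit vectors whose dot product is high when the questions mean the same thing.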

Top reviews

SA

Sep 27, 2020

Overall it was a great course, if a little weak on theory; I think it was sufficient for practical purposes. The question-duplication detection model was very cool. I enjoyed it a lot.

AB

Nov 11, 2021

This is the third course of the NLP Specialization. It was a great course and the instructors were amazing. I really learned and understood everything they taught, like LSTMs, GRUs, Siamese networks, etc.


Reviews for Natural Language Processing with Sequence Models (showing 1–25 of 230)

By Oleh S

Aug 5, 2020

Well, a very weak and oversimplified course. All videos are indecently short (mostly 1 to 4 minutes) and do not give any intuition about or understanding of sequence models. GRUs and LSTMs are explained too briefly. Choosing the Trax framework was a weird decision; it offers no reasonable advantages over Keras in this course. Also, I do not think that implementing the data generator function in the programming assignments gives anyone better intuition for the core material. The last two assignments can be completed even without watching the lecture videos. To sum up, this course can be valuable as a short intro to recurrent networks, but do not expect to deepen your knowledge.

Seriously, this is the weakest of the first three courses: quickly prepared and lacking in quality. Totally disappointed.

By Tomasz S

Jul 30, 2020

You can't learn anything from 3-minute videos, especially if 1 minute is wasted on repeating the previous video and previewing what is going to be said, and the last minute is wasted on recapping what was said... That works for proper Coursera lectures that last 15 minutes, with a few hours of material per week. And it's really low to list someone as a lecturer when all he does is appear for 10 seconds to say "today I will teach/explain/show you" while never actually doing any teaching, explaining, or showing - everything is left to the assistant...

By Roberto C

Aug 25, 2020

Very bad class. A week of material can be done in around 2 hours, and the exercises are uninteresting: you just write down what the comments say (like "add 1 to the index"). Also, every exercise is similar to the others: write the data generator, initialize the model using Trax, train the model, test the model...

I think it is a lost opportunity. The majority of the course is just familiarizing yourself with the Trax API and blindly applying neural network architectures through it. The videos are very poor: not much information is given, and they repeat themselves a lot.

Instead of taking this course, it is better to take the original Sequence Models course from deeplearning.ai.
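For context, the weekly pattern this reviewer describes (data generator, model, train, test) looks roughly like the following in Trax. This is a hedged sketch with made-up data and hyperparameters, not the actual assignment code.

```python
import numpy as np
import trax
from trax import layers as tl
from trax.supervised import training

# Dummy data standing in for tokenized tweets: 64 padded sequences of token ids.
train_x = np.random.randint(0, 10000, size=(64, 20))
train_y = np.random.randint(0, 2, size=(64,))  # binary sentiment labels

# Step 1: the data generator, reimplemented every week in the assignments.
def data_generator(batch_size, x, y):
    while True:  # loop forever; the training loop draws batches as needed
        for i in range(0, len(x) - batch_size + 1, batch_size):
            yield x[i:i + batch_size], y[i:i + batch_size]

# Step 2: initialize the model using Trax combinators.
model = tl.Serial(
    tl.Embedding(vocab_size=10000, d_feature=256),
    tl.LSTM(n_units=256),
    tl.Mean(axis=1),  # pool over the time dimension
    tl.Dense(2),
    tl.LogSoftmax(),
)

# Steps 3 and 4: train the model, then evaluate it.
train_task = training.TrainTask(
    labeled_data=data_generator(16, train_x, train_y),
    loss_layer=tl.CrossEntropyLoss(),
    optimizer=trax.optimizers.Adam(0.01),
)
eval_task = training.EvalTask(
    labeled_data=data_generator(16, train_x, train_y),
    metrics=[tl.CrossEntropyLoss(), tl.Accuracy()],
)
loop = training.Loop(model, train_task, eval_tasks=[eval_task])
loop.run(n_steps=100)
```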

By Han-Chung L

Aug 14, 2020

Programming assignments are not well constructed; you need to restart notebooks to get the exact output expected. Not all expected outputs are printed out, and not all cells have test functions like those in Course 1. Some of the concepts are glossed over too quickly in lecture.

By Jingying W

Aug 2, 2020

I have previously completed the Deep Learning and AI for Medicine specializations from deeplearning.ai, and here are some of my thoughts about this course:

1. I like that the Python tutorials and assignments help me learn the state-of-the-art DL framework Trax and become more familiar with the working mechanism under the hood. If you read through the Python scripts carefully and look up the linked documentation, they are really nice study resources.

2. This course improved my understanding of some models that I learned in other specialization courses, such as the Siamese model (e.g., hard negatives) for natural language.

3. Although the slides are fancier than before, I'm not sure I enjoy the way the material is explained. It's like reading from a script and not really talking TO the students. Maybe some direct (live) notes on the slides would help students actually "dive in" ;).

4. I watched the live discussion on YouTube on July 29 and felt the lecturers talked very naturally there, but in the teaching videos they behave in quite an unnatural way.

5. What I love about Andrew's DL specialization is that he also shares his insights in a sincere (and personal) way, but in this course, it's just tooooo official.

Anyway, just my personal thoughts. I learned a lot in this course and hope the next course will be better ;)

By Adam S A

Aug 16, 2020

I think the assignments should have gone deeper. In my opinion, we should have built an LSTM instead of just creating a model in Trax. I can always learn an API, but it would be nice to learn the nitty-gritty of the model during the course. I would also prefer longer lecture videos that go into more detail.
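For readers wondering what "building an LSTM" instead of calling tl.LSTM would involve, a from-scratch cell is only a few lines of NumPy. This is a generic textbook LSTM step, not anything from the course:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, W, b):
    """One LSTM time step.
    x_t: (d,) input; h_prev, c_prev: (h,) previous hidden/cell state;
    W: (4h, d+h) stacked gate weights; b: (4h,) stacked gate biases."""
    h = len(h_prev)
    z = W @ np.concatenate([x_t, h_prev]) + b
    f = sigmoid(z[0:h])         # forget gate: how much old cell state to keep
    i = sigmoid(z[h:2*h])       # input gate: how much new candidate to add
    g = np.tanh(z[2*h:3*h])     # candidate cell state
    o = sigmoid(z[3*h:4*h])     # output gate
    c_new = f * c_prev + i * g  # additive cell-state update
    h_new = o * np.tanh(c_new)  # new hidden state
    return h_new, c_new
```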

By Brian B

Sep 20, 2020

Programming notebooks contain a lot of errors, and there is poor writing in the explanations (in text cells and in comments in the code cells). The use of Trax instead of TensorFlow or PyTorch also reduces the usefulness of this course for picking up experience with the frameworks I am most likely to use.

By Prashant G

Aug 16, 2020

This course is very mechanical; I expected a more reasoning-based course that incites logical thinking. Since most of the topics covered are an active area of research, a discussion from a "why or why not" point of view would have been more beneficial than just telling you how to use a certain library, like any other blog on the internet.

By Marcin Z

Aug 17, 2020

Course materials and lectures are fine, but the exercises are boring - you have to implement data loaders every week. Moreover, the course uses Trax, as if there were no other popular deep learning frameworks... so you are forced to learn yet another syntax. PyTorch FTW!

By Zhaohua F

Aug 31, 2020

I think it would be better to use TensorFlow 2.x or PyTorch instead of Trax, which is seldom used elsewhere. Also, a mathematical derivation of why the LSTM is better than a simple RNN should be included in the videos.
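For what it's worth, the derivation this reviewer asks for fits in a few lines; here is a sketch of the standard argument (notation is mine, not the course's):

```latex
% Simple RNN: h_t = \tanh(W_{hh} h_{t-1} + W_{xh} x_t + b), so
\frac{\partial h_t}{\partial h_{t-k}}
  = \prod_{j=t-k+1}^{t} \operatorname{diag}\big(\tanh'(z_j)\big)\, W_{hh}
% A product of k Jacobians shrinks or grows geometrically in k,
% so gradients vanish or explode over long sequences.

% LSTM: c_t = f_t \odot c_{t-1} + i_t \odot g_t, so
\frac{\partial c_t}{\partial c_{t-1}} \approx \operatorname{diag}(f_t)
% The cell state has an additive path whose forget gate f_t the
% network can hold near 1, letting gradients survive many steps.
```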

By Moritz F

Aug 12, 2020

Videos are annoyingly short and provide little depth. Assignments are basically just typing/copy-and-paste exercises. The whole NLP specialization again starts with the absolute basics of Python and ML - I wouldn't mind this if there weren't already enough foundational courses available on Coursera. If I enroll in an NLP specialization, I don't expect/want to do a Python course. Almost everything shown here has already been covered in the Deep Learning Specialization.

By Kabakov B

Sep 8, 2020

Do not waste your time on this course unless you just want a fancy certificate. It has the same problems as all previous courses in this specialization -- the theory is very superficial, and the programming tasks are awful and divorced from real ones. Take the Sequence Models course from the Deep Learning Specialization instead; they even put links to it in this course for optional deeper study.

By Rabin A

Aug 10, 2020

The course is fine, but if you've taken the Sequence Models course by deeplearning.ai before, this won't add much to your knowledge except the Siamese network. The main con of this course is the use of Trax instead of Keras or TensorFlow. I am not opposed to PyTorch, but since deeplearning.ai has TensorFlow courses, it would have been easier for many students to grasp the material instead of learning a new framework again.

By Mansi A

Sep 18, 2020

The course is oversimplified and provides very little deep knowledge of the techniques and networks used in NLP. It is a good course to get an overview, but if you want deeper knowledge, you'll have to invest time yourself. Also, the use of the Trax library offered no advantage; Keras or TensorFlow should have been used instead.

By Ala T

Aug 6, 2020

Personally, I'd prefer TensorFlow over Trax. I'm a bit lost between different tools, since different deeplearning.ai specializations use different ones. Other than that, I think it was quite a good short course.

By Kweweli T

Sep 25, 2020

The lectures are well planned--very short and to the point. The labs offer immense opportunity for practice, and assignment notebooks are well-written! Overall, the course is fantastic!

By Sreejith S

Aug 5, 2020

The assignments use the Trax library, and I found it a bit difficult to understand and implement. It would have been much better if they had used TensorFlow 2.x.

By Julian D

Sep 1, 2021

Compared to the earlier courses, I found rewriting the generator in each assignment a little repetitive. In contrast, I had to understand little of the TripletLoss to complete the task, as the instructions were very specific. Either don't make the loss function this intricate, provide it outright, or let the learners struggle a little; as it stands, it's a bit scripted instead of figuring things out. Overall, I love the series and would encourage extending it to include some background knowledge. Like 10% more math :-).
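For readers who haven't reached that assignment: the TripletLoss in question trains the Siamese network so that duplicate questions score higher than non-duplicates, using both a "mean negative" and a "closest negative" term. A simplified NumPy sketch of the idea (variable names and the simplifications are mine, not the notebook's):

```python
import numpy as np

def triplet_loss(v1, v2, margin=0.25):
    """v1, v2: (batch, d) L2-normalized question vectors; row i of v1
    and row i of v2 are duplicates, every other pairing is a negative."""
    scores = v1 @ v2.T                          # cosine similarity matrix
    batch = len(scores)
    positive = np.diag(scores)                  # sim(question_i, duplicate_i)
    negatives = scores * (1.0 - np.eye(batch))  # zero out the positives
    # "mean negative": average similarity to all non-duplicates in the batch.
    mean_neg = negatives.sum(axis=1) / (batch - 1)
    # "closest negative": the hardest non-duplicate (diagonal pushed below -1
    # so it can never be selected as the maximum).
    closest_neg = (scores - 2.0 * np.eye(batch)).max(axis=1)
    loss_1 = np.maximum(0.0, margin - positive + mean_neg)
    loss_2 = np.maximum(0.0, margin - positive + closest_neg)
    return (loss_1 + loss_2).mean()
```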

By Li Z

Sep 16, 2020

The LSTM explanation is not very clear; I had to revisit some external links. The coding exercises are frustrating: even when they run properly step by step, I got many glitches when submitting them. The time spent fixing submission issues was longer than taking the lessons.

By Paul J L I

Oct 23, 2020

There were a lot of strange errors: issues with the model, weird language about "elementwise" vector addition (all vector addition is elementwise), and quizzes with incorrect language.

By Arun C

Aug 5, 2020

The use of RNNs in Trax is a bit abstract and should be explained further. For example, the time sequence is not clearly visible when training the model in Trax.

By Ahmad O

Sep 12, 2020

In the assignments, the grader doesn't return feedback until the last question, which doesn't help me debug my code!

By Corine M

Sep 6, 2020

The exercises were very easy and did not really build an understanding of NLP.

By Rutvij W

Sep 16, 2020

The course could be more in-depth.

By Vincent R

Jan 12, 2022

Very superficial information about different types of neural networks and their uses. The use of Trax makes it nearly impossible to Google anything helpful - a lot of the assignments just tell you to read the documentation. To finish the assignments, you can basically copy-paste the code you're given to set up Trax neural networks and generators without having any idea what you're doing (because, again, the content doesn't go over it).

For example, Week 1 introduces you to Trax (which can't be run on Windows), covers aspects of object-oriented programming, and talks at a very high level about how to do things in Trax before moving on to a cursory discussion of generators. Then the assignment has you increment counters, set limits in for loops, copy negative-sentiment code from positive-sentiment code that's already completed, and fill in some code that's basically given right above where you write it.

Overall, any code you write is usually very simple but easy to get wrong because of the lack of direction, e.g. model(x,y) vs model((x,y)). The discussion boards are invaluable, because the mistake might have been three functions earlier and the built-in tests didn't catch it. A good effort was clearly put in all around to establish this course, but it feels like a first draft that was never updated.
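The model(x,y) vs model((x,y)) trap this reviewer mentions is worth spelling out: a Trax layer takes all of its data inputs as its first argument, so a two-input layer needs one tuple, and a stray second positional argument silently lands in the weights slot. A small illustration, using tl.Concatenate only as a convenient two-input layer:

```python
import numpy as np
from trax import layers as tl

concat = tl.Concatenate()  # a layer that expects two inputs
x = np.ones((2, 3))
y = np.zeros((2, 3))

out = concat((x, y))  # correct: both inputs packed into one tuple
print(out.shape)      # (2, 6)

# concat(x, y)        # wrong: `y` is interpreted as the layer's weights,
#                     # which fails far from the actual mistake.
```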