Today we are going to talk about a technique known as RANSAC, Random Sample Consensus. As we saw, one of our favorite algorithms is the least squares algorithm, and we often use the singular value decomposition (SVD) to find solutions to the least squares problem; this has become a tool we use many, many times in these lessons. Before you go out and use this method, it's important to know about the noise in the data. We don't have hand-labeled correspondence points in the image; nobody hand-annotates information on the picture all the time. As the robot moves in space, it is computing features on the fly, and some of that computation will be wrong. Therefore we need a method to remove noise before we apply the solution to our problem. And this can be demonstrated with a simple least squares problem: line fitting. Line fitting is a simple form of least squares, where we have a set of (x, y) points almost on a line, and we fit a line represented by a three-parameter family, the homogeneous coordinates of the line, (e, f, g). This is written as a linear system where each row is the homogeneous coordinates [x, y, 1] of a point, and the column vector we are looking for is the line equation (e, f, g). For n points we have n rows by three columns, so that's a matrix equation A x = 0. And we know how to solve that using our favorite tool, the singular value decomposition: take the SVD of A, take the last column of V, and that gives the solution for (e, f, g). We can verify this: we pick a line in the plane, and to points on that line we add noise, so the points start deviating from the line. We compute our solution using the least squares method with SVD and plot it out. As you can see, the least squares line in blue is very close to the red line, which is the ground truth. So this is a very good method, except when we have one single outlier, a noisy point.
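The SVD line fit above can be sketched in a few lines of NumPy. This is a minimal illustration, not the lecture's actual code: the function name `fit_line_svd` and the ground-truth line y = 2x + 1 are my own choices for the example.

```python
import numpy as np

def fit_line_svd(points):
    """Fit a line e*x + f*y + g = 0 to N points, homogeneous least squares.

    Each row of A is the homogeneous coordinate [x, y, 1]; the solution
    of A x = 0 is the right singular vector with the smallest singular
    value, i.e. the last row of Vt (the last column of V).
    """
    A = np.column_stack([points, np.ones(len(points))])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]          # [e, f, g], defined up to scale

# Illustrative ground truth: y = 2x + 1, i.e. 2x - y + 1 = 0, plus noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=0.1, size=x.size)
e, f, g = fit_line_svd(np.column_stack([x, y]))

# Normalize to slope/intercept form y = -(e/f) x - (g/f) for comparison.
print(-e / f, -g / f)      # close to the ground truth 2 and 1
```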
What if one of the data points moves very far away from the line? Just one out of a million, one out of a thousand, one of the points is off. What happens when we run least squares? Bad things happen: what we obtain is a blue line that stays way off course. In fact, this is the danger of using the least squares method. It is a very powerful method that can be computed efficiently, but it is super sensitive once there is an outlier. This is because the cost function we are minimizing, in this case, is the distance between each point and the line. Even if we have the correct line position, this one single outlier has a very large error, and this large error, when squared, becomes even bigger. So your entire process of line fitting with a squared error will be very sensitive to a point which is way off course. One bad apple ruins the whole estimation. So how do we fix this? We could say least squares is a bad method. In fact, there is a simple way to fix this difficult problem: simply remove some points. Before, we said we want to fit all the points to the line; now we are saying we do not need to fit all the points to the line. We are willing to throw away some of the points. We just want to find one line that satisfies most of the points; we do not need to find the one line that satisfies all the points all the time. If a point has a huge error, beyond a certain threshold, we just ignore it. The basic principle is a simple one: in the world, there are two types of objects, one type that fits a model, and one type that does not fit the model. The objects that fit the model usually look the same, meaning all the inliers look alike, while the things that do not fit the model, the bad apples, are each different in their own unique way; they all look different. So instead of concentrating on the bad apples, we simply focus on how to group the points that look the same.
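The one-bad-apple effect is easy to reproduce. The sketch below (my own illustration, reusing the SVD fit under the assumed ground truth y = 2x + 1) moves a single point far off the line and refits; the recovered slope is pulled far away from the truth.

```python
import numpy as np

def fit_line_svd(points):
    """Homogeneous least-squares line fit (e, f, g) via SVD."""
    A = np.column_stack([points, np.ones(len(points))])
    return np.linalg.svd(A)[2][-1]

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2 * x + 1 + rng.normal(scale=0.05, size=x.size)
pts = np.column_stack([x, y])

e, f, g = fit_line_svd(pts)
slope_clean = -e / f                 # close to the true slope 2

# Corrupt a single point: its squared residual now dominates the cost.
pts_bad = pts.copy()
pts_bad[0, 1] += 100.0
e, f, g = fit_line_svd(pts_bad)
slope_outlier = -e / f               # dragged far away from 2

print(slope_clean, slope_outlier)
```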
And once we find those objects, we fit the model only to them. This procedure, known as random sample consensus, is a very powerful, very practical way of finding this group of elements. Let's see how we do Random Sample Consensus. As the name suggests, there is a step called random sampling. The question is, how many samples do I want to pick? We want to find elements that belong to the good group. If we just randomly pick one point, there is a good chance it belongs to the good group. If we take multiple points, the chance that all of them belong to the good group diminishes rapidly: if p is the probability of each point belonging to the group, then p to the power of k is the probability that all k sampled points belong to it. Because we want this probability to be high, in theory we want to pick as few samples as possible; if we pick more points, the probability decays. So the idea, for this case of line fitting, is to find the minimal number of points to sample that gives us an idea of what a good apple looks like. As we know, two points is the minimal constraint needed to fit a line. So in this case we pick two points: randomly pick one point, then another point. Locally, these look like reasonable points. The second step is to build a model: we assume these two points belong to the good model, so we fit a least squares problem to these two points and find the line crossing them. And as we saw, the line we found here is, in fact, an incorrect one. Now we have the third step, which is simply to measure how well this model conforms to the points around it. It is a checking procedure; there is no need to run another least squares. It simply asks: here is the line, and for all the other points, do they conform to this line or not? In our case we simply measure the distance of each point to this line, and with that distance measure we can threshold it.
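The verification step is just a point-to-line distance plus a threshold. Here is a small sketch of that check; the threshold value sigma = 0.5 and the candidate line 2x − y + 1 = 0 are assumptions for illustration.

```python
import numpy as np

def point_line_distance(points, line):
    """Perpendicular distance from each (x, y) point to e*x + f*y + g = 0."""
    e, f, g = line
    return np.abs(points @ np.array([e, f]) + g) / np.hypot(e, f)

# Candidate line y = 2x + 1, written as 2x - y + 1 = 0.
line = np.array([2.0, -1.0, 1.0])
pts = np.array([[0.0, 1.1],    # near the line
                [1.0, 3.0],    # exactly on the line
                [2.0, 9.0]])   # far off the line

sigma = 0.5                            # assumed inlier threshold
inliers = point_line_distance(pts, line) < sigma
print(inliers.sum())                   # 2
```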
I have a threshold sigma which tells me: if the distance is within sigma, then it's a good fit. So in this case we count; there are seven points in the data set that fit within the threshold. Then, as the name suggests, we random-sample again. Now we have two new points, and we build the model again from these two points. The line looks reasonable but again slightly off, and the way we can tell is simply by checking the distance of all the points to this line, thresholding it, and counting how many points are within the threshold. Now I have an inlier count of 10. We repeat this process many, many times. Every time, we hope that the two points we picked belong to the good group; not only belong to the good group, but belong to the central part of the good group, giving me a good model that fits most of the other points. If we repeat this process a sufficient number of times, eventually we will get lucky. In this case we have a line estimated from the two points we sampled, and then we count inliers again by computing the distance of every point to this line and checking whether those distances are within sigma. Now we have 23 inliers. We repeat this process many, many times, stop at a certain point, and check which two points gave me the best estimate of a line that everybody agrees on. So, again, the principle is that if we can sample points that all belong to the good group, that model can easily be checked, and the checking of a good versus a bad model is done simply by voting, majority voting: a good model will receive more votes, more points agreeing with it, than a bad model. In the sampling step, again, we want the minimal number of points needed to generate the model; this is the hypothesis step, based on sampling. Then comes the verification step, checking the fit to this model followed by a threshold. So then, the question is: how many times do I need to sample?
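The whole hypothesize-and-verify loop can be sketched as follows. This is a minimal illustration, assuming NumPy and an invented test scene (a ground-truth line y = 2x + 1 with 40 noisy inliers and 20 random clutter points); the function names are my own.

```python
import numpy as np

def point_line_distance(points, line):
    """Perpendicular distance from each (x, y) point to e*x + f*y + g = 0."""
    e, f, g = line
    return np.abs(points @ np.array([e, f]) + g) / np.hypot(e, f)

def line_through(p, q):
    """Line e*x + f*y + g = 0 through two points: the cross product
    of their homogeneous coordinates."""
    return np.cross(np.append(p, 1.0), np.append(q, 1.0))

def ransac_line(points, sigma, iterations, rng=None):
    """Hypothesize (sample 2 points, build a line) and verify (count
    points within sigma); keep the line with the most votes."""
    if rng is None:
        rng = np.random.default_rng()
    best_line, best_count = None, -1
    for _ in range(iterations):
        i, j = rng.choice(len(points), size=2, replace=False)
        line = line_through(points[i], points[j])
        count = int((point_line_distance(points, line) < sigma).sum())
        if count > best_count:
            best_line, best_count = line, count
    return best_line, best_count

# Assumed scene: 40 inliers on y = 2x + 1 plus 20 clutter points.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 40)
inlier_pts = np.column_stack([x, 2 * x + 1 + rng.normal(scale=0.1, size=x.size)])
clutter = rng.uniform([0, 0], [10, 25], size=(20, 2))
pts = np.vstack([inlier_pts, clutter])

line, votes = ransac_line(pts, sigma=0.5, iterations=200, rng=rng)
e, f, g = line
print(-e / f, -g / f, votes)   # slope/intercept near 2 and 1; ~40 votes
```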
Do I need to sample a hundred times, a thousand times? Well, it clearly depends on how many points I have, but more importantly it depends on the probability of an inlier. That is, if I have a hundred points, how many of those hundred points can we assume are good points? We can use an estimate computed from our feature correspondence algorithm; we can verify experimentally how many of those points are correct. Call this probability w: the number of correct points that fit the model, divided by the number of sampled points. Then w to the power of n is the probability that all n points in one sample belong to the good model. We want n to be the minimal sample size; in this case two points define the model, so n is 2, and w squared is the probability that both points I pick belong to the good model. Now look at the quantity 1 - w^n. Raising it to the power k tells me the probability that, having drawn those n points k times, I got unlucky every time and none of the samples belonged entirely to the model: 1 - w^n is the probability of not being lucky once, and (1 - w^n)^k is not being lucky k times. So that is the probability that, if I pick two points k times, I fail to find the model. One minus that is the probability of being lucky at least once, and all we need is to be lucky once to find the correct model. So setting 1 - (1 - w^n)^k = p, the number of random samples k we really need is k = log(1 - p) / log(1 - w^n). We can use this empirically: given the probability of an outlier, or equivalently of an inlier, and given the probability p of successfully finding at least one correct sample that we want to entertain, we can deduce the number of iterations we need to run.
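The iteration count above is a one-line formula. A small sketch (the function name and the 99% default for p are my own choices):

```python
import math

def ransac_iterations(w, n, p=0.99):
    """Number of samples k so that, with probability p, at least one of
    the k minimal samples (n points each) is all-inlier.

    w : inlier ratio (probability that a single point is an inlier)
    n : minimal sample size (2 for a line)
    p : desired probability of drawing at least one clean sample
    """
    # P(one sample is all inliers)      = w**n
    # P(all k samples are contaminated) = (1 - w**n)**k
    # Require 1 - (1 - w**n)**k >= p and solve for k.
    return math.ceil(math.log(1 - p) / math.log(1 - w**n))

# With half the points inliers, a 2-point model needs few samples;
# a lower inlier ratio drives k up quickly.
print(ransac_iterations(0.5, 2))   # 17
print(ransac_iterations(0.2, 2))   # 113
```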