Let's look at another one. This one came from Texas A&M, probably 5 or 10 years ago, so it's much more recent. It's a nice application: it doesn't change the problem we're solving, it formulates the estimation problem in a different way. In fact, it uses the Cayley transform, the Cayley theorem we've seen, where we can take a rotation matrix and write it as identity plus the skew-symmetric matrix of the CRPs, inverted, times identity minus that same skew-symmetric matrix: [BN] = (I + [q~])^(-1) (I - [q~]). That was the Cayley theorem we had. There are different ways to arrange this, but here's the cool insight on how you can use it; I'm just trying to show you different methods.

We know fundamentally we're looking to map inertial observations into body-relative measurement vectors: v_B = [BN] v_N. So we're finding this [BN] matrix. Instead of parameterizing [BN] in terms of quaternions, as Davenport does with his q-method, we simply use the Cayley theorem. Plug it in and you've got a matrix, inverted, times another matrix. That inverse we simply take over to the left-hand side: (I + [q~]) now appears in front of v_B, and (I - [q~]) stays multiplying v_N.

You go, okay, where are we going with this? Well, start carrying it out: identity times v_B on the left, identity times v_N on the right. Bring the identity terms together on the left-hand side; that gives you v_B minus v_N, because identity times a set gives you back the set again. That leaves [q~] times v_B on the left and minus [q~] times v_N on the right. Bring the [q~] v_B term over to the right-hand side so you can factor out minus [q~] times the sum of the two sets: v_B - v_N = -[q~](v_B + v_N).

Now remember, first lecture, we talked about vectors and how to decompose them. Here, all of a sudden, we're adding B-frame components to N-frame components. But we're treating this as purely matrix math. We're not saying this is somehow geometrically meaningful; it is purely matrix math, and that's why this shouldn't raise any warnings here. We're not trying to represent a vectorial addition. In fact, it's the same vector, just expressed in different frames. That's why you can do this.

And it's nice, because now you can say the s's are the sums of the observed and the known inertial heading quantities, s = v_B + v_N, and the d's are the differences, d = v_B - v_N. For every observation, I can form the sum and the difference. And since -[q~] s is the same as [s~] q, I can rewrite the equation as d = [s~] q: the part I care about, which is again the CRP set, like with the QUEST method, is linearly related to the [s~] matrices, which depend on the measurements and the known quantities, and to the d's. So with this Cayley transform, I have converted the attitude estimation problem into a rigorously linear problem. And that's pretty cool.

Now, can you invert this [s~] matrix and find the attitude directly? Are tilde matrices invertible? No, right? The zeros on the diagonal are a hint, but the real reason is that a 3 by 3 skew-symmetric matrix always has determinant zero; in fact [s~] s = 0, so its null space is the s direction itself. That's never going to be full rank. So no, with one observation we shouldn't be able to do this. If we could, I would be able to get the full attitude from a single measurement, but we know, as we mentioned earlier, we have to have two; one is not enough. That's mathematically manifested here. So one measurement is not enough.
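To make the single-observation relation concrete, here's a minimal Python sketch. The function names (`skew`, `cayley_dcm`) and the numerical values are my own illustrative choices, not something from the lecture: it builds [BN] from a CRP set with the Cayley transform, forms the sum and difference vectors for one heading, and checks both the linear relation d = [s~] q and the rank deficiency we just discussed.

```python
import numpy as np

def skew(v):
    """Skew-symmetric (tilde) matrix: skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def cayley_dcm(q):
    """Cayley transform of a CRP set q: [BN] = (I + [q~])^(-1) (I - [q~])."""
    I = np.eye(3)
    return np.linalg.inv(I + skew(q)) @ (I - skew(q))

q_true = np.array([0.1, -0.2, 0.3])   # made-up CRP set for illustration
BN = cayley_dcm(q_true)

v_N = np.array([1.0, 0.0, 0.0])       # known inertial heading
v_B = BN @ v_N                        # what the sensor would see (noise-free)

s = v_B + v_N                         # sum of the two descriptions
d = v_B - v_N                         # difference of the two descriptions

# The rigorously linear measurement equation: d = [s~] q
print(np.allclose(d, skew(s) @ q_true))   # True
print(np.linalg.matrix_rank(skew(s)))     # 2: one observation can't be inverted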
What you do is stack these measurements up: per measurement you get a 3 by 1 d and a 3 by 1 s, whose tilde matrix is 3 by 3, and they all multiply the same unknown CRP set in the end. Stacked, this becomes a 3n by 3 matrix times the 3 by 1 set of CRPs, equal to a 3n by 1 stack of d's; those are kind of like your observations in that sense.

This is a linear problem, y equals A x, so you could stack it up and do a classic least squares fit. In this case, though, we want to do a weighted least squares fit, where we can add a weight to each observation: q = (S^T W S)^(-1) S^T W d. And as before, the absolute value of the weights doesn't matter; if one is 1 and another is 10, it's the relative magnitude that counts, not the absolute magnitude. This is the classic matrix-math form of taking the linear system you have here and solving a weighted least squares inverse problem; if you haven't seen it, you can go look it up on the wiki, and there's a small sketch of it below.

So that's OLAE, the optimal linear attitude estimator. The name sounds kind of Spanish, OLAE, olé, flamenco, something like that, but it came from Texas, so who knows? It's a very neat algorithm that uses the Cayley transform to map the problem this way. And it has a lot of applications, again in computer vision, tracking fiducials, all that kind of stuff.
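To round this off, here's a rough sketch of that stacked, weighted solve, continuing the Python above (it reuses `skew` and `cayley_dcm` from the previous block); the headings and weight values are again made up for illustration, not taken from the OLAE paper.

```python
def olae(v_B_list, v_N_list, weights):
    """Illustrative OLAE-style solve: stack the per-observation sums and
    differences, then do a weighted least squares fit for the CRP vector q."""
    S = np.vstack([skew(vB + vN) for vB, vN in zip(v_B_list, v_N_list)])  # 3n x 3
    d = np.hstack([vB - vN for vB, vN in zip(v_B_list, v_N_list)])        # 3n x 1
    W = np.diag(np.repeat(weights, 3))   # one scalar weight per observation
    # Weighted least squares normal equations: q = (S^T W S)^(-1) S^T W d
    return np.linalg.solve(S.T @ W @ S, S.T @ W @ d)

# Two noise-free observations, the minimum needed for full attitude:
q_true = np.array([0.1, -0.2, 0.3])
BN = cayley_dcm(q_true)
v_N_list = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])]
v_B_list = [BN @ v for v in v_N_list]
print(olae(v_B_list, v_N_list, weights=[1.0, 10.0]))  # recovers q_true
```

With noise-free measurements the weights drop out and q_true comes back exactly; with noisy data, scaling all the weights by a common factor leaves the answer unchanged, which is the relative-versus-absolute point from the lecture.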