In the previous video we discussed one way to build a Markov chain that converges to a desired distribution, namely Gibbs sampling. And we discussed that it has some downsides: it produces highly correlated samples, and it converges somewhat slowly to the distribution. Both of these downsides come from one property, namely that Gibbs takes kind of local, small steps in the sample space. In this video, we're going to discuss another scheme for building Markov chains, which instead of producing just one predefined Markov chain will give you a whole family of Markov chains. And you can choose the one that has the desired properties: it converges faster, or maybe it produces less correlated samples. This family of techniques is called Metropolis-Hastings, and the idea is to apply the rejection sampling idea to Markov chains. So let's start with some Markov chain Q which maybe doesn't have anything to do with the desired distribution π. But on the bright side, this Markov chain Q kind of roams freely through the sample space: it takes very large steps, and thus it can converge faster and it produces almost uncorrelated samples. It's a nice Markov chain, but it doesn't produce what we want, the samples from the distribution π. And now let's introduce a critic. A critic is something that says: wait a minute, you can't go there right now, so please stay where you are and wait until you generate some reasonable direction to go. The critic makes sure that the Markov chain we have doesn't go into regions where the probability of our desired distribution π is low. Well, at least it doesn't go there too often. So we will accept the proposal, the sample generated from the transition probabilities of the Markov chain Q, with the probability given by the critic; otherwise we will reject it and stay where we were at the previous iteration.
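The propose-then-censor loop described above can be sketched in a few lines of Python. Here `propose` stands for one step of the free-roaming chain Q, and `accept_prob` for the critic A; both are placeholder callables made up for illustration, since the actual form of the critic is only derived later in the lecture.

```python
import random

def critic_chain_step(x, propose, accept_prob):
    """One step of the propose-then-censor scheme described above.

    propose(x)        -- samples a candidate x' from the free chain Q
    accept_prob(x, y) -- the critic A(x -> y), a number in [0, 1]
    Both are placeholders; the real critic is derived later.
    """
    x_new = propose(x)
    if random.random() < accept_prob(x, x_new):
        return x_new   # the critic lets the proposal through
    return x           # rejected: stay where we were

# Toy usage: a Gaussian random-walk proposal and a critic that
# always accepts (so this degenerates to the free chain Q itself).
x = 0.0
for _ in range(5):
    x = critic_chain_step(x,
                          propose=lambda y: y + random.gauss(0.0, 1.0),
                          accept_prob=lambda a, b: 1.0)
```

With a critic that always returns 1 the chain is just Q; with a critic that always returns 0 it never moves, which is how the two extremes of the scheme behave.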
So, this is the general idea, and in the following part we will discuss how to build such a critic for a given Markov chain Q and a desired distribution π. First, let's discuss what this overall scheme looks like in terms of a Markov chain. We had some Markov chain Q, and we changed its procedure for generating points. So now the overall scheme generates its points like this: it transitions from the point x to x' with the following probability. First of all, we need to generate this x' from the proposal distribution Q, from the first Markov chain, and then we have to accept this point, which happens with the probability given by the critic, A(x -> x'). This is true for all points that are not equal, so for x not equal to x'. And if they are equal, we have something a little bit more complicated, because we could have proposed to move to x itself and then accepted that proposal, or we could have proposed to move anywhere else and then rejected that proposal, which happens with probability one minus the acceptance probability given by the critic. So, this is the transition probability of the overall Markov chain T, defined by the process where we propose some points and then accept them with some probability. And now we want our Markov chain to generate points from the desired distribution. So, we want the distribution π to be stationary for this Markov chain T. And the idea here is to leave the choice of the underlying Markov chain Q alone, to give this freedom to the user of the method, and to try to choose A such that this equation holds. So we have as input the Markov chain Q and the desired distribution π, and we want to choose the critic such that the distribution π is indeed stationary for the final Markov chain T. How can we do it? Well, it's not easy, because this equation is kind of complicated: it has a summation inside, and this complicated definition of the transition probability T.
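On a finite state space, the transition probability just described can be assembled explicitly: T(x -> x') = Q(x -> x') A(x -> x') for x != x', with all the rejected probability mass going back onto the diagonal. A minimal sketch, using a made-up 3-state proposal and critic purely to check that T comes out a valid transition matrix:

```python
import numpy as np

def build_transition_matrix(Q, A):
    """T for the propose-then-accept chain on a finite state space.

    Q[i, j] -- proposal probability of suggesting j from state i
               (each row of Q sums to 1)
    A[i, j] -- the critic's acceptance probability for the move i -> j
    """
    T = Q * A  # accepted moves, T[i, j] = Q[i, j] * A[i, j] off-diagonal
    # Every rejected proposal leaves the chain where it was, so that
    # probability mass is added back onto the diagonal.
    T = T + np.diag((Q * (1.0 - A)).sum(axis=1))
    return T

# Toy example: a uniform proposal and an arbitrary symmetric critic,
# both invented for illustration only.
Q = np.full((3, 3), 1.0 / 3.0)
A = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.8],
              [0.2, 0.8, 1.0]])
T = build_transition_matrix(Q, A)
assert np.allclose(T.sum(axis=1), 1.0)   # every row of T sums to one
```

Note that with this bookkeeping a rejected self-proposal also keeps the chain in place, which is why the rows always sum to one regardless of the critic.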
So, to solve this problem we'll have to introduce one more concept, which is called the detailed balance equation. Recall the definition of the stationary distribution; it looks like this. The detailed balance equation is the following equation, which gives you a sufficient condition for a stationary distribution. So, if detailed balance holds, then the distribution π is necessarily a stationary one for the transition probability T. And what does the detailed balance equation tell us? Well, it says that if we started from the distribution π and made one step, one timestep of our Markov chain T, then the probability that we started from x and moved to x' is the same as doing the reverse: the same as starting from x' and moving to x. And if that's true, then the probability distribution π is stationary for T. It's kind of easy to prove, so let's just write down the right-hand side of the definition of the stationary distribution and then, inside that definition, substitute π(x) times the transition probability using the detailed balance equation. Thus we get a summation over x of this expression, and π(x') doesn't depend on x at all, so we can move π(x') outside the summation to get something like this. And finally, the summation of T(x' -> x) with respect to x is just one, because this is a probability distribution, and if we sum over all possible outcomes, it equals one. So, we have proven that if detailed balance holds, then the distribution is necessarily a stationary one. Returning to our problem: we have a Markov chain Q, which doesn't produce the samples we want, and we have a distribution π which we want to sample from. Now we want to define a critic such that the Markov chain created with this critic, the Markov chain T, will indeed produce samples from the distribution π; that is, such that the distribution π is stationary.
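The proof above is easy to check numerically. The sketch below builds a small 3-state chain that satisfies detailed balance with respect to a chosen π by construction (the rule T[i, j] = π[j] / 2 for i != j is made up purely for this check; any reversible chain would do), and then verifies that π really is stationary:

```python
import numpy as np

# Target distribution and a chain built to satisfy detailed balance
# with respect to it: T[i, j] = pi[j] / 2 for i != j, with the
# leftover mass on the diagonal (an invented reversible construction,
# used only to verify the claim numerically).
pi = np.array([0.5, 0.3, 0.2])
T = np.tile(pi / 2.0, (3, 1))
np.fill_diagonal(T, 0.0)
np.fill_diagonal(T, 1.0 - T.sum(axis=1))   # rows must sum to 1

# Detailed balance: pi(x) T(x -> x') == pi(x') T(x' -> x) for all pairs.
balance = pi[:, None] * T
assert np.allclose(balance, balance.T)

# As the proof predicts, detailed balance implies stationarity: pi T = pi.
assert np.allclose(pi @ T, pi)
```

The first assertion checks every pair (x, x') at once, since detailed balance is exactly the statement that the matrix π(x) T(x -> x') is symmetric.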
And since this stationarity property is hard to check directly, we're instead going to check that the detailed balance equation holds for the distribution π. So, in the next video, we're going to use the blackboard to find a critic such that this detailed balance equation holds for the given Markov chain Q and the distribution π.