So, we've shown the connection between independencies and the factorization of the distribution in the context of Bayesian networks. Now, we're going to show that the same kind of connection also holds in the case of Markov networks. So, first of all, we need to come up with a similar notion of what kind of independencies are encoded by the structure of the graphical model. In this case, we're going to have a notion of separation, which is the analogous notion to d-separation, except that now there's no D for directed; it's just separation. And it's actually a much simpler notion, because there aren't multiple kinds of flows of influence. You only have one type of edge, the undirected edge, so it's very simple. And so we have that X and Y are separated in H given Z if there's no active trail in H between X and Y given Z, and the notion of an active trail is just that no node along the trail is observed.

So, let's look at some examples of separation properties. For example, what does it take to separate A from E? Well, we can separate A from E in several different ways. We have that A and E are separated given, for example, B and D, because B and D block both trails. But also given just D, and also given B and C, because B and C, again, block both trails between A and E. So, here are some examples of separation properties.

Now, we can go ahead and prove an almost identical theorem to the one that we proved in the context of Bayesian networks. It tells us that if we have a separation property, that X and Y are separated in H given Z, and we have a distribution P that factorizes over H, then P satisfies the independence statement: X is independent of Y given Z. And so, just as in Bayesian networks, we can go ahead and define the independencies induced by the graph H as the ones that are defined by the separation property. And just as in the context of Bayesian networks, we can go ahead and give this a name.
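Because separation has no notion of direction or v-structures, it reduces to plain graph reachability: remove the observed nodes and ask whether X can still reach Y. Here is a minimal sketch of that check. The graph below is an assumed reconstruction of the lecture's example (the slide itself isn't reproduced here): two trails A-B-D-E and A-C-D-E between A and E, which is consistent with all three separating sets mentioned above.

```python
from collections import deque

def separated(edges, x, y, z):
    """Test whether x and y are separated given observed set z in an
    undirected graph: delete the nodes in z, then test reachability."""
    z = set(z)
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    if x in z or y in z:
        return True
    seen, frontier = {x}, deque([x])
    while frontier:
        node = frontier.popleft()
        for nbr in adj.get(node, ()):
            if nbr in z or nbr in seen:
                continue
            if nbr == y:
                return False  # found an active trail from x to y
            seen.add(nbr)
            frontier.append(nbr)
    return True

# Hypothetical graph with two A-E trails: A-B-D-E and A-C-D-E.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
print(separated(edges, "A", "E", {"B", "D"}))  # True: both trails blocked
print(separated(edges, "A", "E", {"D"}))       # True: D lies on both trails
print(separated(edges, "A", "E", {"B", "C"}))  # True: each trail hits B or C
print(separated(edges, "A", "E", {"B"}))       # False: A-C-D-E stays active
```

On the real slide's graph the same function applies unchanged; only the edge list would differ.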
We can define the notion of an I-map, and say that H is an I-map, or independency map, of P if P satisfies all the independencies that we can read off the graph structure of H. The theorem that we just proved, restated, says that if P factorizes over H, then H is an I-map for P, because we have shown that if P factorizes over H, then it satisfies all of the independencies that one can read off the graph H.

Now, in the context of Bayesian networks, we also had the converse theorem holding, and the converse also sort of holds in the context of Markov networks. The converse says that if P satisfies the independence statements associated with the graph, then it factorizes over the graph. So, stated otherwise, if H is an I-map for P, that is, P satisfies I(H), then P factorizes over H. The difference here is that it doesn't always hold. It only holds for a positive distribution P, which means a distribution whose probability is strictly greater than zero for all assignments x. That is, if you have a distribution that involves deterministic relationships, this property no longer holds. So, you almost have the converse that we had in the context of Bayesian networks, but it requires one additional and important assumption.

So, once again, we have two almost equivalent views of graph structure: factorization, in which H allows P to be represented, and, again, the notion of an I-map, which is that I can read from H independencies that hold in P. So, once again, if I tell you that a distribution P factorizes over a graph, we can read from the graph a set of independencies that must hold in P.
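The forward direction can be checked numerically on a toy case. The sketch below uses made-up factor values (the numbers are arbitrary, not from the lecture) over a chain A - B - C, where B separates A from C, and verifies that the resulting factorized distribution really satisfies A independent of C given B:

```python
import itertools

# Hypothetical factors over a chain A - B - C (all variables binary).
phi1 = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 2.0, (1, 1): 5.0}  # phi1(a, b)
phi2 = {(0, 0): 1.0, (0, 1): 4.0, (1, 0): 2.0, (1, 1): 1.0}  # phi2(b, c)

# Unnormalized measure and partition function, then the normalized joint.
raw = {(a, b, c): phi1[(a, b)] * phi2[(b, c)]
       for a, b, c in itertools.product([0, 1], repeat=3)}
Z = sum(raw.values())
P = {k: v / Z for k, v in raw.items()}

def marg(P, keep):
    """Marginalize the joint P(a, b, c) onto the given index positions."""
    out = {}
    for assignment, p in P.items():
        key = tuple(assignment[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

# Check A independent of C given B:
# P(a, c | b) should equal P(a | b) * P(c | b) for every assignment.
Pb, Pab, Pbc = marg(P, [1]), marg(P, [0, 1]), marg(P, [1, 2])
ok = all(
    abs(P[(a, b, c)] / Pb[(b,)]
        - (Pab[(a, b)] / Pb[(b,)]) * (Pbc[(b, c)] / Pb[(b,)])) < 1e-12
    for a, b, c in itertools.product([0, 1], repeat=3)
)
print(ok)  # True: the factorized P satisfies the separation independence
```

The check passes for any choice of positive factor values, which is exactly the theorem: factorization over H implies the independencies read off H. The converse direction, going from the independencies back to a factorization, is where the positivity assumption becomes essential.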