Now with the z-distribution, we absolutely have to know the population standard deviation, because we need to know how many standard errors away from the mean a value is, and calculating the standard error requires the population standard deviation. In most research, though, we don't know the population standard deviation; we only know the standard deviation of the sample. So we can't use the normal z-distribution, and we swap over to the t-distribution. Now, if you were to write its equation out, it's quite a bit more involved than the z-distribution. Fortunately, we have computer software that can do all of that for us. But it does introduce a new and interesting topic called degrees of freedom. We haven't heard of that before. Working out the degrees of freedom is actually quite simple: it's just the total number of participants minus the number of groups. So if you were to do a trial that compared the means of two groups, with 30 participants in each group, 30 plus 30 is 60, minus the 2 groups is 58. So that would be 58 degrees of freedom, and that gets built into the equation. It's actually really nice if you look at the different graphs here. Every one of those graphs comes from a t-distribution, but each has different degrees of freedom. So the software uses that knowledge, how many groups you have in your analysis and how many participants are in that analysis, to work with a different graph. And you can see the difference: some have lower peaks and flatter, heavier tails. What we're trying to achieve here, to look really closely like the z-distribution, the normal distribution, is a very high number of degrees of freedom: many participants in the study, large sample sizes. So we begin to introduce the t-test.
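The degrees-of-freedom rule above (total participants minus number of groups) can be sketched as a tiny helper function; the function name is my own for illustration, not a standard library call:

```python
# Degrees of freedom for comparing group means:
# total number of participants minus the number of groups.
def degrees_of_freedom(group_sizes):
    """group_sizes: a list with one sample size per group."""
    return sum(group_sizes) - len(group_sizes)

# The example from the text: two groups of 30 participants each.
df = degrees_of_freedom([30, 30])
print(df)  # 30 + 30 = 60, minus 2 groups = 58
```

The same rule extends to more groups: three groups of 10 participants each would give 30 minus 3, or 27 degrees of freedom.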
We're going to look at how t-tests are constructed, but it really comes from the same principle as the z-distribution: we want to convert to standard errors away from the mean. And it's going to depend on how big the sample sizes are, the degrees of freedom that we have. But we still want to know how many standard errors away from the mean we are. Now, in our z-distribution example we had all of the area on one side. Most of the time, though, we're going to split it across both sides, and we're going to have a deep discussion about this very important topic. So a certain number of standard errors away from the mean would leave 2.5% on the one side and 2.5% on the other side, combining to 5% of the area under the curve. And if we do it this way, we also duplicate our standard errors away from the mean: if we've found 1.74 on one side, we'll also have negative 1.74 on the other side, and we combine those areas under the curve to give us a p value.
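The two-sided idea above can be sketched numerically. This is a minimal illustration using the standard normal (z) distribution from Python's standard library, which is the large-degrees-of-freedom limit of the t-distribution; a real t-test on small samples would use the t-distribution itself (e.g. via `scipy.stats.t`, which is not in the standard library). The observed statistic of 1.74 is the figure from the text:

```python
from statistics import NormalDist

std_normal = NormalDist()  # z-distribution: the t-distribution's large-df limit

# How many standard errors out leaves 2.5% in each tail (5% combined)?
z_crit = std_normal.inv_cdf(0.975)
print(round(z_crit, 2))  # ≈ 1.96

# Two-sided p value for an observed statistic of 1.74:
# take the area beyond +1.74, mirror it beyond -1.74, and combine.
t_obs = 1.74
upper_tail = 1 - std_normal.cdf(t_obs)
p_value = 2 * upper_tail
print(round(p_value, 4))
```

Doubling the one-tail area is exactly the "duplicate on the other side" step: by symmetry, the area below -1.74 equals the area above +1.74.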