Now, if it's been a little while since you've done

a confidence interval or a hypothesis test,

let's go back and remember what this is all about.

In a standard hypothesis test we have a null hypothesis and an alternative hypothesis,

traditionally labeled H sub zero, and H sub one.

The null hypothesis will be no difference,

just that the mean response is going to be the same for both drugs.

The alternative, since we're doing a two-tailed test, will be that it's not the same.

Alpha is what researchers typically set before they conduct the test:

the probability of a type one error,

that is, the probability that we're going to reject a true null hypothesis.

It's fairly standard to set alpha equal to

one of the two values, point zero five or point zero one.

For the t value that we calculated here,

we're going to look at the average of

the differences, which is the same,

if you follow the language, as the difference of the averages.

So, we're looking at d bar here.

Essentially, we're just taking the average,

the mean value with the first group,

and subtracting off the average on the second.

It's a very intuitive thing to do.

We'll compare that to our null hypothesis value of zero.

Downstairs, in the denominator, we're going to look at variability.

So, we're looking at the variability of the averages here not of individuals.

We're going to take s sub d. So,

this is the sample standard deviation of the differences.

Now, be careful, if you take

the differences for these 10 individuals between the two drugs,

take their response on the first drug and subtract off the response in the second,

and do that for all 10,

and then take the standard deviation of that,

that's the standard deviation of the differences.

That's not going to be the same number generally speaking,

as if you take the standard deviation of the first data set,

and subtract off the standard deviation of the second.

We just have to be a little bit careful here.

But the standard test is to look at the standard error down here,

standard deviation divided by the square root of n,

to give us a measure of variability.

And that's how we calculate,

if you follow through the numbers,

a t value of negative four.

So as we just said,

d bar is the average of the differences,

or the difference of the averages, however you like,

and s sub d is the standard deviation of the differences from the sample.
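The lecture's actual data isn't shown in this excerpt, so here is a sketch with hypothetical responses for the 10 individuals; the lecture does this in R, and Python is standing in for that computation:

```python
import math

# Hypothetical responses of 10 individuals on two drugs (not the lecture's data).
drug1 = [4.2, 5.1, 3.8, 4.9, 5.5, 4.0, 4.7, 5.2, 4.4, 4.8]
drug2 = [5.0, 5.9, 4.6, 5.5, 6.1, 5.0, 5.3, 6.0, 5.1, 5.6]

# Differences: each person's response on the first drug minus the second.
d = [a - b for a, b in zip(drug1, drug2)]
n = len(d)

d_bar = sum(d) / n  # average of the differences
# Sample standard deviation of the differences (note: NOT the SD of
# drug1 minus the SD of drug2, as the lecture warns).
s_d = math.sqrt(sum((x - d_bar) ** 2 for x in d) / (n - 1))

# t statistic: d bar over its standard error, s_d / sqrt(n).
t_stat = d_bar / (s_d / math.sqrt(n))
print(round(d_bar, 2), round(t_stat, 2))
```

With real data, R's `t.test(drug1, drug2, paired = TRUE)` reports this same statistic.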

Now, R also gives us a p-value.

So what's the p-value?

It's the likelihood of seeing data at least this extreme under the null hypothesis.

And, since it was a two-tailed test,

what we'll do here is look at twice the tail area.

Now the t distribution is the one in play for us,

that's that letter t right there.

And p is just short for probability.

So, what we're trying to do is get some tail areas here.

So, we're going to take twice the tail area,

I'll look down at the left tail,

I'll pop my negative four in there

with nine degrees of freedom, and calculate a p-value.

If your p-value is small,

you'll reject your null hypothesis, and our p-value is really quite small.

So we rejected the null hypothesis.
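To follow that through numerically: the p-value is twice the left-tail area below negative four for a t distribution with nine degrees of freedom. R's `pt(-4, 9)` gives that tail area directly; as a self-contained sketch, the same area can be approximated by integrating the t density:

```python
import math

def t_pdf(x, df):
    """Density of the t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def left_tail(t_val, df, lo=-60.0, steps=100_000):
    """Approximate P(T <= t_val) by trapezoidal integration from lo up to t_val."""
    h = (t_val - lo) / steps
    total = 0.5 * (t_pdf(lo, df) + t_pdf(t_val, df))
    for i in range(1, steps):
        total += t_pdf(lo + i * h, df)
    return total * h

# Two-tailed p-value: twice the left-tail area below t = -4, df = 9.
p_value = 2 * left_tail(-4, 9)
print(p_value)  # small, well below alpha = 0.05
```

Since the p-value comes out well under 0.05, we reject the null hypothesis, just as the lecture concludes.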

In general, if you have a hypothesis test,

different books have different details here,

but they all rhyme;

they're all basically telling you the same thing.

You're going to state clearly what your variables are

so that everybody knows including you what you're talking about.

State your null and alternative hypotheses,

and then decide upon a level of significance.

Once you've got that basic framework down those organizing principles,

go ahead and look at your data,

compute a test statistic,

and you'll very often run across z's and t's, chi-squares and F's.

These are kind of the big four in an elementary statistics course.

You'll find the p value corresponding to your test statistic,

and then you'll form a conclusion,

you'll reject or not reject typically.

With confidence intervals,

there is a difference between the words confidence and probability.

Many people get very sticky on this and say that once an event has occurred,

you really can't talk about probability anymore.

Instead you must talk about confidence.

The basic idea is we're trying to give

a good indication of where we believe the actual mean would be;

here, it's going to be a mean difference.

The common form that you'll see for many confidence intervals is the estimate,

which was our d bar here, plus and minus

some sort of table value multiplied by an estimated standard error.

It's really not hard to demonstrate where this comes from.

In our particular case,

we'll look at d bar,

plus and minus the t value,

times our standard error.

We already saw that R will print this out for you.

If you like to follow along and do a hand calculation yourself,

we've got the numbers right here.

It's just a direct substitution.
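Since the lecture's numbers aren't shown in this excerpt, here is the substitution sketched with hypothetical summary values; the table value of 2.262 is the standard two-sided 95% t value with nine degrees of freedom:

```python
import math

# Hypothetical summary numbers (not the lecture's): mean of the differences,
# sample SD of the differences, and the number of paired observations.
d_bar = -0.75
s_d = 0.59
n = 10

t_star = 2.262  # two-sided 95% t table value, n - 1 = 9 degrees of freedom
se = s_d / math.sqrt(n)  # estimated standard error of d bar

# Estimate plus and minus the t value times the standard error.
lower = d_bar - t_star * se
upper = d_bar + t_star * se
print(round(lower, 3), round(upper, 3))
```

With real data, R's `t.test(..., paired = TRUE)` prints this same interval alongside the t statistic and p-value.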