Learn fundamental concepts in data analysis and statistical inference, focusing on one and two independent samples.


From the course by Johns Hopkins University

Mathematical Biostatistics Boot Camp 2

34 ratings


From the lesson

Two Binomials

In this module we'll be covering some methods for looking at two binomials. This includes the odds ratio, relative risk, and risk difference. We'll be discussing mostly confidence intervals in this module, and will develop the delta method, the tool used to create these confidence intervals. After you've watched the videos and tried the homework, take a crack at the quiz!

- Brian Caffo, PhD, Professor, Biostatistics

Bloomberg School of Public Health

Okay. So, for the relative risk, our pA hat worked out to be 11 over 20, which is 0.55. pB hat worked out to be 5 over 20, which is 0.25, so the relative risk is 0.55 over 0.25. I always think it's a good habit to write that we're comparing A over B in the relative risk, just to remind ourselves what order we divide in, and that works out to be 2.2. That's quite a large difference, quite a large indication of a difference, but is it actually statistically significant? Is it something that would be of interest, in the sense that it could be more than just a chance association?

Okay. So, let's calculate the standard error of the log relative risk. Here, I plug into the formula and get 0.44. The interval for the log relative risk is then log 2.2, the log of our relative risk, plus or minus 1.96, the standard normal 97.5th quantile, times 0.44. That gives us -0.07 to 1.65.
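As a sketch, the delta-method interval just described can be reproduced numerically. This assumes the lecture's counts (11 of 20 on drug A with side effects, 5 of 20 on drug B) and rounds the standard error to two places, matching the slide's arithmetic:

```python
import math

# Side-effect counts from the lecture: 11 of 20 on drug A, 5 of 20 on drug B.
x_a, n_a = 11, 20
x_b, n_b = 5, 20

p_a = x_a / n_a                # 0.55
p_b = x_b / n_b                # 0.25
rr = p_a / p_b                 # relative risk, A over B: 2.2

# Delta-method standard error of the log relative risk:
# sqrt((1 - p_a)/(n_a * p_a) + (1 - p_b)/(n_b * p_b)), rounded as on the slide.
se = round(math.sqrt((1 - p_a) / (n_a * p_a) + (1 - p_b) / (n_b * p_b)), 2)

z = 1.96                       # standard normal 97.5th quantile
lo, hi = math.log(rr) - z * se, math.log(rr) + z * se
print(round(lo, 2), round(hi, 2))                        # -0.07 1.65
print(round(math.exp(lo), 2), round(math.exp(hi), 2))    # 0.93 5.21
```

Exponentiating the endpoints of the log-scale interval gives the interval for the relative risk itself.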

On the log scale, we're interested in whether or not this interval contains zero. If we exponentiate back, the interval for the relative risk is 0.93 to 5.21, which again shows an indication that drug A has a greater propensity for side effects than drug B, but isn't exactly significant, because this interval contains one and, on the log scale, the interval contains zero. Of course, because log is a monotonic function, if the interval contains 0 on the log scale it will contain 1 on the natural scale, and vice versa. So whether you check for 0 on the log scale or 1 on the unlogged scale will always yield the identical answer.

Okay, let's go over the odds ratio. The odds ratio for A divided by B: well, let's just do the cross-product formula, 11 times 15 divided by 9 times 5. That gives us 3.67. The standard error is then the square root of the sum of one over each of the cell counts. That works out to be 0.68, so the interval for the log odds ratio is log 3.67 plus or minus 1.96 times 0.68. That works out to be -0.04 to 2.64, and the interval for the odds ratio is 0.96 to 14.01. So this is on the natural scale. Okay.
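The odds ratio interval can be checked the same way. This sketch keeps the unrounded standard error, so the upper endpoint comes out near 14.03 rather than the slide's 14.01; the small difference is just intermediate rounding:

```python
import math

# 2x2 table from the lecture: rows are drug A and drug B,
# columns are side effect yes / no.
a, b = 11, 9    # drug A: 11 with side effects, 9 without
c, d = 5, 15    # drug B: 5 with side effects, 15 without

odds_ratio = (a * d) / (b * c)          # cross-product: 11*15 / (9*5) = 3.67
se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # sqrt of summed reciprocal cell counts

z = 1.96
lo, hi = math.log(odds_ratio) - z * se, math.log(odds_ratio) + z * se
print(round(se, 2))                                      # 0.68
print(round(lo, 2), round(hi, 2))                        # -0.04 2.64
print(round(math.exp(lo), 2), round(math.exp(hi), 2))    # 0.96 14.03
```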

And then, just to finish off our thinking about this problem, consider the risk difference as well. The risk difference would be pA hat minus pB hat; here, again, I think it's a good idea to note that you're subtracting A minus B. That works out to be 0.30. The standard error of the risk difference is given in this formula here; it works out to be 0.15, and the interval is again given here.
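A sketch of the risk difference interval, including the add-one-to-every-cell adjustment the lecture mentions from before (the interval endpoints here are my own computation under the lecture's counts, not values read off the slide):

```python
import math

x_a, n_a = 11, 20   # drug A
x_b, n_b = 5, 20    # drug B

def rd_interval(x1, n1, x2, n2, z=1.96):
    """Wald interval for the risk difference p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, se, rd - z * se, rd + z * se

rd, se, lo, hi = rd_interval(x_a, n_a, x_b, n_b)
print(round(rd, 2), round(se, 2))    # 0.3 0.15
print(round(lo, 2), round(hi, 2))    # 0.01 0.59

# The "add one to every cell" fix: one success and one failure added to
# each group, i.e. use (x + 1) / (n + 2) in place of x / n.
rd2, se2, lo2, hi2 = rd_interval(x_a + 1, n_a + 2, x_b + 1, n_b + 2)
print(round(lo2, 2), round(hi2, 2))  # -0.01 0.55
```

Notice that with these counts the unadjusted interval just barely excludes zero while the adjusted one just barely includes it, which is exactly the borderline situation where the adjustment matters.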

There are issues with the risk difference formula as well, and we covered some of that before, showing that you can maybe improve on its performance a little bit by adding one to every cell, for example. That was covered in the last lecture, I believe. And then the final thing I wanted to show was two plots, just to finish some thoughts from the last lecture, where we talked about Bayesian analysis.

So, if you recall, if you look back to the last lecture, what we did was postulate independent beta priors for p1 and p2. We found that if we did that, then we got independent beta posteriors for p1 and p2, and we saw that an efficient way to explore the posterior was to do a simulation from it. That would allow us to calculate things like the posterior mean, the posterior variance, and so on. So if you go back to that lecture, you'll hopefully be reminded of exactly what we did.

When you conduct these posterior simulations, you get a pA and a pB (a p1 and a p2); you get lots and lots of pairs of those things that represent draws from the posterior distribution. And it's convenient to do that because it's just a convenient numerical way to investigate the posterior. If you take the arithmetic mean of those posterior draws, you get the posterior mean for pA and the posterior mean for pB. You could then get the posterior mean for the risk difference, or the posterior mean for the odds ratio, by taking every pair pA and pB, calculating the odds ratio, and so on.
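A minimal sketch of that simulation, assuming uniform Beta(1, 1) priors (the lecture's exact prior parameters aren't restated here), so that each posterior is Beta(x + 1, n - x + 1):

```python
import random

random.seed(0)

x_a, n_a = 11, 20
x_b, n_b = 5, 20
n_sims = 100_000

# Independent beta posteriors under assumed Beta(1, 1) priors.
p_a = [random.betavariate(x_a + 1, n_a - x_a + 1) for _ in range(n_sims)]
p_b = [random.betavariate(x_b + 1, n_b - x_b + 1) for _ in range(n_sims)]

# Transform each (p_a, p_b) pair to get posterior draws of the
# relative risk, odds ratio, and risk difference.
rr = [a / b for a, b in zip(p_a, p_b)]
odds = [(a / (1 - a)) / (b / (1 - b)) for a, b in zip(p_a, p_b)]
rd = [a - b for a, b in zip(p_a, p_b)]

# Posterior means are just arithmetic means of the draws.
print(round(sum(p_a) / n_sims, 2))   # ~0.55 (12/22 in the limit)
print(round(sum(rd) / n_sims, 2))    # ~0.27 (6/22 in the limit)
```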

Well, here, what I did is, for every pA, pB pair simulated from the posterior, I calculated the quantity of interest and then simply plotted a histogram. This is just an approximation of the posterior, where the accuracy of the approximation only depends on how many Monte Carlo samples I elected to have the computer generate. So if I let the computer run for a really long time, I get a near-perfect representation of the posterior.

So this, in this case, is the posterior for the risk ratio. Again, I took pA, divided it by pB, and for all of those pairs I plotted a density estimate. And there's the density; this gives you a lot of information about where the evidence concerning the risk ratio, the relative risk, lies. Here I drew the blue lines for the 95% credible interval, where there's 2.5% of the mass below the lower blue line and 2.5% above the upper blue line, and I put a reference line at 1 in this case.
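An equal-tail 95% credible interval like the blue lines can be read straight off the sorted posterior draws. Again this assumes Beta(1, 1) priors, so the exact endpoints are illustrative rather than the lecture's:

```python
import random

random.seed(1)

x_a, n_a = 11, 20
x_b, n_b = 5, 20
n_sims = 100_000

# Posterior draws of the risk ratio under assumed Beta(1, 1) priors.
rr = sorted(
    random.betavariate(x_a + 1, n_a - x_a + 1)
    / random.betavariate(x_b + 1, n_b - x_b + 1)
    for _ in range(n_sims)
)

# The 2.5th and 97.5th percentiles bound the equal-tail 95% credible interval.
lo = rr[int(0.025 * n_sims)]
hi = rr[int(0.975 * n_sims)]
print(lo < 1 < hi)   # True: the interval just barely contains one
```

This mirrors the plot: roughly 2.5% of the posterior mass sits below the lower blue line and 2.5% above the upper one, with the lower endpoint landing just below the reference line at 1.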

And as we saw when we calculated the confidence interval, the lower endpoint is just below one, so that if you were interested in something like significance, you wouldn't get significance. But I think the posterior displays a lot of information and gives you a lot more than just a confidence interval or the result of a hypothesis test, or, even worse, asterisks on how significant the P value is, which some software gives.

Okay. And then on the next slide, what I'm showing is the same calculation done for the odds ratio. So this is just to give you a flavor of what the desired output from a simple Bayesian analysis would be. The posterior is the quantity that you would use to investigate the relative proportions, either through the odds ratio or through the relative risk.

Â Coursera provides universal access to the worldâ€™s best education, partnering with top universities and organizations to offer courses online.