This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri


From the course by Johns Hopkins University

Principles of fMRI 1



From the lesson

Week 4


- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

Hi. In this module,

we're going to continue talking about the multiple comparison problem in fMRI.

In particular, we're going to focus on methods that correct for

the family-wise error rate.

So the family-wise error rate is the probability of making one or

more Type I errors in a family of tests, under the null hypothesis.

And a Type I error is when we reject a null hypothesis that is actually true.

So there are a number of family-wise error rate controlling methods

that are used in neuroimaging.

They include the classic Bonferroni correction, Random Field Theory,

and permutation tests.

In this module, we'll talk a little bit about Bonferroni correction and

Random Field Theory.

Now let's let H0(i) be the hypothesis that there's no activation in voxel i,

where i takes values from 1 to m, and m is the total number of voxels.

So basically H0(i) is just the voxel-wise null hypothesis of no activation.

And now let's let Ti be the test statistic at voxel i.

So we conduct a test of the null hypothesis at each voxel, so

we have Ti, and those are the values that make up our statistical map.

Now the family-wise null hypothesis, which I'm just going to call H0 here,

states that there's no activation in any of the m voxels.

So basically we're just assuming that there's no activation anywhere

across the brain.

If there's no activation anywhere, then H0 is true.

So, for H0 to be true, there has to be activation in none of the voxels.

So, mathematically, we can write this as the intersection of the H0(i)'s.

So, H0(i) has to be true for each i, in each voxel.

So, if we reject any single-voxel null hypothesis,

we're going to reject the family-wise null hypothesis.

Basically, if any of the individual null hypotheses is rejected,

then the family-wise null hypothesis is also rejected.

So a false positive at any voxel will give a family-wise error.
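As a small numerical aside (not part of the lecture): if the m tests were independent and each were run at level alpha, the family-wise error rate would be 1 - (1 - alpha)^m, which races toward 1 as m grows. A minimal sketch:

```python
# Illustrative sketch: family-wise error rate for m INDEPENDENT tests,
# each run at per-test level alpha.
# FWER = P(at least one false positive) = 1 - (1 - alpha)^m.

def fwer_independent(alpha, m):
    """FWER for m independent tests, each at per-test level alpha."""
    return 1.0 - (1.0 - alpha) ** m

for m in [1, 10, 100, 10_000]:
    # With 10,000 voxels, a family-wise error is essentially guaranteed.
    print(f"m = {m:6d}: FWER = {fwer_independent(0.05, m):.4f}")
```

This is why thresholding each voxel at 0.05 is hopeless once m is in the thousands.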

So let's assume that the family-wise null hypothesis is true.

Then we want to control the probability of falsely rejecting H0 at

some level alpha.

So basically what we want to do here is we want to control

the probability that any of the test statistics at

any of the voxels over the entire brain is above some value u.

Because if it's above u, we're going to reject the null hypothesis at that voxel,

and under the family-wise null we don't want to do that,

because then we're going to get a false positive.

So basically we want to make the probability that any Ti is bigger than u

controlled at some value alpha.

Say 0.05.

So what we need to do is find the value of u which controls

the family-wise error rate at this particular level.

So the Bonferroni correction is the classic way of doing that.

And so in the Bonferroni correction we choose a threshold u, so

that the probability that each individual test statistic is above u

is less than alpha over m, where m is the total number of voxels.

So if this is true,

then this controls the family-wise error rate as well, because the family-wise

error rate is the probability that any of the test statistics is above u.

According to Boole's inequality, that probability is at most the sum of

the individual probabilities that each Ti is above u, which by our choice of

threshold is at most m times alpha over m, which is alpha.

So this simple math shows that the Bonferroni correction controls

the family-wise error rate at alpha.

So for example, if we have ten tests and

we want to control the family-wise error rate at 0.05, then we should choose u

so that each test is controlled at 0.005.

And if we have 100,000 tests, we need to divide by 100,000, so

the threshold becomes increasingly stringent as we do more and more tests.
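This arithmetic can be sketched in a few lines of Python (an illustration using only the standard library; the test counts are the ones just mentioned, and the statistics are assumed to be standard normal):

```python
# Bonferroni-corrected one-sided thresholds for normally distributed
# test statistics: choose u so each test is run at level alpha / m.
from statistics import NormalDist

def bonferroni_threshold(alpha, m):
    """One-sided z threshold corresponding to per-test level alpha / m."""
    return NormalDist().inv_cdf(1.0 - alpha / m)

print(bonferroni_threshold(0.05, 1))        # uncorrected: about 1.645
print(bonferroni_threshold(0.05, 10))       # 10 tests: per-test level 0.005
print(bonferroni_threshold(0.05, 100_000))  # 100,000 tests: far more stringent
```

The threshold grows with m, but only slowly (logarithmically), which is why Bonferroni remains usable even for very large m.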

So here's an example: let's say that we generate iid normal

(0,1) data over a 100x100 grid.

So in this case we have 10,000 pseudo voxels here,

each that follow a standard normal distribution.

And so here's a picture of that.

Now if we threshold this at u = 1.645,

this would be the 95th percentile of the standard normal distribution.

In this case we would expect about 500 false positives, because this

is 0.05 times the total number of voxels, which is 10,000.

So we're going to get the salt-and-pepper pattern where white indicates that

something was above the threshold, and black indicates that it was below.

In this case we're not really controlling very well for false positives, so

we need a more stringent approach.

So, we have approximately 500 false positives here.

So, to control the family-wise error rate at 0.05 the Bonferroni

correction would have to be at 0.05/10,000.

So we have to control for the fact that we're doing 10,000 tests.

And if we do that,

the threshold, instead of being 1.645, is now equal to 4.42.

So a much more stringent amount of evidence is needed for

us to reject a null hypothesis.

And if we do this we get no false positives at all.

So indeed, if we were to repeat this sort of simulation 100 times, on average only

5 out of every 100 generated data sets would have one or more values above u.

And so basically, the probability of us getting any false positives among

the 10,000 tests is only 5%, so if we repeated this whole exercise many times,

only in one data set out of every 20 would we get one or more false positives.

So this is a very, very stringent control over the false positive rate.
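The simulation described above is easy to reproduce (a sketch assuming NumPy; the exact counts depend on the random seed):

```python
# Reproduce the lecture's example: iid N(0,1) "pseudo voxels" on a
# 100 x 100 grid, thresholded with and without Bonferroni correction.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.standard_normal((100, 100))   # 10,000 null voxels

uncorrected = 1.645                      # 95th percentile of N(0,1)
bonferroni = 4.42                        # approx. threshold for 0.05 / 10,000

n_uncorrected = int((grid > uncorrected).sum())  # expect roughly 500
n_bonferroni = int((grid > bonferroni).sum())    # almost always 0

print(n_uncorrected, n_bonferroni)
```

Rerunning this with many seeds shows the salt-and-pepper pattern at 1.645 every time, and an exceedance at 4.42 in only about 5% of data sets.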

And so of course this is really great if you're worried

about getting false positives.

However, if you have true activations in this grid,

it's going to be very hard to detect them.

So there's sort of a tradeoff here between the ability to detect activations and

control of the family-wise error rate.

And the Bonferroni correction is a very stringent one.

So, we're going to wind up losing a lot of activations if we use

the Bonferroni correction.

So, the Bonferroni correction, as I just mentioned, is very conservative,

it results in very strict significance levels.

So, this leads to a decrease in the power of the test,

which is the probability of correctly rejecting a false null hypothesis,

and greatly increases the chance of getting false negatives.

And so in general, it's also not optimal for correlated data, and

most fMRI data has significant spatial correlation.

So the number of independent tests is actually much smaller than the number

of voxels.

So we may be able to choose a more appropriate threshold by using information

about the spatial correlation present in the data.

One way of doing this is to use random field theory.

Random field theory allows one to incorporate the spatial

correlation into the calculation of the appropriate threshold.

And it's based on approximating the distribution of the maximum statistic

over the entire image.

So what does this mean?

Well what's the link between the family-wise error rate and

the maximum statistic?

Well the family-wise error rate is the probability of getting

a family-wise error.

So this is the probability that any of these T values,

T statistics, exceeds u under the null hypothesis.

So now I want to claim that this is equal to the probability that

the maximum t statistic is above u, because if the maximum is above u,

then there is a t statistic that is above the threshold.

If the maximum statistic is below u,

then by default there are no test statistics that are above u.

Because the maximum is the biggest value.

So if we're interested in the probability of any t statistic exceeding u,

it's enough for

us to look at the probability that the max t statistic exceeds u under the null.

So if we want to control the family-wise error rate, we simply need to find

the distribution of the max t statistic and threshold using that.

So we choose the threshold u

such that the max only exceeds it alpha percent of the time.
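One way to see this idea in practice (an illustrative sketch, assuming NumPy, and using independent voxels for simplicity) is to simulate the null distribution of the maximum statistic directly and take its 95th percentile as the threshold:

```python
# Monte Carlo sketch of the max-statistic idea: simulate many null
# images, record the maximum statistic of each, and choose u as the
# 95th percentile of those maxima, so that P(max > u) is about 0.05.
# Independent voxels are used here for simplicity; real fMRI data are
# spatially correlated, which is exactly what random field theory exploits.
import numpy as np

rng = np.random.default_rng(1)
n_sims, n_voxels = 1000, 10_000

# Maximum statistic of each simulated null image.
maxima = np.array([rng.standard_normal(n_voxels).max() for _ in range(n_sims)])

u = float(np.quantile(maxima, 0.95))
print(u)  # for independent voxels, close to the Bonferroni threshold
```

With spatial smoothing the simulated maxima would be smaller, and the resulting threshold less stringent, which is the gain random field theory approximates analytically.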

So how do we do that?

Well random field theory is one way of approximating

the tail of the max statistic.

And so a random field is a set of random variables defined at every point

in some D-dimensional space.

In our case it's usually a three-dimensional space of the brain.

And so we're mostly working with what is called Gaussian random fields.

And so a Gaussian random field has a Gaussian distribution, or

a normal distribution, at every point and every collection of points.

And so a Gaussian random field is like any normal distribution defined by its mean

and covariance.

In this case, it's the mean function and the covariance function.

What we do in neuroimaging is we consider a statistical image,

the one with all the t statistics,

to be a lattice representation of the continuous random field.

And using random field methods we're able to approximate the upper tail

of the maximum distribution,

which is the part we need in order to find the appropriate threshold.

And also simultaneously we can account for

the spatial dependence inherent in the data.

And so that's a useful thing in the neuroimaging context.

Let's consider that we have some random field z(s) defined on some space, and

in our example let's just assume that it's a two dimensional space.

And so here we have the random field, and then on the left we see sort

of a heat map of the random field, and on the right, we see a mesh plot of it.

So basically, at every spot in the two-dimensional lattice,

we have some statistic value drawn from the random field.

So when we work with random field theory,

we have to define something called the Euler Characteristic.

The Euler Characteristic is a property of a random field image

after it's been thresholded.

So basically what the Euler Characteristic does, in layman's terms,

is it counts the number of blobs.

The number of coherent areas minus the number of holes.

And at high thresholds it just counts the number of blobs.

So what does this mean?

Number of blobs, the number of holes?

Well let's look at the random field that we have here to the left and

let's say that we threshold it at the value u equal to .5.

That means that any value that's above .5 is set equal to one and

anything below .5 is set equal to zero.

Then we get the map on the right top here,

which is just a lot of white within the black there.

So here the Euler characteristic is going to be 27, because there are

28 coherent islands of activation here, which I'm calling blobs.

And there's one hole, which you can see in the bottom.

There's a slight hole in one of the blobs.

And so it's going to be 28 blobs minus 1 hole,

so the Euler characteristic is 27 in that case.

If we go to the middle one here, we're thresholding at 2.75.

In this case, we only get two blobs and no holes, so the Euler characteristic is two.

Finally, if we go to u = 3.5, we get a single blob, and

the Euler characteristic is 1.

So the Euler characteristic is a property of this image after we've thresholded it.
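To make the blob counting concrete, here's a small sketch (not from the lecture): a pure-Python connected-components count on a binary grid, which at high thresholds approximates the Euler characteristic, since it counts blobs but ignores holes:

```python
# Count "blobs" (4-connected components of 1s) in a thresholded binary
# grid. At high thresholds, where holes are rare, the blob count
# approximates the Euler characteristic described in the lecture.
def count_blobs(grid):
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if grid[r0][c0] == 1 and not seen[r0][c0]:
                blobs += 1                    # found a new blob
                seen[r0][c0] = True
                stack = [(r0, c0)]            # flood-fill the whole blob
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < rows and 0 <= nc < cols
                                and grid[nr][nc] == 1 and not seen[nr][nc]):
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return blobs

example = [
    [1, 1, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
print(count_blobs(example))  # three separate blobs
```

Raising the threshold shrinks and merges away blobs, which is exactly the behavior seen in the u = 0.5, 2.75, and 3.5 examples above.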

So how do we use the Euler characteristic to control the family-wise error rate?

They seem to be far removed from each other.

Well it turns out that we've already determined that

there's a link between the family-wise error rate and the max T statistic.

So, the family-wise error rate is equal to the probability

that the max T statistic is above u.

I claim that if the max statistic is above u, then we're going to have one or

more blobs.

Because, if we're thresholding at u, we're going to have one or

more areas that are white.

That are going to be deemed significant.

And so, in this case, basically if the max statistic's above u,

we're going to have one or more blobs.

And let's just assume for sake of argument that no holes exist.

In this case, we're actually interested in the probability that

the Euler Characteristic is bigger than or equal to one.

That means that we have one or more blobs.

If we assume that there's never more than one blob, then this probability is

approximately equal to the expected Euler Characteristic.

So now we have that the link between the family-wise error rate and

the Euler Characteristic.

So the family-wise error rate is approximately just the expected Euler Characteristic.

Now, this seems to have complicated the problem a lot,

because how would we know what the expected Euler Characteristic is?

Well, the good news is that closed-form results actually exist for

the expected Euler Characteristic for Z, T, F, and chi-squared continuous random fields.

So we can kind of stand on the shoulders of the people who have already derived

these results and use them to control the family-wise error rate.

So for three dimensional Gaussian Random Fields this is the result for

the expected Euler Characteristic.

It's a somewhat complicated formula, where R is V divided by the

FWHM in each of the x, y, and z directions.

So V is the volume of the search region, basically the number of voxels that

we're searching over, and the full width at half maximum represents the smoothness

of the image, estimated from the data in each direction.

So R, in the nomenclature, is sometimes called a resolution element, or resel.

So basically, using this result, we can find that for large u,

the family-wise error rate is roughly equal to this.

So we can choose a threshold u to control the family-wise error rate.
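As a sketch of what that threshold search might look like (an illustration, not the lecture's code): the standard closed-form result for a 3D Gaussian field is E[EC] = R (4 log 2)^{3/2} (2 pi)^{-2} (u^2 - 1) exp(-u^2 / 2), with R the number of resels, and we can solve E[EC] = alpha for u numerically. The search-volume and smoothness numbers below are hypothetical, for illustration only:

```python
# Sketch: expected Euler characteristic of a 3D Gaussian random field
# (standard RFT result), and a bisection search for the threshold u at
# which the expected EC equals the desired family-wise error rate.
import math

def expected_ec(u, resels):
    """Expected EC of a 3D Gaussian field thresholded at u."""
    return (resels * (4 * math.log(2)) ** 1.5 / (2 * math.pi) ** 2
            * (u ** 2 - 1) * math.exp(-u ** 2 / 2))

def rft_threshold(resels, alpha=0.05, lo=2.0, hi=10.0):
    """Bisect for u with expected_ec(u) == alpha; valid on u > sqrt(3),
    where expected_ec is decreasing in u."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_ec(mid, resels) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical example: a 64 x 64 x 30 voxel search volume smoothed to
# an FWHM of 3 voxels in each direction.
resels = (64 * 64 * 30) / (3 * 3 * 3)
u = rft_threshold(resels)
print(u)
```

Increasing the FWHM reduces the resel count and lowers the resulting threshold, matching the smoothness property discussed next.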

And so what are some properties of this equation?

Well, as u increases, as the threshold increases,

you can see that the family-wise error rate will decrease.

So, that's a good thing, because if we make the threshold more stringent,

the family-wise error rate should go down.

So this is a useful property.

Similarly, as V increases, so the number of voxels that we're controlling for

increases, the family-wise error rate will also increase, and this is again

a useful property, because the more tests we're comparing simultaneously,

the more likely we are to make a family-wise error.

Finally as the smoothness increases, then the family-wise error rate will decrease.

And this is again a useful property, because if we have a very smooth image,

we would expect adjacent voxels to behave similarly, and

we'll have fewer independent tests.

So even though this formula looks kind of ugly and hard to grasp, it has

a lot of properties that are useful for controlling the family-wise error rate.

So what are some assumptions that need to hold in order to use random

field theory?

Well, we have to assume that the entire image is either a multivariate Gaussian,

or derived from some multivariate Gaussian image.

So that includes chi-squared, T, and F distributions, so those are the kinds of

distributions that we're often interested in and working with.

The statistical image must also be sufficiently smooth to

approximate a continuous random field.

And so the FWHM has to be at least twice the voxel size, and

typically we want

the smoothness to be at least 3 to 4 voxel sizes for this to work really well.

And also, the amount of smoothness is assumed to be known, and

the estimate can be biased when the images are not sufficiently smooth.

Also as we saw when deriving these results,

there are several layers of approximations that have to be made.

So that's the end of this module where we talked about two different methods for

controlling the family-wise error rate.

The first is the classic Bonferroni correction,

which winds up being a little bit conservative because it doesn't take

spatial relationships into consideration.

And then we also talked about random field theory, which is probably the most popular

way of controlling for family-wise error rate in neuro imaging.

And this accounts a little bit for the spatial smoothness.

In the next module we'll talk about another way of controlling for

multiple comparisons, which is the false discovery rate.

Okay, I'll see you then, bye.
