This course covers the design, acquisition, and analysis of Functional Magnetic Resonance Imaging (fMRI) data. A book related to the class can be found here: https://leanpub.com/principlesoffmri

From the course by Johns Hopkins University

Principles of fMRI 1

From the lesson

Week 4

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University

- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

In this module we're going to talk about group analysis,

especially about fixed and random effects, which is a pervasive and

important issue in neuroimaging analysis.

So let's recap the multi-level model.

There are two levels in our typical model.

The first level deals with individual subjects and

it fits a model within one subject.

The second level deals with groups of subjects and

either constitutes a one-sample t-test across subjects or

an analysis across groups, like patients versus controls, for example.

So this is a schematic of a group analysis again

where we have a model on the time series within each subject.

That's the first level and those are nested within the group.

And, again,

all inferences here are performed in the mass-univariate setting.

So, we're still dealing with an analysis at one voxel.

Multi-level models have been developed for

analyzing hierarchically structured data like this.

And they allow different variance components to be introduced.

So, we'll talk about several different variance components soon.

But they essentially reflect variance within subjects and

variance between subjects, or individual differences.

And these provide a framework for conducting mixed effects analyses.

So, mixed effects models, hierarchical models, and

random effects models in neuroimaging all refer to the same concept.

They model multiple sources of variation, which are also called variance

components, and these models stand in contrast to what we

call fixed effects models in neuroimaging with only one variance component.

And we mean something very specific in neuroimaging when we say fixed effect

model or random effect model.

Really what we mean is we model subject or participant as a random effect or

as a fixed effect.

And I'll explain that more in the following slides.

So in the mixed effects analysis,

let's assume the signal strength varies across sessions and subjects.

There are two sources of variation.

One is measurement error.

That's all the stuff I can't account for with the experimental design,

could be head movement related, could be lots of things.

The second is random response magnitude.

So every subject, or

every subject in every session, has a random magnitude for their true response.

All we're saying here is that all the subjects are different from one another.

So those are our two basic sources of variation, or

our two basic variance components.

Always, with these models, the population mean is fixed.

So we're assuming that there is some fixed population parameter to estimate, for

activation, let's say, or for famous vs. non-famous face differences.

So, now let's look at a sample fMRI time series with one source of error.

This is an animation over replications of one subject's experiment

where the only thing that's varying is the fMRI noise.

So what we see here is a fixed population effect,

and that's the black line, that's this block on-off pattern, and then every time we

sample the actual fMRI data, we get the red line, the black line plus error.

So here the only source of variation is the measurement error itself.

The true response magnitude is fixed.

That's the on-off pattern in the black line that we're trying to estimate, and

we're estimating that with the error around that.

So in this case, the significance test would be based on the estimated response

relative to the measurement error variance,

that variation of the red line around the black line.

That's only within-subject noise.

Now let's look at the same thing but with two sources of error.

Now the green line is the true response for

an individual subject which is sampled around the black population mean line.

And now when we sample the fMRI data,

which is the red line, we're sampling with error around the green line.

So now there are two sources of variation.

One's the measurement error, the scan to scan variability in red,

sampled around the true individual differences which is the green line.

And the green line has true individual differences

that vary around the population mean.

So only by including both sources of variation in my error term

in a statistical model can I generalize to unobserved subjects.

And that's what it means to treat subject as a random effect.
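In symbols, here is a minimal two-level sketch of that idea (generic notation of my own, not the course's exact equations; the design-efficiency constant c_i is a stand-in for whatever the first-level design contributes):

```latex
% Level 1 (within subject): scan-to-scan measurement error
y_{it} = x_t \beta_i + \varepsilon_{it}, \qquad \varepsilon_{it} \sim N(0, \sigma_w^2)

% Level 2 (between subjects): true response magnitudes
% vary around the population mean
\beta_i = \beta + b_i, \qquad b_i \sim N(0, \sigma_b^2)

% A subject's estimated effect therefore carries both variance components:
\mathrm{Var}(\hat{\beta}_i) = \sigma_b^2 + \sigma_w^2 / c_i
```

Treating subject as fixed amounts to dropping the sigma_b^2 term from the error, which is exactly what prevents generalization to new subjects.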

So let's look more deeply at fixed effects and random effects.

So a fixed effect is always the same, from experiment to experiment, and

levels are not drawn from a random variable.

They're not assumed to be drawn from a random variable.

So one example is sex, male or female.

There are usually only two alternatives.

And another example might be drug type.

I might be interested in the effects of Prozac versus control,

not the effects of some new, unobserved, randomly selected drug.

So in that case, the fixed effects model is appropriate.

Let's look at random effects now.

Typical random effects are those

whose levels are assumed to be sampled at random from a population.

So the quintessential thing that should be modeled as a random effect is subject or

participant.

We observe some subjects, but

we assume that we selected subjects at random from the population.

Another example is Word in experiments with verbal materials.

So let's say you're studying the effect of positive and negative words.

And you only choose one positive word, which is "puppies", and

one negative word, which is "murder".

And you do a scan where you compare "puppies" and "murder".

Well they differ in positive versus negative, but

they also differ in many other features as well.

So you might first include a full population of words,

many kinds of positive and negative words.

And then you might want to model word as a random effect, so

that you can generalize to unobserved words as well,

drawing from the population of negative or positive words.

One of the key points is in a mixed effects model,

we choose whether to model each effect as fixed or random.

So here are the implications of that choice.

The variance across each level of a random effect is included as a source

of error in the model.

So when I construct a t-statistic,

I take the estimate of the effect and divide it by its standard error, and

that standard error includes variability from individual to individual or

from level to level of anything I've modeled as a random effect.

And this allows us to generalize to unobserved levels.

If an effect is treated as fixed,

error terms in the model don't include variability across those levels.

So, we can't generalize to unobserved levels in that case.

The upshot of this is that if I treat subject as fixed, I cannot generalize

to new subjects, which is something that we virtually always want to do in science.

It's hard to imagine a case where we don't want to generalize to

other people besides the ones we actually included in our study, that's science.

So this is a group analysis using a summary statistics approach,

which is a simple kind of random effects model.

And this is the one that's used most of the time.

So on the left, what you see is a first level analysis which is

a GLM within each person, and I take that forward to compute a contrast and

come up with a contrast image for each person.

Then I go to the second subject, and I repeat that.

Now I take the contrast images from all of those individual people, and

I put them into a second-level design matrix.

And what you see here is an image of what that design matrix looks like in this

case, and the design matrix looks like a white square because it's a constant.

It's all values of 1, and that's just a one-sample t-test.

So I conduct that, and then I can get a group result, and then I make inferences.
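The pipeline just described can be sketched at a single voxel with simulated data. This is a minimal illustration, not any package's actual implementation; the design, noise levels, and effect sizes below are made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-voxel setup: 12 subjects, 80 scans each,
# a 10-on/10-off block regressor (all numbers are assumptions).
n_sub, n_scan = 12, 80
task = np.tile(np.repeat([1.0, 0.0], 10), 4)
X1 = np.column_stack([np.ones(n_scan), task])  # first level: intercept + task

# First level: fit a GLM within each subject and keep the task effect
# (the "contrast image" reduces to one number at a single voxel).
contrasts = []
for _ in range(n_sub):
    beta_true = 1.0 + rng.normal(0.0, 0.5)     # subject's true response magnitude
    y = X1 @ np.array([100.0, beta_true]) + rng.normal(0.0, 2.0, n_scan)
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    contrasts.append(coef[1])

# Second level: the design matrix is a column of ones,
# so the group model is just a one-sample t-test across subjects.
c = np.asarray(contrasts)
t_stat = c.mean() / (c.std(ddof=1) / np.sqrt(n_sub))
```

Because the second-level error is the spread of the per-subject estimates, it automatically includes between-subject variability, which is what makes this a simple random effects analysis.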

Now this is the most common approach because it has several

important advantages.

One, it's easy to do.

It's easy to add new subjects or participants later and

rerun the group analysis, for example.

It's optimal if the within-person precisions are all equal for every person,

and that implies that the design matrices are identical and their efficiency is identical.

We'll talk more about that in future lectures.

And the error variances are all equal.

It's fairly robust to violations of some of those assumptions in terms of

false positives, but we can lose sensitivity in some cases.

So that's a schematic overview of what the random effects analysis looks like.

Here's a schematic of what a fixed effects analysis looks like,

which is the wrong approach.

So this is what I call the grand GLM approach, and

this was done in the very early days of neuroimaging.

And this is a GLM on data that's concatenated across subjects.

So what you see here is an image of the design matrix

where I've got this block on-off design modeled separately for each subject.

So every subject gets one estimate for their individual slope.

I've got the intercepts for each subject, and I've got some nuisance covariates,

some high-pass filtering covariates, for each subject.

And there are three example subjects here, so I've concatenated all of them.

So I'm making a number of assumptions here.

So every subject gets their own slope, but

when I calculate the error in that model, I'm going to average across subjects, and

I'm going to compare that to the error within subject,

the error on the time series only.

So this assumes that the only source of error is within-person

scanner noise, and that's not accurate.

So this tests the mean effect against that within-subject error, and

it doesn't account for the individual differences at all.

So even though I'm coming up with one estimate of the slope per subject,

that doesn't get reflected in the error term,

and I'm not making inferences that can be used to generalize to a population.
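To see the consequence concretely, here is a small simulation contrasting the two error terms (my own toy numbers, not from the lecture): the grand-GLM t-statistic tests the mean slope against within-subject scan noise only, while the summary-statistics t-statistic uses the spread of the subject-level slopes.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed toy setup: true group effect 0.5, genuine individual
# differences (sd 0.8) plus scan noise (sd 1.0) -- two variance components.
n_sub, n_scan = 12, 60
x = np.tile(np.repeat([1.0, 0.0], 10), 3)       # on/off block regressor
X1 = np.column_stack([np.ones(n_scan), x])
true_betas = 0.5 + rng.normal(0.0, 0.8, n_sub)  # each subject's true slope

b_hat, resid_vars = [], []
for b in true_betas:
    y = b * x + rng.normal(0.0, 1.0, n_scan)
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    b_hat.append(coef[1])
    resid_vars.append(((y - X1 @ coef) ** 2).sum() / (n_scan - 2))
b_hat = np.asarray(b_hat)

# Random effects (summary statistics): the error term is the
# between-subject spread of the estimated slopes.
t_random = b_hat.mean() / (b_hat.std(ddof=1) / np.sqrt(n_sub))

# Fixed effects (grand GLM): the error term is the pooled
# within-subject scan noise only -- individual differences are ignored.
sxx = ((x - x.mean()) ** 2).sum()
se_fixed = np.sqrt(np.mean(resid_vars) / (n_sub * sxx))
t_fixed = b_hat.mean() / se_fixed
```

When genuine individual differences exist, the fixed-effects t-statistic is typically much larger in magnitude than the random-effects one, because its error term leaves out the between-subject variance component; that is why it cannot support population inference.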

That's the end of this module.

In the next module Martin's going to talk more about

the multi-level GLM from a statistical or structural modeling perspective.
