This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”


From the course by Johns Hopkins University

Principles of fMRI 2



From the lesson

Week 3

This week we will focus on brain connectivity.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

So in the past few modules we've been talking about functional connectivity.

Now we're going to shift focus to effective connectivity.

So effective connectivity is defined as the directed influence of one brain region

on the physiological activity recorded in other brain regions.

These methods aim to make statements about causal effects among tasks and regions.

Usually, effective connectivity analyses make anatomically motivated assumptions and restrict inference to networks comprising a number of pre-selected regions of interest.

So in this particular example, we have three regions of interest here.

And we believe that activation in V1 leads to activation in V5,

which leads to activation in PPC.

So notice the directed graph here.

Methods for performing effective connectivity include structural equation

models and also mediation and moderation, Granger causality,

dynamic causal modeling, and Bayes nets.

Let's start by talking a little bit about structural equation models.

Here structural equation models comprise a set of regions and

a set of directed connections.

So here we have three regions, A, B and C and three links between the regions.

So there's an arrow from A to C, from A to B, and from B to C.

Path coefficients are defined between these pairs of nodes,

describing the strength of the connections.

So here we have path coefficients bAC, bAB, and bBC.

Directions are assumed a priori and are often given a causal interpretation.

And we'll talk more about that in subsequent modules.

So mathematically how do we deal with this type of problem?

Well, we can set up the problem as follows.

So the signal over A at a certain time point is yA.

The signal over B is yB.

And the signal over C is yC.

So we have a vector here, (yA, yB, yC), which contains the measurements at a certain time point.

We can now set up the following model where we linked the activation A, B,

and C to the other nodes.

And this can be written as y(t) = M y(t) + e(t), where e(t) is normally distributed with mean 0 and variance-covariance matrix R.

So basically, what we're saying here is that activation at time t,

represented by y of t, depends on the other nodes through this matrix M.

If we change the structural equation model and change the directions of the connections, we would change the matrix M.

So M contains all the path coefficients of interest.

Now we can rewrite this equation slightly to get all the y(t) terms on the same side of the equation.

We can do this by subtracting M y(t) from both sides and then multiplying by the inverse of (I - M).

So then we get that y(t) = (I - M)^(-1) e(t).

We can now compute the covariance matrix of y(t) as Cov(y(t)) = (I - M)^(-1) R (I - M)^(-T).

And this is now an equation that depends on M, which contains the path coefficients, and also on R, which is the variance-covariance matrix of the errors.

Now, the parameters theta here are the unknown elements of the matrices M and R.

And those are the elements that we want to estimate.

In particular, we're very interested in estimating the elements of M,

because that's going to tell us about the strength of the relationship

between the different regions.

Now the covariance of the data represents how activities in these two or

more regions are related.

And in SEM, the structural equation models, we seek to minimize the difference

between the observed covariance matrix, the one that we see from the data,

and the one that's implied by the structure of the model.

That's the one I showed on the previous slide.

And so the parameters of the model are adjusted to minimize the difference

between the observed and the implied covariance matrix.

And typically maximum likelihood estimation is used to estimate

the parameters.
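To make the fitting step concrete, here is a minimal sketch in NumPy/SciPy. The three-region model (A to B, A to C, B to C), the path values in M_true, and the plain least-squares discrepancy are all illustrative assumptions; actual SEM software typically uses a maximum likelihood discrepancy function instead.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical path matrix M: M[i, j] is the influence of region j on
# region i, for regions ordered (A, B, C). Values are made up.
M_true = np.array([[0.0, 0.0, 0.0],
                   [0.6, 0.0, 0.0],    # A -> B
                   [0.4, 0.5, 0.0]])   # A -> C, B -> C
R = np.eye(3)                          # error variance-covariance matrix

def implied_cov(M, R):
    """Model-implied covariance: (I - M)^-1 R (I - M)^-T."""
    inv = np.linalg.inv(np.eye(3) - M)
    return inv @ R @ inv.T

# Simulate y(t) = (I - M)^-1 e(t) and form the observed covariance.
E = rng.multivariate_normal(np.zeros(3), R, size=20000)
Y = E @ np.linalg.inv(np.eye(3) - M_true).T
S_obs = np.cov(Y, rowvar=False)

# Adjust the three free path coefficients to minimize the discrepancy
# between the observed and the implied covariance matrix.
def loss(theta):
    M = np.zeros((3, 3))
    M[1, 0], M[2, 0], M[2, 1] = theta   # bAB, bAC, bBC
    return np.sum((S_obs - implied_cov(M, R)) ** 2)

fit = minimize(loss, np.zeros(3))
print(np.round(fit.x, 2))  # estimates should be close to (0.6, 0.4, 0.5)
```

With enough time points the recovered coefficients approach the values used to generate the data, which is the basic logic of SEM estimation.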

So that's structural equation modeling.

Another method that's often used in neuroimaging and functional MRI is dynamic causal modeling.

And so we're going to talk about this in more detail in subsequent modules, but

right now I'm just going to give you a brief overview.

So DCM attempts to model latent neuronal interactions using

the hemodynamic time series that we measure using fMRI.

And this is based on a neuronal model of interacting regions, supplemented with a forward model describing how the neuronal activity is transformed into the hemodynamic responses.

Here effective connectivity is parameterized in terms of the coupling

among the latent neuronal activity in the different regions of the brain.

We can estimate these parameters by perturbing the system and observing the response.

Here's an example of a simple dynamic causal model.

We have two regions, which we call 1 and 2.

And we have neuronal activation z1 in region 1, and z2 in region 2.

Now, let's say that there's a link from z1 to z2, and from z2 to z1.

Those are the arrows that we see there.

If we perturb this system with two perturbations here,

we have u1 which affects region 1 and

u2 which affects the relationship between regions 1 and 2.

Then, basically, these perturbations give rise to changes in the neuronal activation, which is latent to us because we can't measure it directly with fMRI.

But in turn this gives rise to changes in the hemodynamic response

which we denote by y1 and y2 which we can measure.

So the whole idea here is that we propose a model like this.

And then we perturb the system, and then we measure the hemodynamic response.

And then we try to back track and figure out what was the underlying latent

neural activation that could have given rise to this hemodynamic response.

So that's sort of the general idea behind DCM.

And so basically one uses a bilinear state equation to model

a cognitive system at the neuronal level.

And the modeled neuronal dynamics is then transformed into BOLD signals using

a hemodynamic forward model.

And we'll talk about all these things in a later module.

And so the aim of DCM is to estimate parameters at the neuronal level such that the modeled BOLD signals are maximally similar to the experimentally measured BOLD signals, okay?

So, we're going to get a BOLD signal through the model.

And we try to tweak the parameters of the model to make the modeled BOLD response as similar as possible to the observed BOLD response.
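To make the bilinear idea concrete, here is a toy simulation of a neuronal state equation of the form dz/dt = (A + u2·B) z + C·u1, in the spirit of the two-region example above. All matrix values and input timings are invented for illustration, and the hemodynamic forward model that would map z to BOLD is omitted.

```python
import numpy as np

# Illustrative bilinear state equation: dz/dt = (A + u2(t)*B) z + C*u1(t).
A = np.array([[-1.0, 0.2],
              [0.3, -1.0]])      # intrinsic coupling between regions 1 and 2
B = np.array([[0.0, 0.0],
              [0.5, 0.0]])       # u2 modulates the region 1 -> region 2 link
C = np.array([1.0, 0.0])         # u1 drives region 1 directly

dt, T = 0.01, 10.0
steps = int(T / dt)
z = np.zeros((steps, 2))         # latent neuronal states z1, z2

u1 = lambda t: 1.0 if 1.0 <= t < 3.0 else 0.0   # brief driving input
u2 = lambda t: 1.0 if t >= 5.0 else 0.0          # modulation switches on later

for k in range(steps - 1):       # simple forward Euler integration
    t = k * dt
    dz = (A + u2(t) * B) @ z[k] + C * u1(t)
    z[k + 1] = z[k] + dt * dz

# A hemodynamic forward model (e.g. a balloon-type model) would then map
# z1, z2 to the measured BOLD signals y1, y2; here we just report peaks.
print(z[:, 0].max(), z[:, 1].max())
```

The point of the sketch is only the structure: driving inputs enter through C, modulatory inputs change the coupling through B, and only a transformed version of z would ever be observed.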

The final technique that we're going to talk about is Granger causality.

Granger causality is a technique that was originally developed in the econometrics literature, but has recently been applied to neuroimaging data as well.

Now Granger causality doesn't rely on an a priori specification of a structural model like the two other methods I just mentioned.

Rather, it is an approach for quantifying the usefulness of past values from various brain regions in predicting current values in other regions.

Now let's take a look at an example.

Let's let x and y be two time courses of length T.

And now let's model each of these time courses

using a linear first order autoregressive model.

So for example, say x[n] = a times x[n - 1] + an error term.

So basically, x depends on its past value through the term a.

Similarly, y[n] = some constant b times its own value at time n - 1 + an error term.

So this is called a first order autoregressive model.

So this is just telling us how the current value of a time course

depends on its past.

Now in Granger causality we expand this using autoregressive terms from the other signal.

So basically in the second model we now say,

well x at time n depends on the past value of x.

But it also depends on the past value of y through this linear equation.

And then similarly, y depends not only on its own past value,

but the past value of x.

And so, the idea behind Granger causality is to test whether the history of x has

predictive value on the current value of y and vice versa.

So basically, going back to those two equations, we want to see if by including

the other time course into the equation, we get a better predictive model.

And if the model fit is significantly improved by the inclusion of these

cross-autoregressive terms, it provides evidence that the history of

one of them can be used to predict the current value of the other.

And thus a Granger-causal relationship is inferred.
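The comparison of the two equations can be sketched as follows. The simulated coupling strength, the first-order lag, and the plain F-test formulation are illustrative assumptions; applied work often uses higher-order lags and dedicated packages.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate two series where x Granger-causes y (coupling 0.6 is made up).
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.normal()

def rss(target, predictors):
    """Residual sum of squares of an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
    resid = target - predictors @ beta
    return float(resid @ resid)

# Restricted model: y[n] ~ y[n-1].  Full model: y[n] ~ y[n-1] + x[n-1].
Y = y[1:]
restricted = rss(Y, np.column_stack([y[:-1]]))
full = rss(Y, np.column_stack([y[:-1], x[:-1]]))

# F-statistic for adding the single cross-autoregressive term: a large
# value means the history of x improves the prediction of y.
F = (restricted - full) / (full / (len(Y) - 2))
print(F)
```

If the F-statistic is significant, the history of x has predictive value for the current value of y, and a Granger-causal relationship from x to y is inferred; the reverse direction is tested the same way with the roles swapped.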

So that's the end of this module.

We've just talked a little bit about effective connectivity and

talked about different models to assess effective connectivity.

In the next couple of modules we'll look at mediation and moderation.

And then we'll take a more in-depth, under-the-hood look at DCM and Granger causality.

Okay, I'll see you then.

Bye.

