This course covers the analysis of Functional Magnetic Resonance Imaging (fMRI) data. It is a continuation of the course “Principles of fMRI, Part 1”


From the course by Johns Hopkins University

Principles of fMRI 2



From the lesson

Week 3

This week we will focus on brain connectivity.

- Martin Lindquist, PhD, MSc, Professor, Biostatistics, Bloomberg School of Public Health | Johns Hopkins University
- Tor Wager, PhD, Department of Psychology and Neuroscience, The Institute of Cognitive Science | University of Colorado at Boulder

So we often use multivariate decomposition

methods to study functional connectivity.

These methods provide a decomposition of the data into separate components.

And they can be used to find coherent brain networks and

provide information on how different brain regions interact with one another.

The most common decomposition methods are principal components analysis,

or PCA, and independent components analysis, or ICA.

So throughout, we're going to organize the fMRI data in an MxN matrix X.

The row dimension is the number of time points, and

the column dimension the number of voxels.

So we're going to just put all the data together in a time by voxel matrix.

So here, in contrast to, say, the GLM, where we analyze

one voxel at a time, here we're going to analyze all the voxels simultaneously.

So principal components analysis is a multivariate procedure

concerned with explaining the variance-covariance structure of a high

dimensional random vector.

So, in PCA, a set of correlated variables is transformed into a set of uncorrelated

variables, ordered by the amount of variability in the data that they explain.
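As a loose illustration of that idea, here's a toy NumPy sketch (not from the course; the data and sizes are made up): the principal component scores of a correlated data set come out uncorrelated, with variances in decreasing order.

```python
import numpy as np

# Toy data: three correlated variables over 200 observations.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3)) @ np.array([[1.0, 0.5, 0.2],
                                              [0.0, 1.0, 0.4],
                                              [0.0, 0.0, 1.0]])
Xc = X - X.mean(axis=0)  # mean-center before PCA

# Principal component scores via SVD of the centered data.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# The scores are uncorrelated: their covariance matrix is diagonal,
# with variances in decreasing order.
cov = np.cov(scores, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
assert np.allclose(off_diag, 0, atol=1e-10)
assert cov[0, 0] >= cov[1, 1] >= cov[2, 2]
```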

So, in fMRI, principal components analysis involves finding the spatial modes or

eigenimages in the data.

These are the patterns that account for

most of the variance-covariance structure in the data.

And they're ranked in order of the amount of variation that they explain.

The eigenimages can be obtained using singular value decomposition, or

SVD, which decomposes the data into two sets of orthogonal vectors

that correspond to patterns in space and time.

So the singular value decomposition is an operation that decomposes the matrix

X into three other matrices.

So we write X = USV transpose,

where V transpose V is equal to the identity matrix,

and U transpose U is also equal to the identity matrix.

And S is a diagonal matrix, whose elements are called singular values.

So pictorially, we can represent the singular value decomposition as follows. We take our matrix X, which, again, is time by voxels, and separate it into the three matrices U, S, and V transpose.

So here, what I'm going to claim is that the columns of V, or

the rows of V transpose, are the eigenimages.

And the columns of U represent the corresponding time courses.

So these are the time courses that correspond to the respective eigenimages.

So we can write this as X = USV transpose, but

because S is diagonal,

we can also decompose X into a sum of rank-one terms, one for each column of U and V.

We can write X = s1u1v1 transpose + s2u2v2 transpose, etc., for each of the subsequent columns.
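The decomposition and its rank-one expansion can be checked numerically. A minimal NumPy sketch, using a small random matrix in place of real fMRI data:

```python
import numpy as np

# Toy "data": M = 50 time points by N = 200 voxels (sizes are illustrative).
rng = np.random.default_rng(0)
M, N = 50, 200
X = rng.standard_normal((M, N))

# Singular value decomposition: X = U S V^T.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Each row of Vt is an eigenimage; each column of U is its time course.
# Because S is diagonal, X equals the sum of rank-one terms s_i * u_i * v_i^T.
X_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(len(s)))
assert np.allclose(X, X_rebuilt)
```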

Here we see a real-data example.

Here we have X as the first image, and this can be decomposed into a number

of sub-matrices, as indicated on the previous slide.

The first sub-matrix consists of s1, which is a scalar times u1,

which is the time course corresponding to the first eigenimage, and

v1 transpose, which is the first eigenimage.

Then we have the second sub-matrix, which is s2 times u2,

which is the time course corresponding to the second eigenimage, times

v2 transpose, which is the second eigenimage, etc.

Now each of these v's has length equal to the number of voxels, and they can be

reshaped into images corresponding to the spatial modes here.

So here we see the first eigenimage, which is v1 transpose,

and we have u1, which is the corresponding time course.

Similarly, we get the second eigenimage and its corresponding time course,

etc, etc.

So if we do this, we can get several different

temporal components corresponding to each of the columns of U.

And we get the corresponding eigenimages below.

And here's an example of a PCA analysis.

Here we see the first four eigenimages on the bottom.

In the bottom panel, we see four rows, one for

each eigenimage, and on the top panel, we see the corresponding time courses.

And to the right of these time courses, you see percentages.

Those percentages are the percent of variation explained by each

of the components.

And they're related to the values of S in the singular matrix.
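Concretely, the percent variance explained by each component can be computed from the singular values. A small sketch on toy data (assuming the data have been mean-centered first):

```python
import numpy as np

# Toy data: 50 time points by 200 voxels (sizes are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 200))
Xc = X - X.mean(axis=0)  # mean-center so s_i^2 reflects variance

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Percent variance explained by each component, from the singular values.
pct = 100 * s**2 / np.sum(s**2)
assert np.isclose(pct.sum(), 100.0)
assert np.all(np.diff(pct) <= 1e-9)  # components are ranked in decreasing order
```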

So Independent Component Analysis, or ICA, is a family of techniques used to extract

statistically independent signals from a set of mixed source signals.

ICA provides a method to blindly separate the data into spatially independent

components.

Here the key assumption is that the data set consists of p

spatially independent components, which are linearly mixed but spatially fixed.

The ICA model differs a little bit from what we used in PCA,

here, the matrix X is decomposed into two matrices, A and S.

Here, A is referred to as the mixing matrix and S the source matrix.

So our goal is ultimately to use this information to find an un-mixing matrix W,

such that Y=WX provides a good approximation to S,

which are these independent sources.

If the mixing matrix is known, the problem is straightforward and almost trivial.

However, ICA tries to solve this problem without knowing the mixing parameters.

So instead, what it does is it exploits some key assumptions.

First it assumes that there's linear mixing of sources.

Then it assumes that the components si are statistically independent of one another.

And it assumes that the components are non-Gaussian, or at most,

one can be Gaussian.

When applying ICA to

fMRI, we assume that the fMRI data can be modeled by identifying sets of voxels

whose activity varies over time and differs from the activity in other sets.

We try to decompose the data into spatially independent component maps

with a set of corresponding time courses.

Here's a cartoon of the model. We have X, which, again,

is time by voxels; that's our data.

And we have two matrices, A and S.

So again, S represents the spatially independent components, one for each row.

The columns of A represent the time courses

corresponding to the spatially independent components.

So the first column corresponds to the first row of S.

And what we want to do here is to use an ICA algorithm to find both A and S.
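As a rough sketch of how this can be done in practice (not the specific algorithm used in the course), scikit-learn's FastICA can recover A and S from synthetic, non-Gaussian spatial sources. Treating voxels as samples makes the recovered sources spatial maps; all names and sizes here are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy spatial ICA: 5 non-Gaussian spatial sources mixed into a
# 100-time-point by 500-voxel data matrix (all sizes illustrative).
rng = np.random.default_rng(0)
S_true = rng.laplace(size=(5, 500))      # spatially independent maps
A_true = rng.standard_normal((100, 5))   # mixing matrix / time courses
X = A_true @ S_true                      # time-by-voxel data

# Pass X.T so voxels are the samples; the sources are then spatial maps.
ica = FastICA(n_components=5, random_state=0, max_iter=1000)
S_hat = ica.fit_transform(X.T).T   # (components, voxels): spatial maps
A_hat = ica.mixing_                # (time points, components): time courses

assert S_hat.shape == (5, 500)
assert A_hat.shape == (100, 5)
```

Note that ICA recovers the components only up to sign, scale, and ordering, which is one reason the components must be inspected after the fact.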

Here's an example of fitting ICA, and here are two different components.

So these correspond to two different spatial components

from the ICA decomposition.

In the first, we see a task-related component, and in the second,

we see a more noise-related component.

And here, you can tell that this is a noise component by the large amount of activation

around the edges of the brain, which is probably due to motion-related artifacts.

Here is an example of eight of the most common and

consistently identified resting state networks, which are identified by ICA.

So we looked at these in the previous lecture, on resting-state fMRI.

We showed these results, and

here we can now come back to them and say they were obtained using ICA.

So what are some differences between PCA and ICA?

Well, PCA assumes an orthonormality constraint.

In contrast,

ICA assumes statistical independence among a collection of spatial patterns.

So independence is a stronger requirement than orthonormality.

However, in ICA, the spatially independent components are not ranked in order of

importance, such as they are when performing PCA.

So it behooves you to go through all the components after the fact, and find out

which ones are important and which ones are just related to noise and whatnot.

Okay, so that's the end of this module.

Here we've introduced principal component analysis and

independent component analysis.

So these are two ways of taking the full time by voxel data and

finding interesting patterns of activation in it.

And so these are commonly used in functional connectivity analysis.

Okay, in the next module, we'll talk a little bit about dynamic connectivity.

See you then, bye.

