This is a five-section course as part of a two-course sequence in Research Methods in Psychology. This course deals with descriptive methods and the second course deals with experimental methods.


From the Georgia Institute of Technology course

Descriptive Research Methods in Psychology


From the lesson

Module 2: Observation

- Dr. Anderson D. Smith, Regents’ Professor Emeritus

School of Psychology

[SOUND] Hello,

Anderson Smith here.

And this is a course on descriptive research methods.

In this section, we're going to talk about observation methods.

But before we do that,

I want to talk a little bit more about descriptive statistics,

the statistics that are used to better describe the data that we're looking at when we're measuring behavior.

Now, we've already talked about measures of central tendency: things like the mean, which is simply the average score in a group of scores; the median, which is the middle score; and the mode, which is the most frequent score in a distribution.

We've also talked about variability: things like the range, which is simply the difference between the highest and the lowest score; the variance, which is the average squared difference between each score and the mean of the distribution; and the standard deviation, which is simply the square root of the variance. These measure the extent of the variability among the scores in a distribution.
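The measures named above can be sketched in Python with the standard library's `statistics` module; the scores here are made up purely for illustration:

```python
from statistics import mean, median, mode, pstdev, pvariance

# Hypothetical scores, just to illustrate the measures described above.
scores = [2, 4, 4, 5, 7, 8]

print(mean(scores))               # central tendency: 5
print(median(scores))             # middle score: 4.5
print(mode(scores))               # most frequent score: 4
print(max(scores) - min(scores))  # range: 6
print(pvariance(scores))          # population variance: 4
print(pstdev(scores))             # standard deviation (sqrt of variance): 2.0
```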

Today we're going to talk about Z scores, and we'll talk about correlations.

Now Z-scores are really a measure that looks at the distance between any one

score and the population mean.

And we do that in units of standard deviations, so we can compare scores from different distributions.

A z-score distribution has a mean of zero and a standard deviation of one, because that's the way in which we scale the particular measures.

Now a z-score tells us where in a distribution a particular score falls.

And so, it's a technique which is used a lot to be able to compare, like I said,

scores from different distributions.

So the z-score is the score minus the mean, divided by the standard deviation, because it is in standard deviation units.

Now let me give you an example, here is a picture of a distribution,

a normal distribution, and you can see the mean is the average of the scores in the distribution. Then we use standard deviation units: 1 standard deviation away from the mean would be a z-score of 1, and 2 standard deviations away from the mean would be a z-score of 2.

And we know from the normal distribution that from 1 standard deviation above the mean to 1 standard deviation below it, there is 68% of all the scores in that distribution.

Knowing that allows us to compute the z-score.

For example, Z again, is the score minus the mean,

divided by the standard deviation.

So suppose we had a score of 70 on some particular scale, where the mean is 60 and the standard deviation is 15.

Then we compute the z-score, which is simply the score minus the mean, 10, divided by the standard deviation, 15, which gives .667.

So the standard score, or z-score, is .667 for this particular measure. Let me show that again.

We have a mean, an average of 60.

The score is 70 in the distribution, which means that the z-score, the score minus the mean divided by the standard deviation, is .667.
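The z-score computation from the example above can be sketched as a small Python function (the function name is illustrative, not from the lecture):

```python
def z_score(score, mean, sd):
    """Distance of a raw score from the mean, in standard deviation units."""
    return (score - mean) / sd

# The lecture's example: a score of 70, with a mean of 60 and an SD of 15.
z = z_score(70, 60, 15)
print(round(z, 3))  # 0.667
```

Because the z-score is in standard deviation units, the same function lets you compare scores drawn from distributions with different means and spreads.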

Then, knowing the amount of the distribution that falls between the mean and one standard deviation, we can compute the probability that any score is above or below the z-score that we're using.

So we know, for example, that below the mean we have 50% of the distribution.

And we know that above the mean to one standard deviation,

we have 34% of the distribution.

So we take the z-score, .667, times .34, the proportion of the distribution that falls between the mean and one standard deviation. Adding that to the 50% below the mean, we can compute the amount of the distribution that falls below the score that we have: in this case, 72.7% of the scores fall below it.

So again, z is the mean subtracted from the score, which is 10, divided by the standard deviation, 15, so the z-score is .667.

So the percentage below the z-score is 50% of the distribution, plus .667 of the amount of the distribution that falls between zero and one standard deviation. And that computes to 72.7%, the percentage of the distribution that falls below the score.

And then, if you subtract that from 100, you get the percentage of scores, 27.3%, that fall above the score that we have.
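The percentage calculation above can be checked in Python. Note that the lecture's shortcut treats the area between the mean and one standard deviation as growing linearly with z, multiplying z by 34%; the exact figure comes from the normal cumulative distribution function, which gives a slightly different answer (about 74.8%):

```python
from statistics import NormalDist

z = (70 - 60) / 15  # 0.667, as computed in the lecture

# Lecture's shortcut: 50% below the mean, plus z times the 34%
# of the distribution lying between the mean and one standard deviation.
approx_below = 0.50 + z * 0.34
print(round(approx_below * 100, 1))  # 72.7

# Exact percentage below, from the standard normal CDF.
exact_below = NormalDist().cdf(z)
print(round(exact_below * 100, 1))
```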

So z-scores are very useful then,

in looking at where in the distribution a particular score falls.

So we have central tendency, variability, and the z-score, sometimes called the standard score. Then we also have correlations.

A correlation is where we look at the association between any two variables that we measure.

An example would be reported life satisfaction and health in a survey: we rate health, we rate life satisfaction, and then we look at the correlation, the relationship, between those two variables.

Or we could look at multi-tasking ability as an adult and

whether you played video games when you were a child, when you were ten years old.

Again a correlation between two variables, a relationship between two variables.

Another would be, what is the relationship between family income and

working memory performance when you were 18 years of age?

If there's a relationship between those two variables,

it would show up in a correlation.

Now, again, a correlation asks to what extent two or more variables co-vary, whether they have a relationship.

The important point here is that correlation shows a relationship but

does not imply causation.

Let me give you an example of that. Not too long ago there was a paper that showed that playing video games is related to acquired capability for suicide. And the implication of this paper was that playing video games actually had some causal relationship with capability for suicide when you were older. The paper treated that relationship as causal, but all we know is that there's a relationship; it could just as easily be that acquired capability for suicide causes someone to play video games earlier in life. There's a bidirectionality problem that we find there.

Also it could be that a third variable is accounting for that.

A third variable that we're not measuring could actually be accounting for

both of the effects that we see.

And that's a problem when you use correlations.

Let me give you another example.

Here's the directionality problem: sleep and stress are related; there's a correlation between the amount of sleep that you have and the amount of stress that you have. That is, the more stress you have, the less sleep; the less stress you have, the more sleep. But we don't know which causes which: maybe lack of sleep causes more stress, or stress causes insomnia. The correlation does not tell us causation.

There's also the third-variable problem; let's look at that.

Maybe texting while driving is correlated with driving accidents.

And the implication is, if you text while driving, you're more likely to have accidents.

Well, maybe there's a third variable, which is risk taking.

And risk taking causes some people to text while driving.

And risk taking also causes people to drive dangerously.

So it's the variable of risk taking, which produces those two effects,

not the one effect causing the other or vice versa.

So we have to be careful interpreting a correlation because of these two factors: we probably don't know the direction of the effect, and it could always be that a third variable is really producing the effect that you're looking at.

The correlation coefficient is a quantitative measure

of the relationship between two variables.

And the correlation coefficient, which is usually denoted by the letter r, can go from -1.0, a perfect negative correlation, to +1.0, a perfect positive correlation.

And the way we look at that is we take a score on one variable and

a score on the other variable.

We plot those for each individual on a scatter plot of the two variables.

For example, here's where we see it: each one of these dots represents an individual, with a score on one variable and a score on the other variable, represented by the two axes.

And, as you can see, if a person has a high score on one variable, they have a low score on the other variable. That's a negative correlation. In this case, the dots fall on a perfect straight line, which is a perfect correlation of -1.0.

As the scatter plot shows a little bit more variability, the correlation goes down: here's a correlation of -0.70; a little bit less of a relationship, -0.40. And then here, we see no linear relationship at all, a correlation of 0.

The two variables are not related. Or the two variables can be positively related: the trend line would go in a positive direction, and we see a correlation of +0.3; a little bit more, +0.8; and then a perfect correlation, +1.0, where every dot falls right on the same line, showing that as one variable increases, the other variable increases.

Now the correlation coefficient varies depending on what the two variables are.

For example, here's a positive correlation between a parent's IQ and a child's IQ: because there's a genetic component in the development of intelligence, we should see a positive correlation between those two.

Here's an example of a negative correlation.

The time it takes you to read War and Peace is going to depend upon your reading ability. So as your reading ability goes up, the time it takes you to read War and Peace goes down. Again, a negative correlation.

And here's no correlation at all: rated book quality has nothing to do with book length, except for my six-year-old, who thinks the smaller the book, the better it is. But that's not usually the case in rating the quality of a book, so we see no correlation at all between these two variables.

So, correlation then,

is the relationship, measured in z-scores, between the two variables.

Now, there are lots of formulas for computing r,

computing correlation and I'm just giving you a very simple one here.

But basically what you're doing is getting a z-score for each subject on both of the variables, x and y, multiplying the paired z-scores, summing those products, and then dividing by the number of subjects that you have, and that produces r. This is a very simple formula for the correlation coefficient; the formulas used in practice are usually more complicated, to get a precise measurement of what this coefficient actually is.
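That simple z-score formula, r = Σ(z_x · z_y) / N, can be sketched as follows; the data here are made up purely for illustration:

```python
from statistics import mean, pstdev

def pearson_r(xs, ys):
    """r = sum of paired z-score products, divided by N (population SDs)."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    return sum((x - mx) / sx * (y - my) / sy for x, y in zip(xs, ys)) / len(xs)

# Made-up scores: as x goes up, y goes down in a perfectly straight line,
# so r should come out as a perfect negative correlation.
x = [1, 2, 3, 4, 5]
y = [10, 8, 6, 4, 2]
print(round(pearson_r(x, y), 2))  # -1.0
```

Changing the data so the dots scatter away from a straight line would pull r toward 0, matching the scatter plots described earlier.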

And here just some examples of correlations.

Smoking and lung cancer, .394. A negative one: condom use and AIDS, -.202. Another negative one: lead exposure and IQ, -.121. And a positive one: nicotine patch use and smoking cessation, .150.

These are correlations that have been

computed based on the relationship between two variables.

So those are the statistics that are used to help describe

what we measure when we're using descriptive methods.

Thank you.
