0:00
Hi, we're talking about validity.
And today, I want to talk about external validity.
We've already discussed internal validity,
whether there are confounding variables or not,
and deciding what our independent variables are.
We've talked about construct validity,
and that can be the content: is it valid?
Does it predict what it's supposed to predict?
And does it discriminate between what it predicts
and what it's not supposed to predict?
And now, we want to talk about external validity.
External validity is basically
whether the differences we find in
the experiment generalize to other situations.
That is, does the causal inference we make from our experiment generalize
to other situations in which that effect is supposed to occur?
That's called external validity.
In other words, can the results of this study
be generalized to other samples?
Will they generalize to other situations in which we find ourselves?
This is external validity:
whether it generalizes or not.
And that's important because if it doesn't generalize,
it might be because our experiment was designed for a very,
very specific situation,
which doesn't apply to other kinds of situations.
So, external validity is how well can we generalize.
And there are different ways in which we can do that.
Does it generalize to the larger population?
Remember, we're just using a sample in our experiment.
Does that generalize to
the larger population we want our sample to represent?
That's one form of external validity.
The second way would be to generalize to other settings.
We use a very specific setting, maybe in
the laboratory, where we have people sitting in a chair,
watching a computer screen.
Given the construct that we're studying,
will that generalize to other situations?
And that's called ecological validity.
So we have population validity,
and we have ecological validity.
And then we can also ask whether results generalize
to other independent variables measuring the same construct.
The independent variable is the manipulation we
chose in order to study the hypothetical construct we want to study.
What about other manipulations of the same construct?
And then we also have the outcome, that is, the dependent measure
we selected to measure a particular behavior.
Does that generalize to other measures of that same behavior?
So outcome validity, design validity,
ecological validity, and population validity.
Different ways of asking the question,
how well do our results generalize to other situations?
Take my memory research;
memory research is often criticized.
For example, most memory research uses college students.
And college students are used to sitting at a desk, having
some information presented to them in some particular kind of way.
Does that generalize to the population as a whole?
Is that a good sample to use?
It's my convenience sample of people
here at my university whom I can use.
Does that generalize to the community as a whole?
That's a good question,
and that's the notion of population validity.
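A tiny simulation can make this concrete. This is just an illustrative sketch with made-up numbers, not data from any actual study: assume memory scores decline with age in the population. A convenience sample drawn only from college-aged participants then overestimates the population's average memory score, while a random sample from the whole population does not.

```python
import random

random.seed(42)

# Hypothetical population, ages 18-80.
# Assumed model: score = 100 - 0.5 * (age - 18) + noise.
population = []
for _ in range(100_000):
    age = random.randint(18, 80)
    score = 100 - 0.5 * (age - 18) + random.gauss(0, 5)
    population.append((age, score))

pop_mean = sum(s for _, s in population) / len(population)

# Convenience sample: college-aged participants only (18-22).
students = [s for a, s in population if a <= 22]
conv_mean = sum(random.sample(students, 200)) / 200

# Random sample drawn from the whole population.
rand_mean = sum(random.sample([s for _, s in population], 200)) / 200

print(f"population mean:    {pop_mean:.1f}")
print(f"convenience sample: {conv_mean:.1f}")  # biased upward
print(f"random sample:      {rand_mean:.1f}")
```

Under these assumptions the convenience sample is off by roughly the age effect it excludes, which is exactly the population-validity worry: the estimate is fine for college students, but it doesn't generalize to everybody.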
Ecological validity can be
illustrated by Dick Neisser, who was at Emory for a very long time.
A colleague of mine who is sometimes called
the grandfather of cognitive psychology.
His big premise was that
experimental research, typically done in the laboratory
like the research I do, is artificial.
My measures of memory come from having
somebody sit in front of a computer screen
and answer questions about what they remember.
Does that really generalize to the situations of real life,
where people have to remember everyday things,
in everyday life, and everyday settings? Ecological validity.
And he criticized most experimental research on that basis:
the artificiality of laboratory studies versus the real world out there.
So, how well can we generalize?
Let's look at outcomes.
There are many measures I could have selected
to measure the particular behavior I'm interested in.
Will the design of the experiment,
and the way in which I pick my measure,
really produce different results depending on which particular measures I select?
So, we make choices when we design an experiment.
We have to make selections about what the measures are,
and what the outcomes are.
And in fact, do those
generalize to other possible measures or designs that we could use?
So we have design and outcome problems with generalization.
So with outcomes, do we get the same findings if we use other measures?
And are we really measuring the relevant outcomes?
We pick particular outcomes, but are they the ones that are most relevant to the construct?
An example of that would be brain training: when you watch TV,
you see there are all kinds of things you can do,
programs you can buy,
even food supplements, that supposedly train your brain to do better.
The notion is that the training will generalize
to things like general intelligence.
But in fact, what we know is that if you train on a particular task,
the training will generalize to the particular kind of cognition that task measures.
But it doesn't generalize to cognition as a whole.
In fact, there have been arguments against brain training programs for that reason:
those are just not the right dependent variables.
The training doesn't transfer to,
doesn't predict, the right kinds of variables.
So, it's very much like predictive validity.
Then we can also have problems with the design of the experiment,
with what we're actually manipulating.
For example, we might say
we want to measure a positive attitude about X.
Is Atlanta, Georgia, a great place to live?
I've recently seen studies that have been done
about the best cities to live in in America.
Well, what do you measure when you're picking the city you want to live in?
Do you measure the schools?
The weather? The jobs? The housing?
The cost of living? Civic policies?
Clean air and water?
Commute and traffic? Public transportation?
Recreation opportunities?
Taxes? Crime? Sports teams?
This list could go on and on.
How do we possibly come up with a measure that would generalize
to all the possible things by which people might determine,
this is where I want to live?
It's very complex.
So, ecological validity is where we say,
"what we're measuring in this setting
might not generalize to other settings."
So, is the setting we're testing in
artificial compared with other settings
where this particular behavior might occur?
And do we get the same responses in different settings?
Like, if I'd used a paper-and-pencil test instead of a computer screen.
If I'd measured in a dorm room,
rather than a laboratory, in my studies with college students.
What about face-to-face,
rather than on the Internet?
There can be all kinds of situations and settings that I find myself in.
Memory in the laboratory versus memory in everyday life.
All these are possible ecological differences that
might affect what I get when I look at the phenomenon I want to study.
There are also concerns about whether the situation is public or private.
Is anonymity protected?
That might have big effects on what I actually measure in my experiment.
And like I said, a major criticism of
psychological laboratory studies is the artificial nature
of laboratories versus real life, as Dick Neisser argued.
Ecological validity is probably the biggest problem with external validity.
And we also have population validity: whether or not
a sample really represents the population.
How well can we generalize from the sample to the population?
As I mentioned, I tend to use college students
as my young groups,
and I tend to use college students in memory research where I'm not studying aging.
Are they the appropriate research participants when we're really looking at memory,
given that my population is not college students?
My population is everybody.
So, we have to understand that we have a sample.
And the sample that we pick has to be representative of the population as a whole.
And whether that occurs or not is the question of external validity. Thank you.