0:29
Let's take this question as one example.
"Which one of these qualities is the most desirable for a child to have?"
And here comes a long list of qualities, some
of which you might have thought about before, others not.
This is all formulated in the male version, but of course
the question is meant to relate to both males and females.
Good manners, tries hard to succeed, is honest, neat and clean,
good sense and sound judgment, self-control,
all those characteristics of a child.
And what studies have shown is that with a long list like this,
people who self-administer these questionnaires, so who look at this
list themselves, tend to answer "yes" more often to items in the upper segment of the list.
Then there were experiments where the order of the list was changed, so half
of the respondents would get the list in the order printed here
and the other half would get it in
the opposite order, version B, with all items flipped.
And in a study by Jon Krosnick and Duane Alwin in 1987, they observed
this primacy effect, more items being picked at
the top, in both the forward and the backward order.
So, they concluded it's not due to the items.
They had these 13 choices about child
characteristics that I just showed you, and the
task for the respondent was to choose the three
most important. And among those, they
found that this primacy effect was actually
only present for respondents with low sophistication, as
they called it, so a lower level of education
and lower scores on vocabulary tests
went along with a more pronounced effect of this kind.
So, they argued that this group most likely answered in a satisficing way,
which means they responded as soon as an acceptable option was encountered.
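To make the logic of that forward/backward design concrete, here is a minimal sketch, in Python with hypothetical data and field names (not the original study's analysis), of how one might tabulate such a split-ballot experiment: if the top screen positions are endorsed more often under both orders, the effect follows position rather than item content.

```python
from collections import Counter

# Hypothetical data: each respondent saw the 13 child qualities either in the
# forward order (version "A") or fully reversed (version "B") and picked the
# three considered most important, stored as item indices 0..12.
N_ITEMS = 13
respondents = [
    {"version": "A", "picks": [0, 1, 4]},
    {"version": "B", "picks": [12, 11, 7]},
    # ... more respondents ...
]

def endorsement_rate_by_position(respondents, version):
    """Share of respondents in this version endorsing each screen position (0 = top)."""
    counts, n = Counter(), 0
    for r in respondents:
        if r["version"] != version:
            continue
        n += 1
        for item in r["picks"]:
            # In version "B" the list is reversed, so item i sits at position 12 - i.
            pos = item if version == "A" else N_ITEMS - 1 - item
            counts[pos] += 1
    return {pos: counts[pos] / n for pos in range(N_ITEMS)} if n else {}

# A primacy effect shows up as higher rates for the top positions in BOTH versions.
print(endorsement_rate_by_position(respondents, "A"))
print(endorsement_rate_by_position(respondents, "B"))
```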
2:22
They also argued that response order effects
of this kind are a function of mode,
because under a visual presentation, respondents' minds might
become cluttered after the first few options,
and so the first few are likely to be endorsed, showing this primacy effect.
But if you hear it in a telephone survey and it's an auditory presentation,
then the earlier options are overwritten by later ones and so
the last few are more likely to be endorsed, which would
lead to a recency effect. And actually that has more to
do with how the options are presented than with the actual survey mode.
Don't get confused here:
the respondent could, of course, request a visual presentation in addition,
from the interviewer or the computer.
So it's not mode in the sense of the
general survey mode, but whether the presentation is visual or auditory.
3:32
And in this meta-analysis, 19.2% of the experiments showed a significant recency effect
and only 1.8% showed a significant primacy effect.
Mind you, these are telephone polls.
So, they had an average shift of 2.2%,
computed in this way: the percentage of respondents picking an
option when it is presented second minus the percentage picking it when it is presented first.
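As a worked example of that shift metric (with made-up numbers, not figures from the meta-analysis): if 25.0% of respondents pick an option when it is read second but only 22.8% pick it when it is read first, the shift is 2.2 percentage points. A minimal sketch in Python:

```python
def response_order_shift(pct_when_second, pct_when_first):
    """Shift in endorsement, in percentage points, attributable to response order.

    Positive values mean the option gains when presented later (a recency-type shift),
    negative values mean it gains when presented earlier (a primacy-type shift).
    """
    return pct_when_second - pct_when_first

# Illustrative numbers only:
print(round(response_order_shift(25.0, 22.8), 1))  # -> 2.2
```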
They saw question difficulty as the largest predictor of this effect,
measured by the average length of words, the number of
words per sentence, and the number of sentences.
The response option length also had a significant effect,
and so did the position within the questionnaire.
Now, just as a quick quiz question here in
between: what is primacy and what is recency?
4:28
Now, continuing with that meta-analysis,
question type is another thing that they looked at
more closely, and they grouped or categorized questions
into three types: seemingly open-ended questions,
delayed processing questions, and seemingly yes/no questions.
The seemingly open-ended question would be something like
this: "How do you feel about President Bill Clinton?
Is he trustworthy or dangerous?"
4:52
And, you know, you pause after "Bill Clinton"; at that point there is
no answer category yet, and so "Is he
trustworthy or dangerous?" is in fact a closed-ended answer category, but
the question sounds more like an open-ended one.
The delayed processing question would be like this:
"Which of the following describes your view about President Clinton?"
So you know something is coming up and now you get it, "Trustworthy or dangerous?"
And the seemingly yes/no question would be, "Do you
think that President Bill Clinton is trustworthy or dangerous?"
The "do you think" sort of implies that there will be a
yes/no answer, but in this case the yes/no is converted into "trustworthy or dangerous".
And they found an effect of question type here.
The seemingly open-ended questions were least
prone to the recency effect and the delayed processing questions
were most prone to recency effects. So that's something
to keep in mind when you formulate your questions.
In general, education had the largest effect, with
stronger order effects for low-education groups compared to higher-education groups.
Now, let's actually look at how respondents answer these questions.
I brought with me a few video clips to show you.
And the first one is a respondent who considers all answer options and
chooses the best answer, so this is like the perfect respondent that we see here.
This is an eye-tracking study done at the University of Maryland
by Mirta Galesic and Roger Tourangeau
and colleagues, which has meanwhile been published, but we were,
you know, allowed to use those videos. And you can see here, while
I'm talking, that the respondent nicely
reads the question, thinks about the answer.
Now he rereads the question, goes back up,
goes back down, back and forth reading the question,
still thinking about the answer options, and then finally
decides on one and, "Whoo, off to the next screen."
Now, not everybody has that same response pattern.
So here is another example for a response pattern.
You have a respondent,
she reads the question
slowly, carefully uses the mouse as a pointer,
and then clicks on the first option that seems good,
but then keeps reading and reconsidering.
It's like, you know, when you
listen to phone menu options and you hover your finger over
a particular number and then you might pick another one.
So here, the first option was changed to
the third and then the respondent still reads all
the way down until she's done and realizes there's
no better option than the one she already picked.
7:27
And then, finally, we see this satisficing-type behavior with this last
respondent that we filmed here. And again, the question is read nicely.
You see the first two, three, four, five answer categories are read again.
And the bubble gets bigger, which means
there's time being spent reading at this particular place.
And so far, the last four options have not been read.
And now, here one is picked and off we go to the next screen.
So, you do see that this is a situation where the respondent did
not read all the options and just picked something in the upper half.
8:06
So in summary, response order effects are common.
They can be large.
And Krosnick and Miller actually found them on election ballots,
which is of concern if you don't randomize the order.
The direction depends on the order of processing: the
mode of presentation, auditory versus visual, which we talked about,
the pace with which the items are read,
and the type of item, whether it's a scale or an ordered list of things.
And its magnitude depends, on the
respondent's side, on processing capacity, age,
education, interest in the topic, familiarity
with the topic, item difficulty, fatigue,
the general motivation of the respondent, and some
psychological concepts, like need for cognition and need for closure;
people who show these personality traits would
be more likely to go through the entire list.
It also depends on the fuzziness of the respondent's thinking, still being occupied with prior
items and prior thoughts that came up earlier in the questionnaire.
9:31
For example, if in a factual question you have
a list of, let's say, study subjects,
you know, and they're alphabetical, then "Computer
science", you know, you'll find that in third place,
but if you studied, you know, "War history", then
you will look at the very bottom of the list.
But, you know, some lists are ordered historically and,
you know, it doesn't necessarily need to be alphabetical, so that can help.
But you can always at least run this list forward and backward
and randomize your respondents to one of the two versions,
if you can't randomize the entire order of the options.
That is one way you can mitigate this effect when you design your questionnaire.
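If you want to implement that mitigation in a web survey, a minimal sketch could look like this (Python; the function and the abbreviated item list are my own illustration, not taken from any particular survey package):

```python
import random

# Abbreviated list of the child qualities from the example question.
CHILD_QUALITIES = [
    "Good manners",
    "Tries hard to succeed",
    "Is honest",
    "Neat and clean",
    "Good sense and sound judgment",
    "Self-control",
    # ... remaining items ...
]

def options_for_respondent(items, full_random=False, rng=random):
    """Return (version, option order) for one respondent.

    By default the respondent is randomly assigned to the forward ("A") or
    the fully reversed ("B") version; with full_random=True the whole list
    is shuffled instead (only sensible when the items have no natural order).
    Store the version with the answers so order effects can be checked later.
    """
    if full_random:
        shuffled = items[:]
        rng.shuffle(shuffled)
        return "random", shuffled
    version = rng.choice(["A", "B"])
    return version, (items[:] if version == "A" else items[::-1])

version, options = options_for_respondent(CHILD_QUALITIES)
print(version, options)
```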