Hello. As you know, a design review session
is a place where a group of people gives feedback on designs.
This session does not include the kind of walk-through covered in the previous lectures.
During reviews, people work at
their own pace, applying their own knowledge to come up with
critiques according to the goals of the review, and
then the critiques from all members are discussed.
Because there is no predefined procedure to follow,
design reviews can be applied for many purposes.
In this lecture, we will focus on the use of
design reviews for formative feasibility evaluation.
Taking into account the fact that sessions usually last no more than an hour and a half,
you should be very careful when choosing evaluation goals for each session.
You just can't use design reviews to evaluate the whole app or several scenarios at once.
Therefore, try to focus your effort on
specific parts of the user interface. All right, on to the method structure.
During the preparation phase you need to decide who to invite,
think through an evaluation scenario and prepare all needed materials including,
not only the user interface itself,
but also a description of the usage context.
It makes sense to invite members of the core team working on the product,
professionals outside the core team,
for example designers from other departments, and users.
Invite five or six people, tops.
Larger groups are usually hard to moderate.
Plus, your discussion may not fit into the allotted time.
The good news is that you don't need to
have an interactive prototype to conduct the review.
Mock-ups or wireframes will be just fine.
Please make sure that they are organized conveniently,
for example, in the form of a UI flow.
It looks like this.
Screens should be laid out in accordance with the scenario you are going to evaluate.
Of course, if you have an interactive prototype you may use it instead.
Brian Pullen, from TWG,
recommends including any steps users will take
before reaching the app in the UI flow or the prototype.
If the scenario starts with getting
an email, then the prototype starts in the mail client.
In addition, I recommend you prepare several alternative solutions.
These can be major or minor variations on the design,
anything you feel is worth discussing.
It has a positive impact on the results of the review as a whole.
We'll talk about this phenomenon later in the first week.
Note that for purposes other than usability evaluation,
you are free to use any presentation you want: context scenarios,
storyboards, a real app mirrored on a big screen, and so on.
Comments gathered by design reviews are a mixture of predicted interaction problems,
incomplete ideas and recommendations.
That is why the redesign phase starts at the same time as the review does.
Feedback is almost always used to refine the design right away.
I'm talking about design reviews aimed at evaluating usability here.
Therefore, the analysis is somewhat informal and short.
There is no usability evaluation report at the end.
To build a better understanding of some problems
discovered during the review and the analysis phase,
you may want to use the usability data analysis process discussed in the next lecture.
The advice that works for design walkthroughs works here too.
Plan to spend time on the redesign in advance.
I don't recommend diving deep into discussing recommendations during the review itself.
Most of this job should be done after the review.
The review itself consists of four steps.
During the introduction you should explicitly mention the review goal,
the history of the user interface and explain other relevant details.
You need to do this
because people who participate in the review
are not always involved in the project you work on.
For instance, colleague designers from other departments that I mentioned earlier.
You also need to allow time for discussing the product idea,
and it helps to surface any existing constraints.
For example, "we can't implement this feature right now because of the app's context of use,"
etc. That is the responsibility of
the moderator, who is usually the designer who created the interface.
Tell all participants about the timing and
the structure of the session, and briefly instruct them on giving quality criticism.
Unlike a group design walkthrough,
where the whole team moves through an interface together step by step,
in a review session aimed at evaluating usability it is better to give
all participants some time to review the user interface on their own.
The moderator should be present the whole time during these individual reviews, because
participants almost always have
additional questions regarding the design or its context of use.
If a user is present at the review session,
don't forget to provide them with a laptop and a notepad to take notes.
Throughout the individual reviews,
participants should keep their notes to themselves.
During the group discussion, the moderator asks
every participant to give one comment in turn.
Usually, more than one person has a similar comment,
so other participants can freely add their thoughts.
The discussion doesn't have to follow the UI flow.
On the contrary, it's much more useful to ask participants to start
with the interaction problems they consider most severe.
The moderator's job is to monitor the discussion and ensure that
it does not dwell too long on one problem or design change.
If the discussion gets stuck, just write down everybody's opinions and move on.
By the way, having the moderator take notes
that all participants can see, for example on a whiteboard or via some tool for
visual feedback like Red Pen or TrackDuck,
is a more economical solution than having a dedicated note-taker role.
The last step is wrapping up.
This is the time to discuss controversial comments and possibly to verify feedback.
If not everyone's comments were discussed during
this step, you may ask participants to write their comments down in one place,
using the same tool for visual feedback or features
embedded in prototyping tools like InVision, Axure, etc.
That way you'll be able to return to these comments during the analysis.
In conclusion, design reviews have no predefined procedure to follow.
They are more like group expert reviews, where each participant applies
her own knowledge to come up with feedback according to the overall goal.
They are less time consuming than design walkthroughs and, as a result, more focused.
The structure of the review session discussed in this lecture,
where each participant takes time to review the interface on her own first,
works well for usability evaluation purposes.
Thank you for watching.