0:00
Welcome back.
I'm here with Phil Kragnes, as we talk about Universal Design.
Focusing, in this lecture, on sensory impairments: designing for low-vision, no-vision, and hearing-impaired users.
So, when we talked about sensory impairments in our introduction,
we introduced the idea that some users are totally blind.
Some may be low vision, where they can see things when appropriately illuminated and
perhaps magnified, but have difficulty seeing, even with correction, at
a level that would allow them to read normal text and normal graphics.
We didn't speak much about it then, but we will also deal with the issue of color blindness:
some people are unable to distinguish certain colors even if they're otherwise able to read high-contrast text.
0:56
Being deaf, wherein you cannot hear at all, or
hard of hearing, where it's difficult to hear things that are at low volume or
where there's competing background noise.
For each of these, we're going to take a two-part path through it, where we look
first at what the challenges are and second at some of the ways of
addressing those challenges, and then we'll come back at the end
and talk about how these ways of addressing these design challenges
can have benefits far beyond the specific target population.
So, let's jump into blind and low vision users, and
maybe we should just start with the most obvious challenge,
because it's one that I'm always reminded about in this very course.
That if you provide something visual with no way to explain it,
so if I'm doing a lecture and I put up a cartoon and
say, well, geez, as this cartoon shows, things aren't always
what we expect, and have a nice laugh track saying ha ha ha,
well, the blind user has no idea what was up there on the screen.
2:30
Screen readers use text-to-speech engines, and they do exactly that.
They convert text on the screen of a device,
whether that be a laptop, desktop, or handheld,
into synthesized speech.
2:48
Well, there are many elements that can be used that are not text-based.
Images certainly are not, and when we think of a cartoon, we're really talking about an image.
Color, which we've touched on, is another non-textual element.
And so we must think about ways we can provide
access to the equivalent information.
That may be alternative text for images and cartoons.
Maybe certain other types of markup that explain that
something in a particular area of the screen with
a specific background color indicates something else.
Being able to convert that, or convey that, in multiple formats, again,
not only makes it accessible to the blind screen reader user
and the person who is color blind, which we'll talk about more in a moment,
but also to those users who may be in an environment where
reflected light on their computer screen washes out the color,
or, hard to imagine these days, anyone working with a monochrome display.
We've seen enough monitors go bad and not display true colors,
so to rely on color alone is to assume that everyone's equipment is in perfect working condition.
That's not always the case, and so if we cover all cases, visual users,
non-visual users, and visual users with faulty equipment,
then our users are going to be much happier.
>> Well, and it's actually something that comes up when you're talking about
just teaching.
4:50
Not everybody's looking at the time that you are talking and showing things.
And sometimes you're better off.
If I had the old New Yorker cartoon, I could say,
as this New Yorker cartoon shows, ha ha ha.
I could also say, well,
if you look at this New Yorker cartoon which has a dog sitting at a computer,
with a caption at the bottom saying, on the Internet nobody knows you're a dog.
Not only do the people who have visual impairments get what I'm referring to.
That wasn't a very difficult explanation, but
5:29
for the students who were, at the moment, focused on something else,
it might catch their attention, whether that leads them to pause and go back and
actually look at the visual, or at least they've gotten the message as we go.
>> Well, I think another important point: if it's just an image,
that image is unsearchable.
So if they want to find that cartoon, and you have a search feature in your app or
your website, and they can search for the word dog.
6:15
>> Great, so let's talk about charts and graphs, because that's a special case.
A lot of apps these days, particularly some of these personal fitness apps,
are showing these beautiful color displays of exactly how many steps I'm taking,
and my goal, and my weight change, and all sorts of other things that they're measuring
by following my smartwatch.
Those are a potential real trap for users with visual challenges.
6:51
>> Yeah, the more complex the visual information conveyed by an image becomes,
the more the non-visual user population is going to be left out.
But I think Joseph just hinted at the fact,
7:33
But we can also put a label and a percentage
at the end of each bar on that graph, lined up so that it's presented as text,
and the non-visual user can have access to that information.
You could use a complex image as a link or control to display
a text-based version of the information.
In some cases, we may have a pie chart, and we can say,
this slice represents this at this percent,
and section y represents this at y percent,
and section z represents this at z percent.
We can also provide a data table on a separate page that we link to
8:56
There's also, I mean, I have an app where you can enter
text information about blood pressure, diastolic, systolic,
pulse, temperature, etc., and it's all very text-based.
But when I rotate the phone into landscape view, it's presented as a visual graph.
Rotate it back to portrait view, and it's back to
9:24
straightforward text, which is the form it has to be in for entry, but
again, the user can decide which view works best for
them simply by rotating their portable device.
So lots of different options depending on the platform that you're working with.
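To make that bar-label-and-data-table idea concrete, here is a minimal TypeScript sketch. The element ID, chart file name, and step counts are invented for illustration; the point is simply that the same numbers behind the chart image are also exposed as a real HTML table that a screen reader can walk and a search feature can index.

```typescript
// Hypothetical sketch: expose chart data as text alongside the image.
// Element IDs, file names, and data values are invented for illustration.
interface StepDatum {
  day: string;
  steps: number;
  goalPercent: number;
}

const data: StepDatum[] = [
  { day: "Monday", steps: 8200, goalPercent: 82 },
  { day: "Tuesday", steps: 10400, goalPercent: 104 },
  { day: "Wednesday", steps: 6100, goalPercent: 61 },
];

function buildAccessibleChart(container: HTMLElement): void {
  // The visual chart: a short summary in the alt text, not every number.
  const img = document.createElement("img");
  img.src = "steps-chart.png";
  img.alt = "Bar chart of daily steps versus goal; data table follows.";
  container.appendChild(img);

  // The equivalent information as a real table a screen reader can navigate.
  const table = document.createElement("table");
  table.createCaption().textContent = "Daily steps and percent of goal";

  const headerRow = table.createTHead().insertRow();
  for (const label of ["Day", "Steps", "Percent of goal"]) {
    const th = document.createElement("th");
    th.scope = "col";
    th.textContent = label;
    headerRow.appendChild(th);
  }

  const body = table.createTBody();
  for (const d of data) {
    const row = body.insertRow();
    row.insertCell().textContent = d.day;
    row.insertCell().textContent = String(d.steps);
    row.insertCell().textContent = `${d.goalPercent}%`;
  }
  container.appendChild(table);
}

buildAccessibleChart(document.getElementById("fitness-summary")!);
```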
>> So these next two areas deal with the fact that there already is a lot
of platform technology to try to support users with no or low vision.
And just as a matter of how this has changed over time,
10:04
the first time I interacted with blind programmers was in
1981, and at that time, they were using Braille teletypes.
These were large keyboards with
a row of paper that would print out one line at a time.
And the rest of us weren't that far removed.
We were now working on 24-line smart terminals.
But a few years earlier, we would've been working on a print teletype.
10:52
Well, that's what all software has been built on for probably the last 30 years.
And so we need to understand some of those navigational challenges:
how do you interact with an application that involves menus and
buttons and fields and all of this stuff when you don't
have visual sensing as a means of interacting?
I was wondering if you could give us a little bit of an introduction as
to what is used today and
what you have to do to make sure that you don't get in the way of this technology.
11:29
>> Yeah, a lot of times people assume that
putting a label near a control, that proximity is all that's required.
Well, let me tell you, it doesn't work.
11:44
And so there are features or components that we need to
make sure that we use, things such as alternative text.
On the web, we would refer to it as an alt attribute.
So let's say I have an image that has text
of some kind,
Acme Computer Parts Incorporated, and it's their logo.
And so in the alternative text for that image,
I need to have Acme Computer Parts Incorporated,
and that is the text that the screen reader's
text-to-speech engine can translate for a non-visual user.
But now let's say we're presenting images that
let a user choose specific items so they're more icon-based images.
Well, we need to provide an alt tag, or
alternative text, and it needs to be descriptive
enough to convey the information that the visual user is getting.
So we don't want to have alternative text saying icon, we need to say
13:04
Acme computer part number 436,
or something like that.
So it needs to be descriptive, and it needs to be concise.
I don't want to sit there, as a non-visual user, and
listen to this long explanation of, this is an icon,
it is this many pixels high and this many pixels wide,
and contains the colors X, Y, and Z,
and uses this font on this color background.
And it's like, I just want to find the part I need to repair my computer.
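As a rough illustration of that guidance, the TypeScript sketch below contrasts unhelpful alt text with the concise, descriptive kind Phil describes. The file names, part number, and element ID are hypothetical.

```typescript
// Minimal sketch of descriptive vs. unhelpful alternative text.
// File names, part numbers, and IDs are hypothetical.

// Bad: the screen reader announces nothing useful.
const badIcon = document.createElement("img");
badIcon.src = "part-436.png";
badIcon.alt = "icon";

// Also bad: a long visual description the user has to sit through.
const verboseIcon = document.createElement("img");
verboseIcon.src = "part-436.png";
verboseIcon.alt =
  "A 64 by 64 pixel blue and gray icon with white sans-serif text on a dark background";

// Good: concise, descriptive, equivalent to what a sighted user gets.
const goodIcon = document.createElement("img");
goodIcon.src = "part-436.png";
goodIcon.alt = "Acme computer part number 436";

// For a logo image that contains text, the alt text is simply that text.
const logo = document.createElement("img");
logo.src = "acme-logo.png";
logo.alt = "Acme Computer Parts Incorporated";

document.getElementById("parts-list")?.append(goodIcon, logo);
```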
13:45
Well, putting that alternative text in not only benefits those
users of screen readers who need the text-based information,
but also speech recognition users.
That is, people with mobility impairments who may navigate their
device or their webpage by using their voice.
And if the icon that they need to click on does not have
alternative text, the speech recognition has no label to identify it.
So when they see this graphical image that says Acme part 437, and
they say, click on Acme part 437, but there's no alternative text,
the speech recognition application says, I don't see anything labeled that on the screen.
14:50
We also need to make sure that there's some programmatic association
between labels and the elements to which those labels correspond.
Oftentimes screen reader users will move down a page and
they'll hear, edit, edit, edit, checkbox not checked,
radio button not selected, radio button not selected,
radio button not selected, because no programmatic or
structural information has been provided to link the text with the control.
Well, just like alternative text can benefit speech recognition users,
15:37
making that structural link between a label and
a field can benefit users with mobility impairments.
I mean, look at some of these little check boxes and radio buttons and you're like,
wow, I'm supposed to touch that or hold a mouse pointer on it?
Good luck.
But the programmatic association means that the text label
has now become part of the target.
So instead of clicking this little tiny radio button, I can click the word yes,
which has been structurally associated with that radio button.
Similarly, a person with a learning disability
16:44
So, lots of benefits for lots of users.
And obviously, again, you don't necessarily have to have a disability.
I mean, touching the word yes versus trying to touch
a little radio button is a lot easier for everyone.
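Here is a small TypeScript sketch of what that programmatic association looks like using the standard HTML label/for relationship; the form ID, question text, and option values are made up. Because each label points at its radio button's id, a screen reader announces the text with the control, and clicking the word activates the button.

```typescript
// Hypothetical sketch: linking a visible label to a radio button so the text
// is announced by screen readers and enlarges the click target.
function addChoice(container: HTMLElement, name: string, value: string, text: string): void {
  const radio = document.createElement("input");
  radio.type = "radio";
  radio.name = name;
  radio.value = value;
  radio.id = `${name}-${value}`; // unique id the label can point at

  const label = document.createElement("label");
  label.htmlFor = radio.id; // the programmatic association
  label.textContent = text; // clicking "Yes" now activates the radio button

  container.append(radio, label);
}

const form = document.getElementById("signup-form")!;

// The fieldset/legend pair groups the radios and gives them a spoken question.
const fieldset = document.createElement("fieldset");
const legend = document.createElement("legend");
legend.textContent = "Would you like email updates?";
fieldset.appendChild(legend);
form.appendChild(fieldset);

addChoice(fieldset, "updates", "yes", "Yes");
addChoice(fieldset, "updates", "no", "No");
```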
17:02
>> And I think one of the key points here is with most platforms today, if
you use them the way they were designed, they were designed to handle this for you.
They were designed so that radio buttons don't sit by themselves, but
have associated content.
If you put the text next to it, the screen reader is going to know
that that text is associated with the radio button.
The problem happens when you decide to get cute and say, well,
I'll just have the radio buttons by themselves with no labels.
Because I'm going to have this interesting thing that's nearby on
the screen that will indicate what the labels should be.
And if those are not connected programmatically,
there's no way a screen reader can be smart enough to say,
I think the person meant this text to describe that button.
And that's where problems become really tricky.
So supporting that screen reader interface is something that, in general,
involves actually using some of the built-in tools the way they come.
>> And sometimes, I'll find designers and
developers will actually create the labels for
controls as background images or text.
Because you'll click one tab, and
the labels associated with these three radio
buttons are one set of three labels.
You click another tab, and the radio buttons remain in place.
They're in the same location, but the labels have now changed.
Well, screen readers can't access background information; they just don't do it.
18:50
Any type of attempted association isn't going to work, either.
But again, proximity does not
ensure that screen readers will read the correct label,
or will read any label at all for a field, so this is important.
Both Android and iOS developer kits have built-in
accessibility features; there's all types of information available.
The World Wide Web Consortium, in what's often referred to as WCAG,
the Web Content Accessibility Guidelines 2.0,
gives good guidance on how to make these structural associations,
add alternative text, and address a great number
of other accessibility considerations and features.
And the best part is that building accessibility in does
not generally impact visual design.
20:00
When I started in this field, one thing I told myself:
if I'm going to get developers to build in web accessibility, I can't tell them, well,
you need to lay out your page this way or that way.
Nope, we need to build in accessibility that's hidden from visual users
and is only accessible through specific adaptive technologies.
>> And this is part of why some of these alt texts work so
well, because they're not displayed.
If you're capable of seeing the visual, that's great.
But they can be processed behind the scenes, which is useful.
We should mention one more example here,
which is particularly for low vision users.
20:45
Screen magnifiers are wonderful in a number of ways.
And we're going to talk about screen magnifiers as specific software that zooms
in on everything.
But there's also the notion of having the ability, as you might if you
hit Ctrl+Plus, or its equivalent, in your web browser to just make things bigger.
21:37
>> Well, just like proximity doesn't work for
ensuring that a label will be read for a control for screen reader users,
proximity is important when we're talking about activating controls and
pop-ups, or other changes on the screen,
making sure that the change is in close proximity.
So if I have a control on the screen,
but I have my screen magnified such that only that
control, and maybe a little bit of text or
a few letters of the words surrounding that control, are visible,
but information pops up way over on the left, or up at the top, or
way below, or something,
that individual is not going to be aware of that popup;
it's just not visible on the screen.
Well, we can have the popup associated with a tone.
22:43
So the item pops up, and at the same time, a tone occurs.
And this will alert a screen
magnification user that something has changed on the screen.
It also alerts someone with attention deficit disorder
that something has changed on the screen.
We could choose, rather than a pop-up, to use something
like a modal page that appears over the top of everything.
You can't miss it, it blocks out everything.
Now there are a lot of accessibility considerations with modal pages,
especially for screen readers.
But they can be made very, very accessible.
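One hedged way to code the tone-plus-nearby-message idea is sketched below in TypeScript: the status text is placed in a live region immediately after the triggering control, so a magnified view that shows the control also shows the message, a screen reader announces it, and a short tone from the Web Audio API gives a redundant cue. The element ID and the particular tone are illustrative choices, not a prescribed design.

```typescript
// Sketch: announce a change right next to the control that caused it,
// with an optional tone as a redundant cue. IDs are hypothetical.
function playTone(durationMs = 150, frequencyHz = 880): void {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  osc.frequency.value = frequencyHz;
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + durationMs / 1000);
}

function showStatusNearControl(control: HTMLElement, message: string): void {
  // A live region adjacent to the control: visible inside the magnified
  // viewport and announced automatically by screen readers.
  let status = control.nextElementSibling as HTMLElement | null;
  if (!status || status.getAttribute("role") !== "status") {
    status = document.createElement("div");
    status.setAttribute("role", "status"); // implies polite live announcement
    control.insertAdjacentElement("afterend", status);
  }
  status.textContent = message;
  playTone(); // redundant audible cue; harmless if the volume is off
}

const saveButton = document.getElementById("save-button")!;
saveButton.addEventListener("click", () => {
  showStatusNearControl(saveButton, "Settings saved.");
});
```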
23:35
>> Well, I think the frustration that all of us have experienced, with or
without any of these particular limitations, that you go through some
large form on a webpage and hit submit, and it looks like nothing happened.
And it looks like nothing happened because, two scrolls up, there's a message at
the top that says error, not all fields are complete, or something.
And of course it was displayed nowhere near where you pressed it.
And there was nothing that said the press failed, it just silently failed.
If you can recognize that that's frustrating when you have the ability to
see as much as you and I have to see, imagine how much more frustrating that is
if you had to page up 16 or 20 times because of the size of the screen
that you could see at one time to try and search around for what's going on.
>> Or find where on the screen that area is. To add to that headache,
I often get error messages like that which say,
please correct the information highlighted in red.
[LAUGH] And I'm like, okay, well, I can give up on this form.
24:58
That's great, but put some other indicator that is text-based.
And while you're at it, why not make the error message a link
that'll take them to the problem?
So they don't have to click on that scroll button or hit Page Down or
whatever to get to it.
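A rough TypeScript sketch of that suggestion follows: a text-based error summary whose entries are links that move focus straight to the offending fields, so neither color nor scroll position is the only indicator. The field IDs and messages are invented for the example.

```typescript
// Sketch: a text-based error summary with links that jump to the problem fields.
// Field IDs and messages are hypothetical.
interface FieldError {
  fieldId: string;
  message: string;
}

function showErrorSummary(form: HTMLFormElement, errors: FieldError[]): void {
  const summary = document.createElement("div");
  summary.setAttribute("role", "alert"); // announced immediately by screen readers
  summary.innerHTML = `<h2>${errors.length} problem(s) found</h2>`;

  const list = document.createElement("ul");
  for (const err of errors) {
    const item = document.createElement("li");
    const link = document.createElement("a");
    link.href = `#${err.fieldId}`;
    link.textContent = err.message;
    // Move keyboard focus to the field instead of making the user hunt for it.
    link.addEventListener("click", (event) => {
      event.preventDefault();
      document.getElementById(err.fieldId)?.focus();
    });
    item.appendChild(link);
    list.appendChild(item);
  }
  summary.appendChild(list);

  // Insert the summary at the top of the form and move focus to it,
  // so "nothing happened" is never the outcome of a failed submit.
  form.prepend(summary);
  summary.tabIndex = -1;
  summary.focus();
}

showErrorSummary(document.querySelector("form")!, [
  { fieldId: "email", message: "Email address is missing an @ sign." },
  { fieldId: "zip", message: "ZIP code must be five digits." },
]);
```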
So it really comes down to customer service.
How easy do you want your app to be for the person using it?
I mean, let's face it, if it's not easy to use, they're not going to
recommend it to others, and they're not going to continue to use it themselves,
unless required for a job or some other reason.
So enhancing usability and customer service is beneficial,
yeah, for non-disabled and disabled users alike.
>> So let's talk perhaps somewhat more briefly about the same issues with
auditory feedback and auditory information.
So, if we could think about this in three categories, roughly.
The first one is audible alerts: if
the way that we find out that something failed is
the screen shows nothing, but it goes beep,
that's not going to work for somebody who's deaf, or, for that matter,
somebody who just has the volume turned off.
So what do we do there?
26:57
If that's overridden, then there's absolutely no way for
a deaf or hard of hearing user, or someone in a noisy environment,
to know that an error has just occurred.
But let's say we use a tone. Well,
we've really been talking about redundant indicators,
so not using color alone, and the key word is alone.
A lot of people would think, well, I can't use color, because of screen readers.
Well, no, you can't or shouldn't use it alone.
If you combine it with something text-based, then it's great for
screen readers; and you can combine an audio signal with something color-based.
So for a deaf user, there's an error and
part of the screen changes color,
turns something like red.
Red is always used as a warning color, that hey, something is wrong here.
The non-visual user won't be affected.
And those people who are distracted,
28:21
wondering what's going on here, can instead look back at the screen and go, well,
the button's grayed out, and the area
now has a red border, or something of that nature.
Something must be wrong.
>> Well, this is again the message that having something persistent helps
people who may have attention deficit issues, who maybe didn't notice at first and
then were distracted by something.
28:50
And if it was an audio signal, that's now in the past.
But if there's a persistent visual indicator on the screen,
then they can go forward.
So let's talk about audio content generally.
There are certainly a bunch of cases where there's lots
of interesting audio content, including, I hope, this very lecture.
What do we have to do to make that audio content available?
>> Well, there are two approaches
29:27
And there is a sequence or a process that
is being described, or that the audio is in sync with.
So, I take this container and I pour this amount into
this container and it should turn this color.
Well, if we provide a static transcript,
there's no way to know what that color was,
or that we filled the beaker to this line.
30:28
And that probably doesn't need to be captioned;
it's better if it is, but in that case, what was said
is the most important feature, and the transcript would suffice.
Providing both is really, really the best approach.
30:52
Transcripts are something that you can go back to and
search for keywords and phrases.
It doesn't matter whether you're deaf, or blind, or
you don't currently have an impairment; it makes the information
you're interested in much faster and easier to find.
31:19
But for the deaf and hard of hearing, having that caption,
the information going along with the visual content, matters.
I mean, obviously you don't want to watch a football game and
try to read along with a transcript.
Especially if it's a live game;
that'd be a good trick, creating a transcript of a live game.
But you don't want to watch a taped game
31:50
and then try to read along with the transcript,
looking at both the text in the transcript and the playing action.
You also don't want to go back later and say,
well, I watched this without any sound, I was really bored, but
I made it through, and now I'm going to see what the calls were.
And it's not synced, it's just not synced.
And again, we talked about learning disabilities and information acquisition.
Well, multimodal input: auditory and visual simultaneously.
So, we're going to benefit our deaf friends.
We're going to benefit users with learning disabilities.
And we're going to benefit everyone in an environment where audio is not an option.
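As a small, hedged example of providing both on the web, the TypeScript below attaches a WebVTT caption track to a video and links a plain-text transcript beside it; the file names and container ID are placeholders. The captions stay synced with the visuals, while the transcript stays searchable and skimmable.

```typescript
// Sketch: provide both synced captions and a searchable transcript.
// File names and IDs are placeholders.
function addCaptionedVideo(container: HTMLElement): void {
  const video = document.createElement("video");
  video.src = "lecture-sensory-impairments.mp4";
  video.controls = true;

  // Synced captions for deaf and hard-of-hearing viewers (and noisy rooms).
  const track = document.createElement("track");
  track.kind = "captions";
  track.src = "lecture-sensory-impairments.en.vtt"; // WebVTT caption file
  track.srclang = "en";
  track.label = "English captions";
  track.default = true;
  video.appendChild(track);

  // A plain-text transcript: searchable, skimmable, readable at your own pace.
  const transcriptLink = document.createElement("a");
  transcriptLink.href = "lecture-sensory-impairments-transcript.html";
  transcriptLink.textContent = "Read the transcript";

  container.append(video, transcriptLink);
}

addCaptionedVideo(document.getElementById("lecture-player")!);
```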
>> Yeah, I would add that the one case where I think transcripts almost always
turn out to be better is if it truly was audio-only,
32:55
that the speed of reading a transcript, even compared with listening to the audio, is an advantage.
I frequently will read podcasts rather than try to listen to them, because you can
read, skim, and skip ahead in ways that the audio channel just doesn't
support really well. But that's the rare case, that you have audio-only content.
33:24
People with low hearing need to be able to turn up the volume.
Yet it seems like this is something that
some interfaces just steal away from the user.
And maybe all we need to tell people is, stop doing that.
>> Yeah, [LAUGH] it's like that,
or they build volume control buttons
into the interface when the device itself
has perfect volume control, especially when we're talking about a handheld device.
A computer, yeah, may not be quite as friendly, depending on
your keyboard and whether you have the volume icon displayed in your taskbar.
But in general,
people are going to find a volume that works for them in situations x and y.
But they get to an application that builds in its own control.
>> And in instance z, it's half the volume of the other two.
So they turn it up, but when they go back to the other two, they're way too loud.
So it's best to, again, not override system defaults.
34:45
>> Right, so as we pull this together, I wanted to come back to those
universal design benefits.
When we think about interfaces that are usable by blind users,
they are almost inherently well designed, among other things, for use in the dark,
with physical design that can be as simple as
a hotel room, where the light switch is, and how you adjust things.
But more generally, if you need to be able to
use something in a theater, or in an environment
where you can't put a lot of light in, even a car, frankly,
designing things well for blind use is going to give you that as a benefit.
Second, if your eyes should be focused elsewhere, driving is the obvious example,
but in a lot of work environments, you're working on something, and
the interfaces that allow you to get help or reference materials or
other things, or adjust controls, while your hands are busy manipulating
whatever it is that you're working on, are frequently the same types of interfaces
that you would use with a blind or vision-impaired user.
But the surprising one is that screen reader support, and what we think of as
just making sure that your program follows a logical structure,
that things are hierarchical and grouped, and that a reader can go through and
say, okay, file, edit, next, go down there and read the text.
That structure that is there, and the labels that are there in clusters,
are also remarkably helpful for people using speech recognition as input,
because that provides the anchors that the speech recognition can go to and
say, I know what you mean when you say select yellow,
because yellow was attached to this checkbox or this radio button.
37:34
They often think, oh, this is going to take me to another search page or
take me to some other search feature, rather than performing the search.
So making sure that controls are labeled meaningfully and properly is important.
Those little things really make a difference.
Absolutely, and the same principles apply for interfaces that are well designed for
deaf users.
That makes them well designed for use in noisy environments;
if you're on the factory floor,
you're probably not hearing anything that comes out of a computer or a device.
38:11
But also in public spaces where you need to disable sound,
whether you're sitting in a theater, or
you're in a public computer lab where they don't put speakers on
the computers because they don't want all of that noise happening,
or you're in an environment where you have to silence your device.
Phones that vibrate, I find, are just a wonderful thing.
>> I wish my laptop had the same ability to vibrate.
[LAUGH] Actually, what I've done personally, because I have trouble feeling
the phone vibration, is I have a watch that vibrates when my phone would vibrate.
And I'm sure one day this will vibrate when my computer has
something for me to do.
And then I'll probably lose sensation in my wrist and move on to something else.
>> Well that's an interesting accessibility feature.
I mean you think, well I've got an alarm clock on my phone.
39:06
And yeah, I can make a nice loud alarm that'll wake me in the morning.
Well, I don't care how loud you make it,
if you're deaf, you're not going to hear it.
But if you put that phone under your pillow, and at your wake-up time
your pillow starts vibrating, you've got your bases covered.
>> Yep, absolutely. And then, obviously, the other side of this is that
today we see more and more cases where somebody's ears are occupied,
or somebody's got earbuds or a headset on listening to music,
and they still want to be able to do other things.
But the one that's the surprise is the amazing degree to which things like
captioning have been valuable to non-native English speakers.
The idea that if you don't hear a word properly,
you can see it; if you don't understand it by seeing it,
you can hear it; and you can connect the two as a way of learning language.
Watching captioned television has turned out to be remarkably successful, as
has watching captioned lectures.
So with that we're going to wrap up our lecture on universal design for
sensory impairments.
We will be back with the next in our sequence, see you then.