determine success/usability

Subject: determine success/usability
From: "Girardin, Diane" <dgirardi -at- CABLETRON -dot- COM>
Date: Wed, 23 Dec 1998 11:42:57 -0500

Hi Peg,

I asked our local usability guru, Alicia Flanders, and here are some of her
initial thoughts. I hope it helps!

Diane
dgirardi -at- ctron -dot- com
------------------------------------------------------------
Good question. What comes to my mind initially is that
the success of the manuals and the improvement realized
through usability testing are all linked to initial evaluations
of the manuals.

For example, prior to testing, one must identify the areas of the
documentation that need to be checked out. One way to identify areas
of concern would be to send out a questionnaire asking customers
specific questions about the documentation. The areas of concern
would then determine the kinds of testing and the ways of measuring
improvement and/or affirming there was not a problem in the first place.

If one had concerns about task orientation, one might try to learn
more about user tasks (through Contextual Inquiry) and then evaluate
the books in light of this information.

If one had concerns about accessibility, one might design some read-and-locate
tests. These lend themselves most readily to metrics because you can
set some quantifiable objectives. That is, you can say, out of
5 tests, 80% of the users will be able to find 80% of the answers within
3 minutes, or something like that. Then, if the metrics are not met after
testing, changes could be made to improve accessibility, using clues from
customer expectations and areas of confusion; one would then perform the
same set of tests and iterate until there is improvement.
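
To make that concrete, here is a minimal sketch in Python of how one might
score such an objective; the participant results and thresholds are invented
for illustration, not real test data:

# Hypothetical read-and-locate results: for each participant, how many
# of the 5 questions they answered within the 3-minute limit.
answers_found = [5, 4, 3, 5, 2, 4, 5, 4]

QUESTIONS = 5
ANSWER_TARGET = 0.80  # each passing user must find 80% of the answers
USER_TARGET = 0.80    # 80% of the users must pass

passed = [found / QUESTIONS >= ANSWER_TARGET for found in answers_found]
pass_rate = sum(passed) / len(passed)

print(f"{pass_rate:.0%} of users met the objective (target {USER_TARGET:.0%})")
if pass_rate < USER_TARGET:
    print("Objective not met -- revise the documentation and retest")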

If one is not sure that customers can understand parts of the documentation,
summary tests can be designed to gauge customers' ability to identify
the major messages of critical pieces of documentation. These can also
be quantified, i.e., how many people can identify how many main points,
relative to how many there are in the portion under study.
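
Again purely as an invented illustration, one could tally such a summary
test like this (the point counts are hypothetical):

# Hypothetical summary-test results: main points each reader identified,
# out of the 4 main points in the portion under study.
MAIN_POINTS = 4
identified = [4, 3, 2, 4, 3]

per_reader = [n / MAIN_POINTS for n in identified]
average = sum(per_reader) / len(per_reader)
print(f"Readers identified {average:.0%} of the main points on average")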

Of course, one could also observe users working through tasks and count the
number of times they are confused and/or make a mistake. Then revise the
instructions and repeat the tests until the number of mistakes is
sufficiently reduced.
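
A sketch of that revise-and-retest loop, again with made-up error counts
standing in for real observations:

# Hypothetical mistake counts per testing round; in practice each round
# means revising the instructions and observing users again.
rounds = [("original", 12), ("revision 1", 7), ("revision 2", 3)]

ACCEPTABLE_MISTAKES = 4  # assumed threshold for "sufficiently reduced"

for name, mistakes in rounds:
    verdict = "good enough" if mistakes <= ACCEPTABLE_MISTAKES else "revise again"
    print(f"{name}: {mistakes} mistakes -> {verdict}")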

Resending the original questionnaire along with the revised documentation
could validate that the original areas of concern have been addressed.

Kind of long-winded, but that is one approach.

Alicia
