Surveys/Audience analysis - Summary1

Subject: Surveys/Audience analysis - Summary1
From: Damien Braniff <Damien_Braniff -at- PAC -dot- CO -dot- UK>
Date: Fri, 20 Mar 1998 14:27:07 +0000

Some time ago I asked several questions about manual content, audience
analysis, surveys and audience types. Others seem to be interested too, and
I've finally got round to doing a summary - thanks to all who contributed;
some very valid points were raised. Any other comments welcome! The summary
has been split in two as it was long. I've decided to go down the "mix 'n'
match" route, producing a questionnaire and following up with visits to
selected customers. I'm also going to try to get as broad a range of views
as possible by not sticking purely to our own customers but checking other
fields as well, and I'm adding a Tech Writing email address for comments.

Formative testing: Give representatives of the user population the
product and ask them to perform specific actions. Provide a fact or two to
get them started, then let them struggle. You find out what the user wants
to do, what doesn't make sense, and what you need to tell them. Sometimes
designers even change the product based on this testing.

Usability testing: Give user representatives the product, including
documentation, and ask them to perform specific actions. You can see how
the whole package works together and what is missing, unclear, etc.

Follow-up visits: After the product has been installed for a month or two,
visit the customer site (we do this with the assistance of dealers or VARs)
and talk with people. You see how the product is used and get to ask what
they like and don't like.

Beta: For an upcoming release, I will be going along with the new product
to a customer site. I'll get to observe every step of installation,
setup, and how customers learn to use the product. With the concerns you
have, you might get the best data this way.

As you can see, none of these methods involves surveys, but they all
require more time (and therefore money) to implement. I'd guess that you
would gather a lot of interesting data by implementing even one of them.


****************************************
Good documentation is not about the product but about the user.
Specifically it is about the use of the product to perform specific tasks.
This being so, you cannot write good documentation unless you know what
your users do. You need to survey them to discover things like:

What are their tasks?
What are their tools?
What is their level of knowledge and training?
What conditions do they work under? (At the top of a pole? Under six feet
of water?)
What regulations govern their work?
Do they make notes or job aids for themselves (and can you get a copy of
them)?
Which pages of the current manual are dog-eared? Which are pristine? Which
have comments written in the margin? What are the comments?


****************************************
When we did content surveys among our beta testers, they were by necessity
very product-specific. Here are some typical questions we asked, each
followed by a comment (in parentheses) on what the answer tells us about
how the user is using the documentation.

Were you able to copy a mapunit from an existing legend to another legend
and correlate its data mapunits to the new legend? (This question is
geared toward finding out how much they read and used the tutorial.)

Could you find information on all data elements and data types necessary to
develop an area type query? (This info is only in our help system, so it
tells us whether they were looking there or stopping with the book.)

Did you find it easier to build your selected set by performing multiple
queries or by using load related? Please explain. (Performing multiple
queries is the more basic method; it's also slower once you understand load
related. Load related is easy, but only if the user understands the
database structure. This gives us an indication of where users are and what
aids (tutorial, online help, manual) they've probably used.)

My advice is to develop questions specific to the product you're doing the
survey on. Limit each question to one type of documentation: help, release
notes, tutorial, or manual. Answers, or even the ability to answer, will
tell you what people have read. We're fortunate in that we have beta
testers who agree to answer our questions as a part of testing. By not
directing them to where the answer is, we're able to discern some things
about how they work.
****************************************
- Is the shrink-wrap on the current manual broken?
- Do the users have access to the documentation, or is it in a locked
cabinet in another building?
- Do the users know there is a manual?
****************************************

In short, people access information for a variety of reasons. The same
person may access the same document at different times for different
reasons. In the same vein, a person who is experienced with SkillA but who
is new to SkillB will access information differently than if that person
wanted more information about SkillA.

People who *know* the background information want to quickly find the
subject because they forgot something about it (perhaps syntax
structure).

People who want to *learn* information will read about a subject in a
linear fashion. (People like me who are new to a subject
will read a book cover to cover.)

People who want to *become familiar* with (not learn) information will
skim. I think that most people fall into this category
and subsequently fall into the other two camps. (There's more, but this
is the nutshell version.)


Without going on and on (I've touched on only a tiny part of the subject),
I'd like to recommend an excellent book that covers it in much greater
depth: "Designing and Writing Online Documentation: Hypermedia for
Self-Supporting Products" by William K. Horton.

Let me qualify "excellent" - it's excellent information, but Horton is an
engineer, so I found the writing quite dry and oftentimes very dull.
However, if you're the type who can discipline yourself to get past the
"stuff" and want to gain some valuable information, I'd recommend snagging
a copy of it today. I've been singing its praises for a couple of years.

****************************************
A great book on end-user task analysis is the new one by JoAnn Hackos and
Ginny Redish, _User and Task Analysis for Interface Design_ (Wiley, 1998).

****************************************



