Subject: Re: Usability testing
From: Sandra Charker <scharker -at- connectives -dot- com>
To: techwr-l -at- lists -dot- raycomm -dot- com
Date: Tue, 04 Jul 2000 11:39:47 +1000

Tim Alton wrote:

Usability testing isn't designed to perfect a document, although that's the
unattainable goal we all share. Rather, it's to detect obvious problems in
layout, word choice, or organization.

Errr. Well, not only.

<Good stuff snipped>

For example, let's say that for a dozen
test subjects (more than enough, actually), you time how long it takes each
to find the text telling the subject how to perform a given task.

We recently tested a document set with 6 users, looking both at whether they could find the text and at whether they could complete the task. These are online books, some quite old, all much modified, and not all written in the same country, let alone the same department. They are also displayed in an in-house browser that was developed in another country for the books produced in that country, and to which our books were retro-fitted. Those are excuses; but the docs failed miserably.

Since the writers have been warning of this for years, we should have been hugging ourselves with glee as we watched our users fumble. BUT it was much worse than we realised; the books didn't only fail where we expected them to. Exactly like programmers, we did not fully take account of the effect of parts of the user experience outside our own product (i.e., the books for which we are responsible), and had to watch people not see information that was right on the screen in front of them because they'd come to it from a context we didn't expect.

One sad part of this is that there's very little we can do about it. Another sad part is that this dismal documentation story comes from a very good software shop that would smother its collective head in ashes if its software failed even half as badly.

And the final sad part is that I think it's true that a very high proportion of online documentation (I don't know if it's 90% or only 83.78621%) would fail just as badly if it was tested against users' success at completing realistic tasks.

Somebody on this thread said that many techwriters suffer from terrible hubris. Of course we do; just as software developers do. Just like software developers, we are groping our way into a new world with precious little previous experience to guide us, and we have to live with the probability that most of what we think we know is wrong, even while we have to fight like fury to put it into effect. If we didn't have a fair bit of arrogance we couldn't do anything. But let's not kid ourselves that we really know what we're doing, and please, let's grab every chance we get to test our speculations and dogmas.

Yours humbly (for the moment),

Sandra (clambering back onto the shoulders of giants) Charker

mailto:scharker -at- connectives -dot- com
http://www.connectives.com





