Tim Altom suggests that
(a) only TWs can come up with the idea to do usability testing on their
documents, but
(b) TWs are too arrogant and set in their ways to either do so or accept
results that don't fit their preconceptions.
In a later post, Tim points the way to effective usability testing by defining
the object of such testing as, "to detect obvious problems in layout, word
choice, or organization."
Tom Johnson says that his documentation is all in maintenance mode, so they
don't do testing for minor changes. Tom does report a documentation problem,
but it turns out to have been a service rep. who arrogantly insisted that his
way was better than the documented way. So we conclude that the documents were
okay after all.
Dan Emory explains how he pulled his "90% of all on-line help produced since
1995 would fail a rigorous usability test" number out of thin air, while
maintaining that it is probably an underestimate. Certainly Dan has not had good
experiences with on-line help, by his own admission. Not surprisingly, it turns
out to be Microsoft's fault.
Sandra Charker tells the sad story of usability testing that pointed up even
more flaws in a documentation set than were expected by the writers. Then she
goes on to inform us that the testing she did was a wasted exercise: "One sad
part of this is that there's very little we can do about it. Another sad part
is that this dismal documentation story comes from a very good software shop
that would smother its collective head in ashes if its software failed even
half as badly." (I might argue that if the software shop were so good, its
documentation would be better...or that testing without implementing a
correction plan is almost criminal negligence. Why bother with testing if
you're not going to respond to the results? You make A. Plato seem positively
brilliant.)
Forgive me for seeming so cynical, but if we are writing to meet the needs of
an audience, should we not be able to (a) define those needs, and (b) test our
documentation to determine how well we are meeting those needs with an eye
toward doing even better as we move forward?
Apparently, where usability testing IS occurring, it is a mere bureaucratic
exercise. You all have made me feel at least honest in not pushing it. It
sounds like a waste of time.
Does anyone have any success stories for usability testing?