TechWhirl (TECHWR-L) is a resource for technical writing and technical communications professionals of all experience levels and in all industries to share their experiences and acquire information.
For two decades, technical communicators have turned to TechWhirl to ask and answer questions about the always-changing world of technical communications, such as tools, skills, career paths, methodologies, and emerging industries. The TechWhirl Archives and magazine, created for, by and about technical writers, offer a wealth of knowledge to everyone with an interest in any aspect of technical communications.
Observing the "three strikes and you're out" rule, this is my final
posting on this particular thread; I leave the final word to others.
(Flames will be hand-delivered off-list to those who abuse the
privilege. <g>)
I noted that "All that a usability expert does in _conducting_ a
usability test is collect a list of what you call 'gripes'--both those
reported by the test subject and those observed by the expert."
TechComm Dood replied:
<<Geoff, this simply is incorrect. This is but a small subset of what a
usability expert *can* do, and when they do this type of thing, it's to
gain initial data from which to begin an investigation.>>
No, it's correct but incomplete. As you note, usability testing is
iterative, which is the "gain initial data" part. But that doesn't
negate my point in the slightest. The goal of the test or analysis is
to detect problems. Once detected, you then need a certain amount of
expertise to pick an appropriate solution--which may involve changing
user attitudes rather than changing the product in some cases.
I also observed that "The definition of usability is ... '_I_ can use
it effectively'." TechComm Dood replied: <<No, you're wrong here Geoff.
That may be YOUR definition, but that certainly isn't the one that
sound usability decisions stem from.>>
First, it's the only useful definition of usability. Second, my
definition in no way prevents sound usability decisions. To wit:
<<Qualified or unqualified, every person indeed has a preference.
That's all well and good, but it doesn't mean they're correct in their
preference or that it even needs to be addressed.>>
Of course not, which is why I noted that your opinion has to be
***broadly representative*** (that's the third time I've said that,
which suggests that some people really aren't paying attention).
Furthermore:
<<Who knows that they are broadly representative? How? That's a neat
trick!>>
It's called random sampling of a population--or stratified random
sampling, if you're being a bit more sophisticated because you already
know something about that population--and if you didn't already know
that, I find myself wondering why you feel qualified to participate in
this discussion. Or were you just posing a rhetorical question to avoid
making a substantive statement?
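For anyone unfamiliar with the two techniques, here's a minimal sketch
of the difference (the user population, the "experience level" stratum,
and the sample sizes are all hypothetical, invented purely for
illustration):

```python
import random

def simple_random_sample(population, k, seed=None):
    """Draw k members uniformly at random from the whole population."""
    rng = random.Random(seed)
    return rng.sample(population, k)

def stratified_random_sample(population, stratum_of, k_per_stratum, seed=None):
    """Group the population into strata, then draw k members at random
    from each stratum, so every subgroup is guaranteed representation."""
    rng = random.Random(seed)
    strata = {}
    for member in population:
        strata.setdefault(stratum_of(member), []).append(member)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(k_per_stratum, len(members))))
    return sample

# Hypothetical population: 30 users tagged with an experience level.
users = [("user%d" % i, "expert" if i % 3 == 0 else "novice")
         for i in range(30)]

# Simple random sampling may over- or under-represent experts;
# stratified sampling guarantees 3 novices and 3 experts.
pilot = stratified_random_sample(
    users, stratum_of=lambda u: u[1], k_per_stratum=3, seed=42)
```

Simple random sampling is fine when you know nothing about the
population; stratification is worth the extra step when you already
know which subgroups (novice vs. expert, say) must each be heard from.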
<<But as a technical writer, the product isn't intended to be used by
you, so if it doesn't work well, then is it still not usable>>
As the technical writer, you are taking on the role of the intended
user by trying to accomplish the tasks that the intended user will
accomplish; in so doing, you attempt to determine how the product is
supposed to work and choose a documentation strategy intended to
support the user's use of the product. You're given a set of goals to
accomplish using the software, and must document how you did so.
Techwriting 101. Which part of this is difficult to understand?
<<[Usability analysis is] an immature *field* but a very mature
*exercise*.>>
You lost me here: if the field is immature, who cares if the exercise
is mature? Run through a well-defined and mature testing exercise as
often as you want, but if it doesn't reflect the real experience of the
person who will use the product, it's irrelevant.
<<And, the definitive judges are not those who use the product, but
those who see benefit from the use of a usable product. This, in many
cases, is seldom the user.>>
You don't honestly believe this, do you? The only possible valid reason
to perform a usability test or analysis is to determine whether "the
intended user" is able to use the product successfully. By
successfully, I mean that they must be able to achieve the intended
results. If those results aren't useful to "those who see benefit from
the use of a product", then the product is by definition not usable--or
diverges so broadly from user expectations that extensive user
re-education is a precondition for using the product.
<<I don't think that there are a lot of tech writers who can do this
[report a problem] effectively, though. We see evidence of this
behavior on this very list daily.>>
I'll agree with you that a good many people could stand to improve
their diplomatic skills. But I disagree strongly that a few injudicious
comments "among friends" on techwr-l, from the ca. 2% of the list
membership who regularly contribute, constitute a representative
statistical sample.
In any event, your observation doesn't take away from my argument in
the least. The exercise (reporting problems) should still be
undertaken--even if it takes time to learn to do it well. Documentation
should be a cooperative endeavor with the developers, not a battle of
wills.
--Geoff Hart ghart -at- videotron -dot- ca
(try geoffhart -at- mac -dot- com if you don't get a reply)