Subject: Re: Bonus Plans
From: "Steven J. Owens" <puff -at- NETCOM -dot- COM>
Date: Fri, 18 Dec 1998 18:46:43 -0800

Morrison, Aaron writes:

> Does anyone know of a good way (or any way at all) to bonus tech writers?
> Upper management wants to bonus, and we certainly don't want to stop them.
> But we're not sure of the best way.
>
> We don't want the writers to sacrifice quality just to exceed a deadline and
> receive a bonus. So we're seeking other ideas.

I'd say the ideal way to award bonuses would be to base them on
quality, and to make the decision about quality with a method that
also helps the writers and the writing department make progress in
developing better manuals. That might mean coming up with a good way
to solicit customer feedback, to establish when manuals are more or
less helpful to customers, or hiring a usability firm to do some sort
of usability testing with sample customers and basing bonuses in part
on the results.

One slippery slope to watch out for is basing bonuses on
*improvements*, since the amount of improvement you can put into a
document is not infinite. The work needed to improve a doc climbs
steeply, at that: it's easy to bring a wretched document up to
"mediocre", hard to bring a mediocre document up to "good", and very
hard to bring a good document up to "excellent".

Another thing to watch out for is the nature of the book and
topic. Some are easier than others... and of course, some writers are
better at some things than at others, too.

Do you penalize a writer because managerial decisions (deadline
pressures, lack of writers, etc.) required putting him or her
on a book that does *not* match his or her strong suit?

Does the writer who is excellent at tackling really intricate,
technical topics (harder topic, but doesn't do as good a
writing job) deserve more or less of a bonus than the writer
who excels at reaching the user, but tackles more elementary
topics (easier topic, but does a better writing job)?


One thought that comes from some usability projects I was on is
to break out the challenges and "measurable" elements of writing
projects and figure out how to factor them together to arrive at some
pseudo-quantitative analysis. I am *not* trying to say you can reduce
it all to numbers, but you can impose a structure that makes you more
methodical.

I'll give an example of how we applied this concept in usability
engineering below; maybe folks here can work out how to apply it to
technical writing projects. The key is to identify, discuss, and
measure (by gut feel most of the time, preferably averaging the
estimates of several qualified judges, i.e. the writers), and then
factor together the different dimensions of the project. Applied to
writing projects, dimensions might include:

existing state of the project
type of document (user guide, reference, tutorial,
troubleshooting guide, etc)
overall scale of the project (not the document size! :-)
complexity of the subject material
constraints of the schedule (typical project or crash deadline?)
internal project resources (does the team have time for the writer?)
external project resources (does the writer have what he/she needs?)
external factors

"External factors" is my polite way of referring to uncooperative
engineers or management. You might find that you need to "bury" this
in some other rating for political purposes :-). You might also call
this "extenuating circumstances" :-). On second thought, it'd
probably be best to simply include these in assigning the specific
dimension ratings.

I just listed these off the top of my head. The level of
granularity might be way off; perhaps some need to be broken down into
separate dimensions, perhaps others combined. The weighting of the
importance of each dimension depends greatly on the nature of your
work and the setting.
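
If it helps to see the mechanics, here's a rough sketch in Python of
the "several judges rate each dimension, then average" idea. The
dimension names echo the list above; the weights and the judges'
numbers are pure invention, just to show the shape of it.

    # Several judges each rate a project, 1-5, on the same dimensions;
    # the ratings are averaged, then combined into one weighted score.
    DIMENSIONS = [
        "existing state", "document type", "project scale",
        "subject complexity", "schedule constraints",
        "internal resources", "external resources", "external factors",
    ]

    def average_ratings(judges):
        """judges: one dict per judge, mapping each dimension to a 1-5 rating."""
        return {d: sum(j[d] for j in judges) / len(judges) for d in DIMENSIONS}

    def project_difficulty(averaged, weights):
        """Weighted average of the per-dimension ratings, still on a 1-5 scale."""
        return (sum(averaged[d] * weights[d] for d in DIMENSIONS)
                / sum(weights.values()))

    # Two made-up judges rating the same (hypothetical) project:
    judge_a = dict(zip(DIMENSIONS, [4, 3, 4, 5, 4, 2, 3, 4]))
    judge_b = dict(zip(DIMENSIONS, [3, 3, 5, 4, 5, 2, 2, 3]))
    weights = {d: 1 for d in DIMENSIONS}   # equal weighting; adjust to taste
    print(project_difficulty(average_ratings([judge_a, judge_b]), weights))  # 3.5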

You could also have a set of dimensions for measuring the quality
of the book; it'd be nice to have some of the actual numbers come from
external testing with typical users, but anything is better than simply
"it was a good book".

It's probably a good idea to do something like this after each
project in general, as part of a "project post-mortem" (generally
recommended but seldom done in software engineering practice), to see
what you can learn from each project. Ideally, you'd apply the
methodology to the existing document, try to estimate the dimensions
of the project-to-be, and then afterwards look at how the document
improved and how far off the project difficulty estimates were. Of
course, nobody has the time for the full-blown approach, but
something is better than nothing...

An example of the methodology in practice:

The goal in the usability engineering project where we employed
this methodology was to analyze the existing "installation" process
for a full-scale industrial database suite, a process generally
considered to be unusable by both the customers and management.
Specifically, the goal was to:

1) identify the usability flaws
2) analyze them
3) prioritize them (this was the hard one!)
4) set some specific, achievable, measurable goals for usability

Identifying and analyzing them, while tedious and by no means
simple or easy, was at least a straightforward process. The entire
writing team spent a series of one- and two-hour meetings going over
the installation process, breaking down the steps, and identifying
specific steps or aspects of steps as problems. The flaws were
written down on index cards for easy tracking.

Prioritizing them properly was critical, both so that they could
be addressed strategically (what's most important to fix first?) and
so that personal opinions and departmental politics could be
neutralized by doing it in a methodical, repeatable, measurable way.

We identified several different aspects, or dimensions, of a
usability problem in general, and assigned each flaw a severity rating
of 1 to 5 for each dimension. We assigned fancy names to them, but
paraphrasing the descriptions:

who - How many people does the flaw affect?
when - How often does the flaw come up in usual use?
what - How much does the flaw interfere with using the product?
how - Is the problem "solvable" or recurring?

To describe each in more detail:

How many people does the flaw affect?

Not sheer numbers so much as the distribution over our user base;
does it affect only developers, or only system administrators, or
everyday data-entry users? How key/critical is that segment of the
user base, how well-suited are they to dealing with usability flaws,
and how forgiving will they be?

How often does the flaw come up in usual use?

Rarely, occasionally, frequently? In assigning this rating, it's
important to restrict your consideration to the population that is
affected (as identified in the "how many" rating). An installation
problem only affects the system administrator who installs the
product, so it'd have a low "how many" rating but might have a high
"how often" rating.

How much does the flaw interfere with using the product?

This ranges from "does it make normal work awkward?" (1) all the
way up to "does it make normal work impossible?" (5).

Solvable or Recurring

This one is tricky; it seems like part of "how often", but it's
really a distinct dimension. Some problems occur only until something
is fixed; others recur because of inherent design flaws, or because
the problem is not something the customers can fix themselves.
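
To make the "index card" idea a little more concrete, here's a
rough sketch of a flaw as a small record carrying the four 1-to-5
ratings. The flaw descriptions and the numbers are invented for
illustration; the real cards were just index cards.

    # Hypothetical flaw "cards" with the four dimension ratings (1-5).
    from dataclasses import dataclass

    @dataclass
    class Flaw:
        description: str
        who: int    # how many people does the flaw affect?
        when: int   # how often does it come up in usual use?
        what: int   # how much does it interfere with using the product?
        how: int    # "solvable" (low) versus recurring (high)

    flaws = [
        Flaw("Installer assumes a fixed directory layout",
             who=2, when=5, what=4, how=2),
        Flaw("Error messages give no hint of what to fix",
             who=4, when=3, what=3, how=4),
    ]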


Factoring It All Together

The next trick was to work out a way to factor these numbers
together to produce an overall severity rating. One of the writers who
had some familiarity with statistics analyzed the process to make sure
it was statistically and mathematically valid. Not that mathematical
integrity was a high concern for *us* (the numbers were assigned half
by feel anyway), but in dealing with engineers, you can't expect them
to respect something that is mathematically wrong.

The method we used was roughly to build three "matrices". Like
multiplication tables, the first and second matrices each factored two
dimensions together to produce a third number. Then we factored those
two numbers together in the third matrix.

"How many" and "how often" were factored together to produce a
"universality" rating, while "how much" and "solvable/recurring"
combined to produce a "functionality" rating. These two factored
together to produce the final overall "severity" rating.

This approach created a de facto "weighting" of the dimensions; we
could have used a formula or equation just as well, but the matrices
were simple and quick, and we used the "universality" and
"functionality" numbers in the report, as well as the "severity".

Steven J. Owens
puff -at- netcom -dot- com

