Re: Quality/validation (long)

Subject: Re: Quality/validation (long)
From: Stuart Burnfield <slb -at- FS -dot- COM -dot- AU>
Date: Fri, 9 Feb 1996 09:25:06 +0800

Steve Jong <jong -at- lightbridge -dot- com> and Charles Good <good -at- AUR -dot- ALCATEL -dot- COM>
have been discussing quality and validation of documentation. How can
quality be measured? Can it be measured?

Here's a message I posted six months ago, concerning a method described by
Gerald Weinberg and Donald Gause in their book, 'Quality Before Design'.
I've never been in a position to try their ideas out but I'd love to know
whether anyone else has. Have you read the book, Steve? They seem to back
up a lot of your suggestions.

Regards
---
Stuart Burnfield (slb -at- fs -dot- com -dot- au) Voice: +61 9 328 8288
Functional Software Fax: +61 9 328 8616
PO Box 192
Leederville, Western Australia, 6903

------------------------------------------------------------------------

Date: Sat, 15 Jul 1995 11:53:12 +0800
Subject: Re: Assessing quality and customer satisfaction
To: Multiple recipients of list TECHWR-L <TECHWR-L -at- VM1 -dot- ucc -dot- okstate -dot- edu>

I read a book about four years ago called 'Quality Before Design'. One of
the authors was Gerald Weinberg, of 'The Psychology of Computer
Programming' fame. I thought the book was excellent and I've tried
to summarise it below (sorry, it's a bit long).

The theme could be summed up as: "How do you know if your development
project is still on the rails?"

The examples in the book are based on the design of a control panel for
a lift/elevator, but the method described could apply to the development
of any computer system, device, product, manuals, or whatever.

The authors show how you can use regular surveys of the product's
stakeholders to tell how satisfied they are with the current design
of the product and with the progress of the project.

A 'stakeholder' is anyone with an interest in the end product --
users, developers, marketeers, technical support people, etc.

Most surveys are rubbish. What I really liked about this book is that
it shows how to avoid most of the common problems with surveys, such as:

- badly designed questions - they don't let users say what they want
- most people hate answering surveys anyway - low response rates
- the sample is poorly chosen - replies don't represent the views of the
  whole population
- the survey produces impressive-sounding but meaningless statistics

...and the fundamental problem of any development project:

- developers, users, and other stakeholders have different goals and
measure success in different ways.

Briefly:

- Stakeholders list the attributes of the product that are most
  important *to them*.
- The most popular attributes are chosen to represent stakeholders'
  feelings about the current state of the product/design.
- Stakeholders are surveyed regularly from start to finish of the
  project. They rate each attribute on a sliding scale, from 0 to 10,
  or -3 to 3, or whatever.
- All the responses are averaged for each attribute.
- The numbers don't matter. Changes in the numbers matter.
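
To make the mechanics concrete, here's a rough sketch in Python of how the
tallying might work. This isn't from the book (which doesn't prescribe any
tooling); the function name, attribute names, and ratings are all invented
for illustration:

# Sketch only: average one survey round, attribute by attribute.
def average_ratings(responses):
    """responses: list of dicts mapping attribute name -> rating (e.g. 0-10).
    Returns a dict mapping attribute name -> average rating for the round."""
    totals = {}
    counts = {}
    for response in responses:
        for attribute, rating in response.items():
            totals[attribute] = totals.get(attribute, 0) + rating
            counts[attribute] = counts.get(attribute, 0) + 1
    return {a: totals[a] / counts[a] for a in totals}

# Example: three stakeholders rate two of the chosen attributes.
round_1 = average_ratings([
    {"Easy to use": 4, "Reliable": 7},
    {"Easy to use": 3, "Reliable": 8},
    {"Easy to use": 2, "Reliable": 6},
])
print(round_1)   # {'Easy to use': 3.0, 'Reliable': 7.0}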

For example, a poll of stakeholders comes up with a list of twenty
attributes for our new Widget project. These include:

Technical wizardry (programmers, engineers)
Easy to use (support, marketing, users, documentors)
Inexpensive (users, marketing)
Reliable (support, users)
Simple, robust design (documentors, support)
Easy to maintain (support, users, marketing)
Attractive (marketing)
Multi-lingual (support, users)
Uses standard, easily available parts (support)
... and many more

The person who contributed 'standard parts' agrees that this is covered
by 'Easy to maintain'. Some other attributes are combined. The top seven
attributes are chosen by stakeholders from the combined list.

A survey is done after the draft design is released. Most results look
OK but one thing stands out: 'Easy to use' is comparatively low. The
design team talks to the user reps and the documentors, who turned in
many low votes. Nearly all of them think the proposed front panel
design is an arcane nightmare. A revised design is sent around, and
later surveys show a steady improvement in 'Easy to use'.

Later, 'Reliable' drops from 3.6 to 3.2 to 2.1. The numbers themselves
don't mean anything, but clearly stakeholders are less happy with this
attribute than they were in previous surveys. It turns out that testers
and documentors are finding many bugs and not all are being fixed. The
programmers are ritually flicked with wet beach towels and the rating
for 'Reliable' improves in later surveys...
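
Carrying on the sketch above (again, an illustration rather than anything
from the book), "changes in the numbers matter" boils down to comparing each
attribute's average with the previous round and flagging any sizeable drop.
The 0.5 threshold here is an arbitrary example:

# Sketch only: flag attributes whose average fell noticeably since last round.
def flag_drops(previous, current, threshold=0.5):
    """previous, current: dicts of attribute -> average rating for two rounds."""
    return {a: (previous[a], current[a])
            for a in current
            if a in previous and previous[a] - current[a] >= threshold}

# Using the 'Reliable' figures from the example above:
print(flag_drops({"Reliable": 3.6}, {"Reliable": 3.2}))
# {} -- a drop of 0.4 is under the threshold
print(flag_drops({"Reliable": 3.2}, {"Reliable": 2.1}))
# {'Reliable': (3.2, 2.1)} -- flagged for follow-up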

Unfortunately the book belonged to my previous employer and I left
before I had a chance to try these ideas out. I tried to order the book
last year but it was out of print. I believe Weinberg and his partner
ran/run seminars explaining these ideas -- perhaps copies are available
to attendees or can be ordered through the authors. If you have
questions, contact me and I'll try to answer them.

---------------------------------------------------------------------------

Geoff Hart <geoff-h -at- MTL -dot- FERIC -dot- CA> said:
> In the ongoing "cost/value of publications" thread, Dick Dimock raised
> the issue of assessing customer satisfaction. We're soon going to be
> trying our first-ever "survey" of our clients' level of satisfaction
> with our reports...(snip)

> A few leading questions:
> - how can you get good response rates from your mailouts?
> - what quality metrics do you use?
> - is usability testing more useful than reader response forms and
> subjective interviews? (e.g., choose three messages, write the report,
> and assess if the readers got the same three messages you intended)
> - is stratifying our audience into four main (relatively homogeneous)
> groups a manageable task? Too many groups? Too few?

