
Subject: RE: Baiting for the single source rant - Bill's last words this time around
From: HALL Bill <bill -dot- hall -at- tenix -dot- com>
To: "TECHWR-L" <techwr-l -at- lists -dot- raycomm -dot- com>
Date: Fri, 7 Sep 2001 16:33:15 +1000 (EST)

Since I did end up with a few minutes this morning, I answered a similar
thread on the XML-docs forum about how to get the most out of content
management/single sourcing technologies. I will copy the meat from that
here.

The bottom line of this rant is that we are all trying to reduce the amount
of redundant text we have to author and maintain over a document's life
cycle (i.e., the kind of information that is inevitably used many times over
in a corporate documentation set). "Single sourcing" is one approach that
works very well in some situations, and there are at least two others where
you can gain major benefits from working in a structured text environment
(e.g., SGML, XML, or possibly some kind of bespoke merge table or database
structure).

Basically, I see three qualitatively different ways to reduce the effort
required to manage the redundancy and improve document quality through the
ability to author/edit/manage the redundant information at a single point:

1. "Single sourcing" multiple outputs from a single master document.

This is applicable to documents which are structurally very similar but may
have alternative elements relating to different product configurations or
(probably the most powerful use) texts in alternative languages. Text that
is common to all or several versions of the output documents is present only
once, with variant text elements held side by side within structural
containers (not every element in a given container is applicable to every
output). The variant text elements are identified by appropriate attributes,
which the output processing uses to render the specific deliverable
documents.
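
To make the mechanics concrete, here is a rough Python sketch of what the
output-side filtering amounts to. This is not our SIM processing; the
element names, the "applic" attribute and the text are invented purely for
illustration:

import xml.etree.ElementTree as ET

MASTER = """<procedure>
  <step>Isolate the pump at the switchboard.</step>
  <step applic="RAN">Record the isolation in the RAN log.</step>
  <step applic="RNZN">Record the isolation in the RNZN log.</step>
  <step>Drain the casing before removing the cover.</step>
</procedure>"""

def render(master_xml, target):
    # Keep common elements plus variants whose applic attribute matches
    # the target deliverable; drop variants belonging to other outputs.
    root = ET.fromstring(master_xml)
    for parent in list(root.iter()):
        for child in list(parent):
            applic = child.get("applic")
            if applic is not None and applic != target:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")

print(render(MASTER, "RAN"))   # Australian Navy deliverable
print(render(MASTER, "RNZN"))  # New Zealand Navy deliverable

In a real system the "heavy duty" part is that the attributes, the
containers and the publishing rules are far more involved than this, but
the principle is the same: one master, many renditions.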

Tenix found this approach very successful for the maintenance procedures
used on the ANZAC Frigates we are building for the Australian and New
Zealand Navies (see my Technical Communication case study at
http://www.tenix.com/PDFLibrary/91.pdf). Here we manage Navy- and
configuration-specific texts within a single master document for each item
of equipment. In most cases one routine suffices for all 10 ships in the
Class, and in our best case we collapsed 56 separate routines into one. Such
systems are comparatively easy on the authors, but may require some pretty
heavy-duty output processing to render the alternative deliverables.

We used RMIT University's Structured Information Manager (SIM) for our
solution (http://www.simdb.com/); RMIT is our home-town team. SIM also has
several implementations in North America and excellent support there from
SAIC. Other
appropriate toolkits for this kind of application are XyEnterprise's
Content -at- XML -
http://www.xyenterprise.com/solutions/solutions_content_management.asp,
SoftwareAG's Tamino -
http://www.softwareag.com/corporat/products/default.htm, and Excelon
Corporation's XML Platform -
http://www.exceloncorp.com/platform/index.shtml. None are cheap, but for
high value text with complex and demanding delivery requirements, as we
found with the SIM solution for ANZAC Frigate maintenance procedures, they
can pay for themselves very quickly.


2. "Standard texts" established, managed and used as entities.

As Barry noted, this type of approach is particularly suited to things like
warnings, cautions, notes and other kinds of boilerplate text. A lot of
mileage can be gained from very low-cost, even manually managed, solutions
if they are approached with some understanding of the processes. SiberLogic
(http://www.siberlogic.com) has recently developed a generic content
management capability for arbitrary XML elements based on this approach
which I am still trying to find the time to test against some of our
technical manuals. The theory, as explained on the SiberLogic site and as I
have discussed with the SiberLogic people, seems to be sound.
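
In SGML/XML terms the same thing can be done with entities, but stripped of
any particular toolset the idea is simply a shared library of boilerplate
that documents reference by key. A toy Python sketch of the principle (the
keys and wording below are made up, not Tenix or SiberLogic standard texts):

STANDARD_TEXTS = {
    "warn-electrical": "WARNING: Isolate and tag the supply before work.",
    "caution-torque": "CAUTION: Do not exceed the specified torque values.",
}

# A manual fragment references standard texts by key; unique content is
# carried inline.
FRAGMENT = [
    ("stdtext", "warn-electrical"),
    ("para", "Remove the terminal box cover."),
    ("stdtext", "caution-torque"),
    ("para", "Refit the cover."),
]

def resolve(fragment):
    # Expand the references at output time, so each warning is authored
    # and maintained in exactly one place.
    return [STANDARD_TEXTS[value] if kind == "stdtext" else value
            for kind, value in fragment]

for line in resolve(FRAGMENT):
    print(line)

The hard part is not the expansion, it is the management discipline:
deciding who owns the library, and what happens to issued documents when a
standard text changes.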


3. "Virtual documents" assembled from shared elements.

Here a discretely versioned output document is maintained for each
deliverable, but arbitrary elements within the content may be shared. Where
the document contains an element of unique text, that element is held in the
versioned document itself. Where the element already exists somewhere else
in the database, the element in the versioned document simply points to the
location in the database where that particular bit of content was first
created. Output processing is simple: the virtual document is just assembled
from the sequence of referenced elements. The curly bits are in the
configuration and change management areas.

Again, Barry raised many of the versioning and configuration management
issues that need to be addressed if such an approach is to work. In a large
application for high-value documentation, many of these issues can be dealt
with by automation, but of course someone has to understand how the
automation needs to work in order to guide the programming. Again, repeating
Barry's advice, the virtual document approach will also benefit from some
capacity to automatically detect pre-existing text.

RMIT has developed all of the relevant functionality in their SIM DMS
application, and I believe they are implementing it for some of their
legislation management clients, but Tenix is still looking for a big new
project where it will be cost effective for us to implement it.
Documentation work for the ANZAC Frigates is essentially complete, and given
that we solved all of our major documentation issues with the single source
approach, there is little added value to be gained from the comparatively
small amount of document maintenance work remaining.
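
Stripped to its bones, the assembly side of a virtual document looks
something like the following Python sketch. The ids and content are
invented for illustration; a real DMS such as SIM supplies the versioning
and configuration management that make the approach workable:

# Shared element store: each bit of content is created once under a
# stable id.
ELEMENT_STORE = {
    "E100": "Isolate the pump at the switchboard.",
    "E101": "Drain the casing before removing the cover.",
}

# Each deliverable is a discretely versioned "virtual document": an
# ordered list of pointers into the store, with unique text held inline.
VIRTUAL_DOCS = {
    "pump-ship01": [("ref", "E100"),
                    ("text", "Ship 01 only: check the modified seal."),
                    ("ref", "E101")],
    "pump-ship02": [("ref", "E100"), ("ref", "E101")],
}

def assemble(doc_id):
    # Output processing is trivial: walk the pointer list and pull content.
    return [ELEMENT_STORE[value] if kind == "ref" else value
            for kind, value in VIRTUAL_DOCS[doc_id]]

for line in assemble("pump-ship01"):
    print(line)

The sketch makes the point of section 3: assembly is easy, but deciding
what a change to E100 means for every document that points at it is where
the real work lies.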

4. "Combined approaches"

Ideally, the major content management products should be able to support all
three methodologies for managing redundancy. The nirvana for content
management systems boils down to the same thing as for relational databases
- data/text normalisation - where each unique element of text only has to be
written and maintained once, for use many times. Systems we have looked at
closely, such as SIM and Content -at- XML, theoretically have the capacity
to support such normalised document bases. What we still lack is the
practical understanding and experience of how to implement them cost
effectively.

I hope this encourages more people to try.

Bill Hall
Documentation Systems Specialist
Data Quality
Quality Control and Commissioning
ANZAC Ship Project
Tenix Defence
Williamstown, Vic. 3016 AUSTRALIA
E-mail: bill -dot- hall -at- tenix -dot- com
URL: http://www.tenix.com/
