UA Conference Notes

Subject: UA Conference Notes
From: Wanda Phillips <wanda -dot- phillips -at- philips -dot- com>
To: techwr-l -at- lists -dot- techwr-l -dot- com
Date: Mon, 17 Apr 2006 12:53:43 -0700

Here are the notes I sent back to my team from the WritersUA conference:

Well, the scavenger hunt is over and I've left Anne and Abir with a large
group of fellow conference goers in a crowded and noisy bar in downtown
Palm Springs. Downtown Palm Springs is two or three times the size of
downtown Bothell. Except for the desert, it reminds me of downtown
Kingston, Ontario.

Opening Session
The opening session consisted of a panel of pundits making predictions.
Interesting predictions, and a few of them touch lightly on our work. Some
of the predictions were completely contradictory and, so, cancel each
other out.
The predictions were organized into three categories:
Tools & Technologies
User Assistance
IT Industry
For Tools & Technologies, it was predicted that RoboHelp will survive.
Many people in the room cheered. Another prediction was that a software
product would arrive that would be completely transparent for developing
content; writers would no longer need to understand any coding to be able
to generate output of varying types. Another prediction: animation and
simulation would revolutionize online learning. A more difficult
prediction to summarize was the one that said that wikis and blogs will
replace email-based assistance. Having no idea what email-based assistance
would look like, I had no idea what this prediction meant. That presenter
did go on to say that content management systems will increase in
importance. Finally, AJAX, which is a combination of HTML, JavaScript,
CSS, and XMLHttpRequest (there's a mouthful for you), will emerge as the
dominant UA delivery mechanism on the web.
At this point, I'm thinking, am I in the right place? What on earth does
any of this have to do with breaking our content from unstructured
monolithic sections into manageable, reusable chunks that can be found,
reused, and so forth?
As I did not sit near the door, there's more.
For User Assistance, it was predicted that Web 2.0 and DIY apps would
replace vendor-provided content with a user-community support solution.
For those of you not geeky enough to know what Web 2.0 is, there are
articles I can point you to, but basically it's the current label for the
web as a community. Next, one presenter assured us all that print is not
dead. He also said that user assistance product development will find
itself being encroached upon by other groups, particularly e-learning
specialists and SMEs. Not really something I foresee in our near future.
The web is changing! The web is changing! Localization costs are driving
companies to XML-based publishing and the need to have a simultaneous
release in multiple languages is leading to component translation (finish
a chapter, send it to translation, finish the next chapter, send it to
translation...). The last prediction was that there will be an
anti-technology backlash in the user assistance development community. We
will demand that magical editor.
Finally, the IT Industry predictions. Open source software will make
headway in the marketplace. The trend in China of controlling the
internet will spread to the US. (The speaker had read a recently
declassified document in which the military outlines its ability to
disrupt communications if and where necessary.) Innovation is limited and
the industry is slowing down; on the other hand, can we deliver content on
the omnipresent iPods? The internet will crash. (See, the web is going
away!) And, finally, the web is the only platform that matters to
consumers. More applications are being developed to be delivered, as in
run, from the internet or from a centralized server. Goodbye desktop
computer, hello hand-held wireless device.

After that I had two coffees. I met Anne and Abir in the hallway.

I cruised the booths, but did not stop to talk with anyone (yet). I did
drop my fake business card into the drawing for a free conference pass for
LavaCon. Did I mention that it's in Hawaii?

DESIGNING REUSABLE CONTENT
What did I want? How do we design our process around developing reusable
content? That question was not really answered. But...
- some reuse issues are affected by copyright issues (not really relevant
to us unless we start using other people's work in our content)
- attempting to accommodate multiple learning styles is not necessarily
compatible with re-use, as the different learning styles really require
different kinds of information, not simply repackaging of the same content
- if you are maintaining content that is delivered over the web, you need
some mechanism to ensure that content downloaded or printed is identified
as non-controlled and you need a mechanism for notifying users of the new
content as it is pushed to the site
- there is a continuum of content sharing (we're somewhere in the middle):
- updating existing content (delete out-of-date and add new) aka
revision or versioning
- borrowing (copy, paste, maybe revise)
- systematic reuse
- short segments (sentence, paragraph)
- derivative (larger units such as procedures, segments,
illustrations)
- large portions of content (entire sections)
- personalization - reassemble content according to a user
profile
Guidelines
- learn the technology of re-use, which includes managing files, controlling
versions, and managing workflows (we're there)
- make important decisions early, which means decide what you want to re-use
and at what level (we've done that and we're redoing that for chunking)
- prepare a repository (which we've done, but I'll cheat and add a thought
from a later session, which is that we need to consider the hierarchy of
our content repository in light of content sharing)
- design processes that ensure people reuse content (well, it's true that
technology itself can take you only so far; people have to search for
content and use it)
- use templates (religiously)
- write generally to facilitate reuse (this has implications beyond the
simple rules presented, another style issue to be determined before we
dive into the deep)
- use only system-generated xrefs (we do this, but it did raise, for me, a
thought about xrefs in shared content and how to ensure that the
destination is included in all books containing the originating topic)
- avoid unnecessary product names, versions, and models (use them if the
statement applies to a specific occurrence, such as "if you are using a
model X DVD player, expect these behaviours")
- don't reference other content in your lead-ins and don't number your lists
in lead-ins
- contextualize reused content by adding glue where needed (depends on the
chunking, so if we chunk to the paragraph level, sometimes reuse means we
have to write a leading para to ensure the context is in place)
- ensure visual consistency (we do this, the Philips way)
- make sure you own the copyright of everything you publish (or at least
that you have permission to publish that which you do not own)
- plan for maintenance
- ultimately, re-use should be driven by the content, not by production

CREATING SELF-AWARE NAVIGATION DEVICES
This was not what I was expecting, and I'm sorry to say I got caught up in
the geekiness of it and did not move on to another session. It was a
session that did not provide us much real benefit, but in the spirit of
whatever, I'll try to apply something from it to what we do.
This presentation was about some nifty little tools that this guy
developed to allow writers to embed navigation, automatically, in
HTML-based content. The tools themselves do not support the level of
complexity we provide in our related topics. But, he did raise some
interesting points for me. He talked about browse sequences and bread
crumbs. With the table of contents visible, our general rule on related
topics is that we need not mirror the table of contents, only include
topics that are related but not in the manner visible in the table of
contents. A browse sequence presents a theoretical path through the
content. This is somewhat in line with what I was trying to accomplish
with my overview topics where I outline a basic study process for a QLAB
plug-in and associate those tasks with other topics in the documentation.
The problem is that the tool he presented would not allow us to have
multiple paths through a topic. But it got me thinking about paths
through our content and how we direct our users to the appropriate
content.
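As a rough sketch (mine, not the presenter's tool), a browse sequence in HTML-based help boils down to previous/next links embedded in each topic, with breadcrumbs echoing the table of contents; all file and topic names here are invented:

```html
<!-- Hypothetical help topic: breadcrumbs mirror the TOC hierarchy,
     while the browse sequence suggests one path through the content. -->
<div class="breadcrumbs">
  <a href="index.html">QLAB Help</a> &gt;
  <a href="plugins.html">Plug-ins</a> &gt; Basic Study Process
</div>
<h1>Basic Study Process</h1>
<p>...topic content...</p>
<div class="browse-sequence">
  <a href="loading_images.html">&lt; Previous: Loading Images</a> |
  <a href="measuring.html">Next: Taking Measurements &gt;</a>
</div>
```

The limitation I mentioned shows up right here: each topic carries exactly one previous and one next link, so a topic that belongs to several study paths can only sit in one sequence.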

TOP TEN TIPS - TAXONOMY
Just to let you know, Seth is a far better presenter in person than in
webinars.
- Don't boil the ocean. Clearly identify the purpose of your taxonomy.
Set concrete objectives (not "make content easier to find" but "reduce the
time spent developing new content").
- Determine how you will apply the taxonomy. A taxonomy provides a
categorization of content. It can influence the navigation, but it is not,
itself, a navigation tool.
- Align the strategy with the enterprise goals. Link the project (developing
a taxonomy) to the larger goals of the company. Include other groups that
may benefit. Identify the ROI. Link it to the business imperatives (save
money).
- Identify a baseline and metrics.
- Focus on a specific audience. You'll expand later, especially since...
- Plan change management.
- Don't strive for perfection.
- Leverage other initiatives.
- Leverage existing systems, methods, and architectures.
- Plan for training.
This was the session that I got the most out of, but it's almost 10pm and
I'm too tired. If any of you are interested, I'll tell you more later.

After all the sessions, we gathered for a wee pool-side party. I met up
with Anne and Abir again. We went out on the evening's treasure hunt. The
main streets of downtown Palm Springs have stars in the sidewalks. We had
clues like "The star of what German femme fatale fronts a coffee bar?" The
answer: Marlene Dietrich.

DAY TWO
If days had themes, like Disneyland, today's theme was Gosh! Look at what
I can do!

Vendor news
I've chatted with Andrew, from Quadralay. He presented in the webinar
about their solution for Cisco that had Anne-Marie rubbing her hands in
glee but left us writers dumbfounded. As he and I talked, I got to see a
bit about how his proposal works. This, then, is the summary of my
understanding. Any errors are mine.
We can continue to author in Frame, building sections and books. The
sections can either be sections as we know them now or collections of
topics brought in as text insets. There is no difference for the FrameLink
replacement tool. It just connects us to the Docbase and retains,
for us, some of the basic functionality of FrameLink. A second tool does a
bunch of fancy processing work that includes stuff we don't do right now
because we don't use DITA. The thing is, we can be authoring shared topics
in XMetal and placing them in the midst of product-specific content
authored in Frame files. We could be authoring topics in Frame, storing
them in the Docbase and then pulling them into a Frame section file (as
text insets) and building books. The tool seems to offer a great deal of
flexibility and it gives us breathing room for the transition. And we need
time because we need to geek out a bit before we can settle back into
routines. The new version of WebWorks, by the way, looks great! The second
tool allows you to play with the structure of those insets without going
into Frame and then generate output using this shuffled deck.
Madcap looks like a cool editor, reminiscent of the Dreamweaver interface.
Madcap is an XML editor but it does not connect with a CMS. The next
version will support importing FrameMaker files.
I ran into Su-Liang from BlastRadius. She assures me that they have looked
into our call-outs issue and that they can provide the feature.
Sessions
Or, what did I do on my summer vacation...
If you have no real interest in how we get from the content you write to
the end result, skip over this part. Today I wrote XSLT and XPath programs
to convert content to HTML, filter content using attributes, include a CSS,
and insert cross-references for a browse sequence. Then I explored the
concept of identifying the attributes required for accomplishing this in a
more complex scenario (as in the real world). Finally, I explored the
rationale behind choosing DITA or DocBook. I think, at the end of the
day, that we're on the right road. I think that well-structured content
can be transformed into high-quality output using re-usable scripts,
transformations, and CSS. Since the true focus of this conference seems to
be web-based delivery (as the forward-looking technology), there was not
much about print delivery other than Saul Carliner's statement on the
first day that print is not dead.
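For the curious, the attribute-filtering exercise looked roughly like this. This is my after-the-fact reconstruction, not the course files; the source vocabulary (topic, title, body, p) and the audience attribute are invented for illustration:

```xml
<?xml version="1.0"?>
<!-- Sketch: convert a simple topic to HTML, link a CSS, and drop any
     paragraph whose audience attribute doesn't match the parameter. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:param name="audience" select="'user'"/>

  <xsl:template match="topic">
    <html>
      <head>
        <link rel="stylesheet" type="text/css" href="help.css"/>
        <title><xsl:value-of select="title"/></title>
      </head>
      <body>
        <h1><xsl:value-of select="title"/></h1>
        <xsl:apply-templates select="body/p"/>
      </body>
    </html>
  </xsl:template>

  <!-- Filtering: untagged paragraphs always pass; tagged paragraphs
       pass only when they match the audience parameter -->
  <xsl:template match="p">
    <xsl:if test="not(@audience) or @audience = $audience">
      <p><xsl:apply-templates/></p>
    </xsl:if>
  </xsl:template>
</xsl:stylesheet>
```

The real-world version would need far more templates, but the shape is the same: the conditions live in attributes on the content, and the stylesheet decides what survives into the output.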
Other News
The most important thing to know is that I have NOT won a single prize. I
did get a green pen in the shape of a palm tree as a reward for playing in
the treasure hunt. I've watched complete strangers walk off with my
software and iPod prizes. No condolences necessary.
I spent 10 minutes trying to get a photo of a hummingbird. I could not
keep up with it through my view finder, so it was a no-go. Kind of like
the prizes!
There is a wind farm up the valley a bit, I'd love to go on a tour.
This evening I ate more free food, thanks to the folks who make the Madcap
software. And trust me, none of them are the least bit bitter about their
personal histories as RoboHelp developers. And I'm 6 foot and blonde.
Anne does, indeed, know everyone. If she doesn't already know them and
they stand within a metre of her, she'll get to know them.

DAY THREE
DITA DITA
DITA in the morning, DITA in the evening, DITA round supper time, I spent
my day on DITA, DITA all the time...
Yes, indeedie, folks, two back-to-back sessions on DITA and, in spite of
our nail biting, we're on the right track, doing the right things, and we
could be poster-children for the world of delivering agile and intelligent
content. I think we should walk about the halls greeting each other with a
hearty hello and pat on the back. We're brilliant!
MIGRATING CONTENT TO DITA
Keep in mind that even though IBM has tweaked and tested DITA for years,
in terms of the standard, DITA is a baby; new technologies require there
to be the early adopters, the geeks that are willing to feel the pain and
provide the path to gain. We, I'll have you know, are not considered early
adopters! No! The early adopters are those poor people who dove into DITA
without the benefit of commercial tools (few and far between as they are
right now). Imagine considering implementing DITA without XMetal,
Arbortext Editor, or Quadralay to say "Here is the first wave of
writer-friendly tools for getting your work done in this new
architecture." We're close to early adopters, but we are on the next part
of the curve.
Not that being first is necessarily a GOOD THING. But, I think our natural
caution and nail biting are standing us in good stead. Under the guidance
of Anne-Marie, we're walking through a beautifully crafted process that is
the delight of all with whom I have shared any part of it. I dream now of
ROI metrics, pilot projects, and gradual integration. I savour words such
as leveraging and reuse. I'm drunk on the sunshine and bonhommie of the
crowd. The Cardiac Patient Monitoring people, so hard to find until today
(although Anne has tracked them down and has been my sign-post for finding
people), allowed me to take their photo at lunch. They seem dismayed by
the recent shakeup and uncertain about the flavour of management they can
now expect.
They have been entertaining themselves with the Ask Barbara posts.
Okay, I got distracted there.
Anne is pretty much bought into AuthorIT. So, Monday will be an
interesting day.
Back to migrating content. My notes highlight the following thoughts:
- evaluate your business requirements
Most companies do NOT advertise their savings, as that becomes a
competitive advantage. That is such a PITA, as we need to know how
effective DITA can be and where to set the expectations. Anecdotal
evidence, which is an oxymoron if I ever heard one, indicates that most
companies experience a 30-70% cost reduction in translation. Interesting.
question then is what procedural changes were a part of that? Did they
send entire documents out for translation and now they send only the
changed bits? Do they handle production in-house or does the translation
agency handle it? Are they also introducing a CMS? Questions, questions,
questions. See what a Green Belt class does to your brain? What were the
metrics, what changes are made, how were the measurements gathered?
- in performing the transition, there are two levels, or types, of
evaluation milestones in which you perform an analysis
In the first instance, you evaluate your business need, legacy content,
and processes to see if the transition will provide a positive ROI. If you
decide, as we have, to make the transition, you need to perform an
analysis that first reexamines your documentation needs, without reference
to your legacy content, and then a second stage where you evaluate what of
your legacy content can make the transition. Based on my experience with
our content and the work I've done with QLAB content, I think that our
content is very well structured (in spite of our lack of a DTD) and moves
well into the structured format imposed by a DTD (any DTD, not just DITA,
though it does also fit very well into the DITA model).
It would be nice to come up with some metrics, such as new content
created, to be able to show to those to whom we go for money.
DITA will, during processing, exclude any xrefs that do not have a target
in the output. What that means is this: if we insert an xref to
Configuring Your Printer Driver and that topic is part of the iU22 but not
the iE33 documentation, when the parent topic containing the xref is
included in a book and that book is processed (output into a CHM or PDF),
the xref only appears in the iU22 output. Now, that said, if the xref is
For more information, see Configuring Your Printer Driver and the leading
text (For more information, see ) is not conditionalized, we'll have funny
little stumps littered about our content.
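To make the stump problem concrete, here's how I imagine the markup would need to look. This is my sketch using DITA's ph element and the product conditional attribute; the topic file name is invented:

```xml
<!-- If the whole phrase carries the product condition, the lead-in
     text is filtered out along with the xref, so the iE33 output
     is clean. -->
<p><ph product="iU22">For more information, see
  <xref href="configuring_printer_driver.dita"/>.</ph></p>

<!-- If only the xref is conditional, the iE33 output keeps the
     stump: "For more information, see ." -->
<p>For more information, see
  <xref product="iU22" href="configuring_printer_driver.dita"/>.</p>
```

So the rule of thumb would be: conditionalize the whole sentence, not just the link.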
One other interesting thing about xrefs: you don't have to include them
in the topic. You can create reference maps that associate a list of xrefs
with a topic. So, you create a list, outside of the topic, but associated
with the book, that says the topic Configuration Options includes xrefs to
Configuring Your Printer Driver and Configuring Your System Security.
These can be automatically inserted during processing.
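In DITA terms, I believe this is done with a relationship table in the map rather than anything in the topic itself. A sketch, with file names invented to match the example:

```xml
<!-- Related links defined in the book's map, outside the topics.
     During processing, links between topics in the two cells of a
     row are generated automatically in each topic's output. -->
<map>
  <topicref href="configuration_options.dita"/>
  <reltable>
    <relrow>
      <relcell>
        <topicref href="configuration_options.dita"/>
      </relcell>
      <relcell>
        <topicref href="configuring_printer_driver.dita"/>
        <topicref href="configuring_system_security.dita"/>
      </relcell>
    </relrow>
  </reltable>
</map>
```

The appeal for us is that the links live with the book, so the same topic can carry different related links in different books.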
Okay, now the presenter was NOT from BlastRadius and he said: Of the three
packages that actively support DITA authoring, XMetal provides the most
comprehensive support for DITA.
I suppose your eyes are crossing now and you're wondering when I'm going
to get on to the interesting news, like what happened with the
hummingbird...
But, I will instead, regale you with the next session:
USING DITA FOR CONDITIONAL CONTENT
Wow. I'm really impressed. Of the things that Frame does that we were
worried about losing, let me tell you, DITA and conditional content had me
dancing a waltz in the hall with the presenter.
Oh there is so much to say.
DITA supports a greater range of conditional content than Frame. We will
no longer have the monstrous lists of tags that we use to work around
Frame's limitations. (Frame uses OR logic for producing output based on
tags: if you have a paragraph tagged as both iU22 and Help, in a section
you share with the iE33, and you want to show iE33 Help, you'll get that
iU22 paragraph unless you do as we have done and use iU22 Help as a single
tag.) I have a lovely table that shows DITA's support of multi-axis
conditions in your content.
You can tag content as conditional regardless of its element: so
instructions, descriptions, images, whole tables, table rows, topics,
sub-topics... la la, I hope you get the idea. Now, if you want to have a
book but filter content at the topic level, you have two options: one,
create a book map that does not include the topic or, two, conditionalize
the topic reference in the map!! I don't know if any of you are familiar
enough with this, but for the service guys this would be brilliant! You
change the DITAVAL settings and the content is filtered at the topic level
without opening up the book again!
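For the curious, here is a sketch of what such a filter file might look like. The val and prop elements are real DITAVAL markup; the attribute values are just our familiar examples:

```xml
<!-- DITAVAL filter file: change this and reprocess the book;
     the map and topics themselves are never touched. -->
<val>
  <prop att="product" val="iE33" action="include"/>
  <prop att="product" val="iU22" action="exclude"/>
  <prop att="audience" val="service" action="exclude"/>
</val>
```

Because the conditions combine across attributes (product AND audience), this is exactly the multi-axis filtering that Frame's OR logic forces us to fake with compound tags.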
Oh, the beauty of it. I'm going to come home and show you, I could write
for the next hour about the application to our content and the joy it
would bring to all of us, even Fred, to put this into play.

UA in Games
I know, you're thinking, okay Wanda went to play. Nope. Here are my
thoughts. Games want to integrate user help without interfering with the
game. Given the suspension of disbelief required for some games, how do you
communicate with your gamers? Does that not resonate with providing user
help on ultrasound systems? I did really get some interesting ideas from
this, but they are complex, as are most of my ideas when I first have
them. Trust me, as much as I enjoyed this session it was because I saw a
way to leverage what's happening in UA for games in our wee universe.

Language is a virus...
Finally, the closing session. A brilliant woman spoke to us about language
changes. We were delighted. There are implications to what we do, but they
are on the larger scale. They actually give new meaning, in my twisted
mind, to the term: localization. Oh yeah.

Otherwise known as
Okay, I did see the hummingbird again, but could not get close enough to
get a picture. Also, I couldn't get around it and that was important
because the little bird was between me and the sun.
After I left the bird, I encountered what I think was a coyote looking for
a place out of the sun. As the beast laid itself down, it disappeared. It
was wonderful to watch. But, as it was getting extremely hot, I did not
want to disturb either of them. Nor did I want to continue to stand there
like a dumbfounded tourist slowly baking in my layers of clothes and
boiling in my pounds of sun-screen.
I am sitting now in my hotel room with the black-out curtain drawn, in the
near dark, finishing this up and wondering if I can get my shoes back on
for a final venture into the downtown area.

Wanda Phillips
Senior Technical Writer, Technical Communications, Ultrasound
425-487-7967
Wanda -dot- Phillips -at- philips -dot- com