Subject: Re: Anthropomorphism is bad because...
From: Chris Despopoulos <despopoulos_chriss -at- yahoo -dot- com>
To: techwr-l -at- lists -dot- techwr-l -dot- com
Date: Wed, 23 Jun 2010 06:10:03 -0700 (PDT)

A few points...

1) Janice is OK with "waits", but not with "expects" or "anticipates". For "anticipate" I presume she only objects to the definitions that imply necessary cognition. But to foresee a condition and prepare for it, or to meet a requirement before its due date, a system doesn't need rational thought. Systems anticipate conditions all the time, are often wrong, and are often right. You can go down the biological chain to find animals doing things you would not balk at calling anticipation, with no less a mechanical chain of preconditions and responses than we see in systems today. B. F. Skinner did wondrous things with pigeons in boxes.
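To make "anticipation without cognition" concrete, here is a deliberately toy sketch (the class and names are my own invention, not any real library): a read-ahead cache that "anticipates" the next request from nothing more than a precondition/response rule.

```python
class ReadAheadCache:
    """Toy read-ahead cache -- a hypothetical illustration, not a real library."""

    def __init__(self, source):
        self.source = source   # backing store: block index -> data
        self.cache = {}        # prefetched blocks
        self.last = None       # last block index requested

    def read(self, block):
        hit = block in self.cache
        data = self.cache[block] if hit else self.source[block]
        # Mechanical "anticipation": if access looks sequential,
        # prefetch the next block before anyone asks for it.
        if self.last is not None and block == self.last + 1:
            nxt = block + 1
            if nxt in self.source:
                self.cache[nxt] = self.source[nxt]
        self.last = block
        return data, hit

store = {i: f"block-{i}" for i in range(5)}
cache = ReadAheadCache(store)
cache.read(0)                 # miss
cache.read(1)                 # miss, but sequential -> prefetch block 2
data, hit = cache.read(2)     # hit: the cache "anticipated" this read
```

The cache foresees a condition and prepares for it, is sometimes right and sometimes wrong, and there is not a shred of rational thought anywhere in it.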

My point here is that we have a shifting line. Where do you draw it? Not only can we argue the point with today's technology, but technology is pushing the boundaries. So is biology, in the other direction. Humans have unique responses that cannot be reproduced by machines, but it turns out these tend to originate from lower functions -- emotions, endocrine responses, etc. Antonio Damasio is a neurologist breaking this ground; read his "Looking for Spinoza" for an easy treatment. Anyway, on the one hand, systems are doing more and more of the things we ascribe to humans (we really *are* anthropomorphizing them), and on the other, more and more of the things humans do are turning out to be mechanical. So the line is shifty today, and will be shiftier tomorrow. A brief and incomplete list of actions machines would never have been granted 100 years ago:
* Search
* Check spelling
* Check grammar
* Diagnose root causes
* Warn of possible problems
* Suggest possible actions (or movie choices) with reasonable success
* Identify people by face
* Analyze spending habits to discover potential terrorists
* Decide when to buy or sell

How will this list have grown by 2099?

2) What machines don't do is fight you to the death to keep your finger away from the off button. When that happens, machines will begin to approach qualities that are unique to living systems. I think a large part of the urge not to anthropomorphize machines is sheer vanity. We hold our intellect so high that we can't bear to see its mechanism exposed. Machines commit that heresy, and we must combat it with our language! I've said it before, and I'll say it again -- A.I. is having so much trouble because intelligence isn't all that it's cracked up to be.

It won't be long before biology gets added to the mix. Look for viruses that evolve, sexual reproduction in worms, and who knows what among the various computer parasites. For all we know, that's already happening in the lab. What's to keep us from making a program that "anticipates" when you'll shut it down, and "hides" its loaded memory on another node on the network? When these things start to happen, when machines translate survival into pleasure/pain, then you'll have what looks like a living basis for decisions. But we do these things *before* we embellish them with our intellect. Machines are prosthetics for that embellishment -- we need language to describe these prosthetics. If our language is so old that only anthropomorphic terms will do, so be it.
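The "anticipates shutdown" thought experiment is easy to sketch today (the names STASH_PATH, stash, and on_sigterm below are my own hypothetical inventions): a process traps the termination signal and serializes its state somewhere a restarted copy could find it. Entirely mechanical, yet it reads naturally as "the program expects to be killed and hides its memory."

```python
# Purely hypothetical sketch: a process that "anticipates" shutdown by
# trapping SIGTERM and stashing its state before dying. This is a reflex
# wired in by the programmer, not cognition. (Signal handling behaves
# differently on Windows; this assumes a Unix-like system.)
import json
import os
import signal
import tempfile

STASH_PATH = os.path.join(tempfile.gettempdir(), "survivor_state.json")
state = {"progress": 42, "notes": "work in flight"}

def stash(current_state, path):
    """Serialize state to path so a restarted copy can resume from it."""
    with open(path, "w") as f:
        json.dump(current_state, f)
    return path

def on_sigterm(signum, frame):
    stash(state, STASH_PATH)   # "hide" the loaded memory on disk
    raise SystemExit(0)

signal.signal(signal.SIGTERM, on_sigterm)
```

Nothing here translates survival into pleasure or pain, of course; it only mimics the outward behavior we'd describe in those terms.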

3) No doubt, audience is an important consideration. If "expects" or "anticipates" really will cloud understanding, then by all means avoid them. But again, I don't think it's hard to show that anticipation can be reactive, and that machines anticipate all sorts of things you might do. It's not too hard for common language to turn "anticipate" into "expect." That might not be a good evolution, but there it is.

4) Before Windows 7 I might have believed that my computer is incapable of deliberately provoking me. Thanks to a recent upgrade, I know better.


