
Musings: Complaints


The 20 most egregious misuses of words in American English
and some near misses

Here is my list of candidates for the most misused, or mispronounced, words or phrases in the English language—as used in these United States. I’ll ignore all of the horrible mispronunciations the British are guilty of. Gawd! They invented the language and they don’t even know how to pronounce it. (Of course, they probably think the same of us, but I conveniently choose to ignore that.) Hey, “patriot” does not rhyme with “rat”—know what I mean?

Let’s face it, folks: The English language is going to hell in a hand basket. Every day it gets worse. Our language is being destroyed before our very ears. There was a time when only the unwashed masses did the kind of things I am about to complain about. Nowadays, everybody is doing them. Even the sort of people who should—and probably do—know better. I’m talking about educated persons here. Even major network news anchors and high-class narrators are guilty of most of the following flubs.

Anyway, here's my list:

1. Lie, lay. Here’s a word that is an endangered species: lie. Today one rarely hears it used. Everyone says “I was laying down” or “I can’t just lay around doing nothing.” You lay things down, but you lie down. You may be lying around, but you most certainly are not laying around. The only correct usage of “lay” is in sentences like “Now I lay me down to sleep,” because here “me” is being used as an object, and you do lay objects down—even yourself. Of course, the past tense of “lie” is “lay”: “After lunch I lay down to sleep for a while”—you could hardly say “I lied down ...” could you?

Originally this item was in 8th place on my list. I have elevated it because, quite simply, the problem with the misuse of “lay” is growing by leaps and bounds. I hear or read it now literally dozens of times a week. I’m sure that “lay” is by now the most misused word in the American lexicon. It seems that the entire population of the United States suddenly has gone illiterate. Will Strunk, where are you now that we need you?

2. May have, might have. There once was a very useful distinction between this pair. According to current dictionaries, there still is. “May” is used to express possibility, “might” to express unrealized possibility. Consider these two sentences:

      The seat belt might have saved Mary’s life.

      The seat belt may have saved Mary’s life.

The first sentence means that Mary is dead, but if she had been wearing her seat belt she might have survived the accident. The second one means that Mary survived the accident, and it may have been the seat belt that saved her life.

Nowadays “might” seems to have suddenly disappeared from the English language. Only “may” is used today. One hardly ever hears “might” or “might have.” Unfortunately, this leaves us not knowing whether poor Mary is dead or alive.

This was originally number four on my list, but I have pushed it up to number two because the misuse of “may” to mean “might” is becoming ubiquitous. Is the entire world going illiterate, or is it just the United States? Come on, people, get on the stick!

3. Continue on. I’m not the first one to complain about this one, and I’m sure I shan’t be the last. Today, virtually everyone uses this abomination. There seems to be a universal addiction to putting an “on” after “continue.” The word “continue” means “go on.” So “continue on” means “go on on.” Well, maybe you want to “go on on,” but I’ll just continue, thank you!

4. Comprised of. This is an atrocity. Yet it is becoming more common all the time. Comprised means, essentially, “composed of.” So “comprised of” means “composed of of”—thus falling into the same category as “continue on.” One should say, for example, that Congress comprises the House and the Senate, not that Congress is “comprised of” the House and Senate. Back when The Elements of Style was written, the chief error in using “comprise” was to use it backwards. The House and Senate do not “comprise” Congress, they “constitute” Congress. Nowadays the misuse of “comprise” has become insidious.

5. Warranteed. There is no such word. There is such a thing as a warranty. But something that carries a warranty is warranted, not “warranteed.” This confusion apparently came about by confusing warranty with guarantee. Things are guaranteed, but never “warranteed.” Also, warranty is pronounced WAHR-an-tee, with the stress on the first syllable, not wahr-an-TEE as though it rhymed with guarantee. Once again, the word has been confused with guarantee. Americans are such simpletons!

6. Unique. There are no degrees of uniqueness, as Strunk and White pointed out so long ago. A thing is either unique—meaning “one of a kind” or “without equal”—or it is not. There is no in between. Phrases like “very unique” or “most unique” are therefore meaningless. I think this word is used incorrectly at least nine times out of ten these days. I even do it myself sometimes. Physician, heal thyself!

7. Try and. God knows where this one came from. The “and” is a substitute for “to” as in “try to do something.” It was one thing when this misuse was confined to spoken, colloquial English. But now it is a growing monstrosity. The first time I remember seeing this usage in formal English was in a 1936 National Geographic article titled “Butterflies: Try and Get Them” (that’s right—1936!). Nowadays everyone and his brother and sister is using this construction, not only in formal spoken English, such as news broadcasts, but in print. I think maybe this misuse first arose from a desire to avoid the somewhat awkward repetition of the word “to” in sentences such as “I’m going to try to avoid that.” Nevertheless, when you stop to think about it, the construction makes no sense. Does “I’m going to try and avoid that” mean one is going to both try to avoid something and actually avoid it in the process? Not the way it is used, it doesn’t. It still means one is going to try to avoid something. Why don’t we just say that instead of using the awful “try and”?

8. Irregardless. There is no such word. The user means “regardless,” and the extra “ir” at the beginning is superfluous and redundant. This usage allegedly came from confusion between regardless and irrespective, which is a real word (but see below). Ignorance, thy name is everyone!

(Ann Landers pointed out recently that “irregardless” is now in the dictionary. True, it is—but it is listed as “non-standard.” The 1990 American Heritage Dictionary wrote, “The label Non-Standard does only approximate justice to the status of irregardless … [critics sometimes insist] that there is ‘no such word’ as irregardless, a charge they would not think of leveling at a bona fide nonstandard word such as ain’t, which has an ancient genealogy.” The word irregardless was, by the way, popularized by cartoonist Al Capp in his “Li’l Abner” comic strip, so now you know whom to blame for this monstrosity. Capp also popularized the word druthers in his comic strip, largely redeeming his part in spreading irregardless.)

9. Semi-truck. In the past few years this ridiculous term has sprung up in the media. What is a “semi-truck” anyway? Half a truck? If so, how is it split—crossways or lengthwise? Either way, it would be less than useless. The abbreviation “semi” refers to “semi-trailer,” of course. And this does make sense, since the “trailers” involved have wheels only at the back. The front wheels of the semi-trailer are supplied by the rear wheels of the tractor that pulls it, so in a sense it really is only half a trailer. What ignoramus first came up with the stupid term “semi-truck”? Do they really think people are so ignorant that they don’t know what a semi is any more? Okay, so they probably are that ignorant—but why not use the correct term “semi-trailer”? Are they afraid people won’t know what that means, either? Arghhh!

10. Affect, effect. Here is another problem that seems to have cropped up fairly recently and is rapidly growing worse. People seem to have forgotten what these words mean. They think that “effect” means “affect.” Here is an example of the misuse of “effect,” made by a highly educated man who was preparing a report on herbicide damage: “This method takes into account how the value is effected by the species of tree, its condition … and location.” This was not a typo, since he repeated the same error in at least seven other places in the same report. Not once did he use the correct “affected” or “affect.” In a bid on some cleanup work connected with the same herbicide damage, the owner of a tree service made the same mistake when he wrote, “… remove all effected by herbicide … plus lower branches effected.”

Folks, this looks like the next language plague!

11. Myself, yourself. What ever became of me and you? These days we hear phrases such as “like yourself” or “my wife and myself” constantly. Has it suddenly become unacceptable to refer to me or you? The “self” words should only be used in sentences like “Did you really do that all by yourself?” or “I don’t think so myself.” Will Strunk and E. B. White railed against things like “between you and I” in The Elements of Style. If they heard people today saying things like “between yourself and myself” they would probably both have a stroke right on the spot!

12. Fun, funner, funnest. The last two words are non-existent, yet one hears them all the time. “It’s funner to do” thus and so. “That was the funnest thing of all.” This used to be a mistake that small children made and adults corrected. Now the adults do it, too. Originally, “fun” was a noun, not an adjective. It seems to have crept into adjective-land insidiously. By the early 1980s dictionaries were listing an adjective form of “fun,” used in sentences like “He’s a fun person,” or “That was a fun party.” This wasn’t too bad, but the illiterati took the ball and ran away with it. Soon they had invented the atrocious “funner” and “funnest” to go with the questionable use of “fun” as an adjective. Mid-1990s dictionaries list these two words, apparently giving in to common usage. To those of us who grew up when “funner” and “funnest” were just plain wrong, these words are an atrocity to the ear. Things can be “so much fun” and others may be “more fun” or the “most fun,” but “so fun,” “funner,” and “funnest” are the worst of bad English. Shame on you, America!

13. Have your cake and eat it too. Balderdash! You can’t eat your cake unless you have it. And once you’ve eaten it, you can’t have it any more. So the saying should be “eat your cake and have it too.” That’s the way it was said when I was a youngster, but nowadays everybody—and I do mean everybody—seems to have forgotten the original in favor of the nonsensical version. The press made much of the Unabomber’s correct usage of this saying.

14. Interdiction. The feds use this word as though it meant “intercept.” It doesn’t. They talk about programs to “interdict drugs at the border” by which they mean programs to stop drugs from being smuggled across the border. The verb “to interdict,” however, means “to prohibit.” Drugs are already illegal, and hence prohibited, so it is impossible to “interdict” them any further. This is another good example of a word appropriated to mean what its user wants it to mean, and to hell with what it really means. (Note added in 2002: When I first wrote this paragraph, it was true. Nowadays our dictionaries have given in to the blatant misuse of this word and have added “interception” as an additional meaning of “interdiction.” Another case of Big Brother dictating the way we must think? It reminds me of Louis the Fourteenth’s famous “The State, it is me!” [“L’Etat c’est moi”], picked up long afterwards by Jersey City “boss” Mayor Frank Hague, who remarked, “The law? I am the law!”)

15. People that, a person that.… You read and hear this one all the time. It’s dehumanizing. The pronoun “that” should never be used as a substitute for “who.” But it is, all the time. If you’re talking about a thing, it’s okay to use “that”—in fact, you can’t use “who” to refer to things. That would be as bad as using “that” to refer to people. You can use “that” when referring to animals, although even then I prefer to use “who.” Animals have personalities, too. It’s okay to use “it” when referring to an animal if you don’t know its gender. If you do know, then you should use “him” or “her” (or “his” or “hers”) as applicable. But you should never use “that” or “it” with respect to people. If you don’t know a person’s gender, use “him” or “his.” You would never know it today, but those two pronouns can refer to either a male or a female when the gender of the individual is not known. And all the so-called politically correct people (who are neither correct nor political) can go soak their fat heads!

16. Data. This is a plural word. You do not say “the data is faulty”; you say “the data are faulty.” A single piece of data is a datum. The word is from Latin and is similar to phenomenon (singular), phenomena (plural). Also, data is pronounced day-tuh, like Commander Data from the TV series “Star Trek: The Next Generation.” The “a” sounds like the one in “day,” not the one in “that.” Interestingly, most technically educated people say day-tuh, whereas most lay people say dat-uh. So if you want to sound erudite, say day-tuh. Okay?

17. Sloppy pronunciations. These are mostly words where a variation is pronounced differently than the root. Examples are irrefutable, comparable. The first is pronounced ir-REF-u-table, not ir-re-FUTE-able. The second is pronounced COMP-ar-able, not com-PARE-able. Misusers of these words are trying to pronounce irrefutable to sound like the root “refute” and comparable to sound like the root “compare.” Then there is the noun “permit,” which is pronounced “PER-mit.” These days one commonly hears it pronounced like the verb “to permit,” which is pronounced “per-MIT.” What nonsense! There are several other words where this same mistake is being made commonly these days. Is nothing sacred anymore?

18. Salchow. Actually, this is not an English word at all. It’s the name of a Swedish ice skater, Ulrich Salchow (1877–1949). He invented a jump, which was named after him. If you’ve ever watched any serious ice skating, I’m sure you have heard an announcer call one of these jumps a “sal-cow.” Every time I hear this it grates on my ears like chalk squeaking on a blackboard. You see, the name of the skater who invented the jump is pronounced “sal-kawv.” The “aw” in this sounds like the word “awe.” It definitely should not sound like “cow.” Pronouncing it that way is a tremendous insult to the memory of the ice skater who invented the jump. Shame on the entire ice skating fraternity!

I’m not sure when this mispronunciation took hold. My 1966 Webster’s Third New International Dictionary (unabridged) gives the pronunciation as “sal-kov” (the first syllable rhymes with “Hal” or “pal”, and the “o” is pronounced like the “a” in “paw”), whereas a 1992 Random House Webster’s College Dictionary gives “sal-kou,” in accordance with current practice. Did the pronunciation change between then and now, or did ice-skaters always mispronounce it and dictionaries have finally admitted it? I suspect the latter. Dick Button was an Olympic champion long before 1966, and he consistently says “sal-cow.”

19. The missing adverbs. These are words that are missing, not misused. I’m referring to such modern atrocities as “passed away” without the “away”; “hanging out” with no “out”; “caved in” without the “in”; etc. There are many other instances of this nonsense, but these three will do for starters. When someone says that a person “passed,” I wonder what he passed: a stop sign? A test? When someone remarks that his child is “hanging” with his friends I get a vision of children dangling by their necks from ropes, gently twisting in the wind. And, last but not least, I always thought that “to cave” meant to explore a cave, not to give up.

This sort of misuse seems to have cropped up mostly in the past few years (2008, at this writing). Okay, so people are lazy, but this is taking laziness to an extreme.

20. Point in time. This has got to be one of the most hackneyed, useless expressions in common use today. If you’ll pardon the pun, what’s the point in using it? What people mean when they say this is “at the time,” “at that time,” “now,” “at this time,” or “at this point.” Why not just say that? A “point in time” is, in fact, a time, pure and simple. In every case where this term is used, either the words “point in” or the words “in time” can be removed without any loss of meaning whatsoever. Instead of saying “at this point in time...” say “at this time...”, “at this point...”, or something similar (in most cases you could use just plain “now”). This is exactly the sort of excess verbiage that Will Strunk used to rant and rail against (see The Elements of Style by Strunk and White).

Even worse than this one is the expression “moment in time.” What other kind of moment is there? Are there “moments in space”? I think not. (There are moments in physics, but those are an entirely different kind of moment.)

Now, here are the near misses:

1. Flammable. Actually, there is nothing wrong with this word, except that when I was a kid it didn’t exist and there was no reason for it to ever come into existence—other than the absolute stupidity of the American public. When I was a lad, gasoline trucks and the like had big warning signs on them that read “DANGER: INFLAMMABLE.” Inflammable means “capable of being inflamed.” In other words, something that will burn. Something that will not burn is non-flammable (or non-inflammable). But—can you believe it?—some of the idiots who run around loose in our society thought that “inflammable” meant “not burnable.” Apparently, some of them must have torched themselves smoking or burning things around gasoline trucks. So the “authorities” made up a brand new word: flammable. They figured anyone would know what that meant. Now the gasoline trucks have big signs on them that read “DANGER: FLAMMABLE.”

But getting back to what “inflammable” means: What kind of ignoramus thinks that someone would go to the trouble of putting a great big warning sign on the back of a gasoline truck reading: “DANGER: THIS WILL NOT BURN”? Now I ask you, does that make any damned sense at all?

Maybe we should have stuck with survival of the fittest.

2. Often. When I was in grade school we were taught that the “t” in “often” is always silent. There were no ifs, ands, or buts about it. Nowadays most people seem to pronounce the “t,” although there are still plenty of people who stick to the silent “t.” Pronouncing the “t” seems to be a British failing (I use the term advisedly), since everyone in British films seems to do it. Why can’t we keep our own pronunciations of words? Unless you’re close to my age, you have no idea how hearing the “t” in “often” grates on the ears of those of us who grew up never hearing it.

You don’t sound the t in “soften,” do you? Then why sound the t in “often”?

3. Envelope. When I was young, nine out of ten people pronounced this word the way it’s spelled: en-vel-ope. Nowadays nearly everyone pronounces it on-vel-ope. Why, I don’t know. The latest dictionaries on the market (1993 editions as of this writing) still list en as the preferred pronunciation. The American Heritage Dictionary that came with the 1996–1997 edition of Microsoft Bookshelf also listed en as the preferred usage. Webster’s lists the on pronunciation with a little symbol that means many people object to that pronunciation. Most words that begin with “en” are pronounced like “end.” Examples abound: entire, encircle, entail, engage, enable, ensure. I could go on and on, but you get the picture. To be sure, there are exceptions, such as entourage and encore. But why add another confusing exception to a language already replete—many would say, overloaded—with them? I, for one, shall continue to say envelope. Damn the torpedoes!

4. Down’s Syndrome. Lately you see this spelled “Down syndrome” a lot. This makes no sense at all. It has nothing to do with being “down,” unless the term has suddenly become a synonym for clinical depression. The syndrome was named after the person who first described it, a physician by the name of John Langdon Down. It is a great disservice to his memory to leave out the “’s” in the name of the syndrome. What makes it even worse is that some of the people who are doing this are medical professionals. They should know better. So “Down’s syndrome” is a bit hard to pronounce. So what? So are a lot of other terms, and nobody is trying to change the way they are spelled. Get with it folks!

Long after writing the above paragraph, I found that in 1975 the U. S. National Institutes of Health convened a conference to standardize the nomenclature of malformations. They recommended that “The possessive use of an eponym should be discontinued, since the author neither had nor owned the disorder.” What a stupid reason! No one ever said or believed that the term “Down’s syndrome” or any other disorder similarly named, such as Alzheimer’s disease, was either owned by or suffered by the person who named it. It means, of course, that he was the person who discovered it. That is all such a term ever meant, and that is what it should mean today.

Many diseases and syndromes are still known by “the possessive use of an eponym.” Examples of this include Addison’s disease, Alzheimer’s disease, Crohn’s disease, Cushing’s syndrome, Hashimoto’s thyroiditis, Huntington’s disease, Lou Gehrig’s disease, Ménière’s disease, Munchausen’s syndrome, Parkinson’s disease, Raynaud’s syndrome, Reye’s syndrome, and many more. All of these have retained their “possessive eponyms” and are still very much in use today. Why single out poor Dr. Down to “disenfranchise”?

Frankly, I think the National Institutes of Health should be censured for their stupidity. This is one of the most ridiculous things I ever heard of.

5. Appalachians. These are the mountains that run up and down the eastern states. The name is pronounced Appa-lay-chians, not Appa-latch-ians. People in some parts of the U. S. consistently pronounce it the second way. If they knew how it grates on the ears of those of us who grew up with Appa-lay-chians, they might quit. For one, John Coleman, the TV weatherman, did stop pronouncing it that way and began pronouncing it the right way. Way to go, John!

6. Storey. This was once a very useful word in the English language. It denotes a level, or floor, of a building. Nowadays the “e” is dropped in even the most erudite publications. The word “story”—which means a tale—is substituted in place of the correct “storey.” A ten-story building is a building that has ten, possibly interesting, stories relating to it. A ten-storey building is one that has ten floors. The way things are today, we never know whether a building has tales connected with it or a number of floors. When I was a youngster, it was considered incorrect to use “story” to mean the number of floors in a building. Today, ignorance reigns supreme.

For an example of correct usage, look at the cover of Thomas Merton’s autobiography, The Seven Storey Mountain. For a more recent example of correct usage, see Michael Crichton’s book The Lost World, the sequel to Jurassic Park. Way to go, Mike!

7. Pronouncing “Nevada” the way they do in Nevada. According to my dictionary, even “Sierra Nevada” (the mountain range in California) is pronounced this way. The first “a” is pronounced like the “a” in “back.” Sierra Nevada is, however, a Spanish name. The mountains were named by Spanish explorers before we gringos ever saw them (they were named after the Sierra Nevada in Spain). And in Spanish the first “a” in Nevada is most assuredly pronounced like the “a” in “ah,” not the “a” in “back.” I don’t care about anyone else, but I shall always pronounce Nevada as neh-vah-da. If the people who live in Nevada don’t like it, they can stuff it. The Spaniards were here first.

8. Album. This means a published collection of songs. Radio deejays have a nasty habit of using the abbreviation CD as a synonym for album. It isn’t. They will say, “so-and-so’s latest CD” when they really mean their latest album. After all, most albums are still put out on cassettes as well as CDs, and not a few also come out on LP records. All three are albums. Hey guys and gals, let’s call a spade a spade and an album an album, okay?

9. Luxury. Nothing wrong with this word, or with the way it is used. The only problem is the way people pronounce it. Nowadays almost everyone says “lug-sure-ee,” which is wrong. The “x” is pronounced as an “x.” It should be “lucks-your-ee.” The same thing goes for the word “exit”—it’s pronounced “ecks-it,” not “eggs-it.” These mispronunciations have become so common that they have crept into the dictionaries lately. That doesn’t make them right.

10. Diskette. This is a stupid term. It was invented, apparently, because it was just too much trouble to say “floppy disk.” The dictionary lists it as a synonym for “floppy disk.” Hey, a disk is a disk is a disk—right? The main trouble with this term comes when people working in the PC industry, and who should know better, write things like “floppy diskette.” This is redundant, like “Sahara Desert,” since it means “floppy floppy disk.” How floppy can one disk get? Every time I see “floppy diskette” in print I practically retch. Give me a break!

11. Incorrect punctuation with quotations. Nobody seems to know the rules for this sort of thing these days. They are really quite simple: periods and commas go inside the quotes; semi-colons, colons, question marks, or exclamation points go outside (unless they are part of the quoted material). Examples:

He said, “I like it,” but he didn’t mean it.
She gave me a “pig in a poke.”
Almost everyone gets these first two wrong.
Did he say, “I didn’t do it”?
He asked me, “Why did you do that?”
Did the Indian say “How!”?

12. Ration. Here’s another word that has changed for the worse. When I was young (and during the Korean War) it was always pronounced as “ray-shun.” Now it seems that all one hears is “rash-un.” I don’t know where this came from, either. This is another pronunciation that grates on the ears of us old-timers.

13. Ibuprofen. Again, there is nothing wrong with this word or with the way it is used. But it is almost universally mispronounced. Dictionaries have been unanimous about the pronunciation of this word ever since it became part of the public lexicon (sometime in the 1980s). It is pronounced eye-byou-pro-phen, although almost every person in this country pronounces it as eye-bee-pro-phen. The advertisers on radio and television appear to have been responsible for this atrocity, and everyone else just jumped on the bandwagon. That doesn’t make it right. The “u” in ibuprofen should definitely be pronounced.

14. Tijuana. This has got to be the most universally mispronounced place name in the world. Look carefully at how it’s spelled. Do you see an “a” between the “i” and the “j”? No, neither do I. If it did have an “a” in that place, it would become Tia-Juana, which means “Aunt Jane” in Spanish. Tijuana should be pronounced as tea-hwahna, where the “hwah” gets the emphasis and is produced by expelling air through the mouth forcefully. There is no English equivalent to the Spanish “j” in constructions like Juan, Juarez, etc. All are to be pronounced with that forceful “hwah” sound. Listen to a Spanish speaker say one of these words and you will understand.

15. Colorado. The latest dictionaries, for example Merriam-Webster’s 11th Collegiate Dictionary, list two pronunciations for this name: Col-oh-rad-oh, where the “rad” rhymes with “radish,” and Col-oh-rah-do, where the “rah” rhymes with Rahway (a city in New Jersey). A note states that the second pronunciation is used “chiefly by outsiders.” Whoa! I lived in Colorado for a total of some 22 years between 1955 and 1977, and during that time I would estimate that about 75 or 80 percent of Coloradans used the second pronunciation, with the “rah.” You heard the first pronunciation used by some persons, especially from rural areas and the West Slope, but the second form was by far the commonest. I have no idea when the other form took over, but it must have been after 1977. As to why it did, I suspect it was by analogy to the pronunciation of Nevada (see number 7 above).

16. Using words like “most egregious” when “worst” would do just as well. Hey, what’s good for the goose is good for the gander, right?

Anyway, now you know what “egregious” means, if you didn’t already. They say you learn something new every day. I guess you can go to bed now. Good night!


• • •


Who's Trying to Kill the Buffalo & Other Mysteries

In which our hero (gosh, that’s me!) takes on the nattering nabobs of negativism in the scientific community and elsewhere.

Who’s trying to kill off the American Buffalo?
In case you haven't noticed it, they’re trying to kill off the poor old American buffalo. His name is being systematically purged from our books in the name of “Scientific Correctness,” which is perhaps one notch above “Political Correctness,” if even that. Instead of “buffalo,” we are told to use the graceless term “bison” in its place. Why is this happening? Why is it so important that we call these creatures bison instead of buffaloes, so necessary that the revisionists seek to remove all reference to the original name of the great beast, thereby rendering rootless such names as Buffalo, New York, and Buffalo Bill Cody?

For one thing, the public has always been allowed a certain amount of leeway in devising common names for plants and animals. That’s why there is a scientific name in formal Latin for every plant and animal that has ever been discovered. Many common names for both plants and animals are taxonomically incorrect, but the use of buffalo for the animal Bison bison is one of the few that seems to raise the hackles of scientists and educators. The reasons for this are unclear.

Living things are divided into seven principal categories, each one more restrictive than the one before. These are kingdom, phylum, class, order, family, genus, and species. The Latin name for the American buffalo, Bison bison, assigns it to the genus Bison, species bison. The true buffaloes belong to the genera Bubalus, Syncerus, etc. Both bison and buffalo belong to the same family, the Bovidae (bovines). The American robin, Turdus migratorius, for example, is not a robin at all but a thrush. It was named after the European robin, Erithacus rubecula, which is in a different genus—exactly the same as the difference between the American buffalo and the true buffaloes. But there is no concerted effort to correct this error. Errors in the common names of other creatures are often much more egregious than a mere mistake in genus. The lady bug, for example, is not a bug at all but a beetle. They belong to different orders, not just to different genera. Although there has been an attempt to correct this by calling it the lady bird beetle, there has been no effort to wipe out the name “lady bug” as determined as the campaign to eradicate the American buffalo.
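For the taxonomically inclined, here is a little sketch in Python (my own illustration, nothing official) that lays out those seven ranks for the American buffalo and for the African cape buffalo, Syncerus caffer, standing in as a representative “true” buffalo. Walking down the ranks shows the two animals parting company only at the genus level, which is exactly the point being made above.

    # The seven principal ranks, from most inclusive to most restrictive.
    RANKS = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

    # Traditional classifications of the two animals (for illustration).
    american_buffalo = dict(zip(RANKS, ["Animalia", "Chordata", "Mammalia",
                                        "Artiodactyla", "Bovidae", "Bison", "bison"]))
    african_buffalo = dict(zip(RANKS, ["Animalia", "Chordata", "Mammalia",
                                       "Artiodactyla", "Bovidae", "Syncerus", "caffer"]))

    # Walk down the ranks and report where the two animals part company.
    for rank in RANKS:
        if american_buffalo[rank] != african_buffalo[rank]:
            print(f"They first differ at the rank of {rank}.")
            break
        print(f"Same {rank}: {american_buffalo[rank]}")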

Other examples of misnamed creatures are literally too numerous to mention. Some spectacular examples include the velvet ants (Mutillidae), which are not ants at all but wasps (the females being wingless); the horned toads, genus Phrynosoma, also referred to as horny toads, which are lizards, not toads, and belong in the iguana family; the whip-tail scorpion, or vinegaroon (Mastigoproctus giganteus), which is not a scorpion (order Scorpionida) but a member of the order Thelyphonida; and the American chameleon, Anolis carolinensis, which is not a chameleon but another member of the iguana family.

Of those examples given above, only in the case of the American chameleon has there been any real effort to correct the public’s “mistake.” The name “anole” has been suggested as preferable to “American chameleon.” In articles dealing with the horned toad, authors almost always point out that the creature is actually a lizard. But there has been no big campaign to change its name to “horned lizard” in schoolbooks. Only the unfortunate buffalo has fallen prey to such a concerted effort at revisionism.

To all those scientists and educators who seek to force the American public to refer to the buffalo as a “bison” I say, paraphrasing a song made popular by the Georgia Satellites, “Don’t hand me no lines, and keep your names to yourself!” In other words, bug off! Get outta my face! Etc.

The strange case of the tidal wave
For untold centuries the huge, long-wavelength waves produced by earthquakes and other catastrophes occurring on or underneath the sea have been called “tidal waves.” About thirty or so years ago scientists working in the field of seismology decided that they didn’t like the public using the term “tidal wave” to describe these seismic sea waves. But instead of proposing to call them “seismic waves” or something similar, they chose instead to try to jam down the public’s throat a Japanese word, tsunami, as a substitute for “tidal wave.” The trouble is, tsunami translates literally as “harbor wave,” a name that has no more to do with what actually causes these waves than “tidal wave” does. Aside from this boo-boo, what was their reason for trying to change the name “tidal wave” in the first place?

The main argument for changing the name “tidal wave” is that such waves have “nothing to do with tides.” Well, so what? The German method of rapid strikes with combined ground armor and air power in World War II was termed “blitzkrieg,” a German word literally translated as “lightning war.” Obviously, such warfare has nothing to do with thunderstorms. The “lightning” part refers to the speed at which the operation is carried out, and its overwhelming nature. In other words, it is war carried out like lightning, and has nothing else to do with lightning. So a “tidal wave” would be a wave that has some tidal characteristics, not necessarily a wave produced by tides, or having anything else to do with tides, per se.

The question is, then, do tidal waves have some characteristics in common with the tides? I suspect most seismologists would quickly reply, “Of course not!” And I would be just as quick to call them wrong.

Let’s examine the characteristics of tides and seismic sea waves to see what, if anything, they may have in common. First of all, they are both waves. “Tides are waves?” you may ask. Yes, they are. Here’s how it works. Tides are produced primarily by the gravitational field of the moon. (There are also tides caused by the gravitational field of the sun, but they are less than half the size of lunar tides and do not alter our conclusions here.) As the earth turns under the moon, the point directly below the moon (that is, the point where a line between the center of the moon and the center of the earth intersects the earth’s surface)—called the sub-lunar point—travels across the surface of the earth in the same direction as the moon. At the sub-lunar point the gravitational field of the moon is stronger than at any other point on the earth’s surface. It tends to pull the ocean water upwards. We can refer to this point as the crest of a tidal wave. At the point on the earth’s surface directly opposite the sub-lunar point (i.e., the point where a line from the center of the moon through the center of the earth intersects the earth’s surface on the other side) the gravitational field of the moon is weaker than at any other point on the earth’s surface. This point also marks the crest of a tidal wave, because the total gravitational pull on the ocean water (from both the earth’s and the moon’s gravitational fields) is less here than anywhere else, allowing the water to “rise” higher here than anywhere nearby.

(Actually, because of various other effects, most notably friction, the crests of these tidal waves lag behind the sub-lunar point and its opposite point. But let’s not get into that here!)

So we now have two tidal wave crests on opposite sides of the earth. At points located halfway between these two crests the “height” of the ocean water will be at a minimum. We can now define the wavelength of the lunar tidal waves. What do we mean by a “wavelength”? It is the distance between successive wave crests. (If there is only a single wave, we can compute its wavelength from the shape of the wave, inferring the position of the other, non-existent, waves in the series.) Since the earth has a circumference of roughly twenty-four thousand miles, the wavelength of the lunar tidal waves is on the order of twelve thousand miles—a very long wavelength, indeed!

Since the time between successive high (or low) tides is approximately twelve and one half hours, the speed of the tidal waves is roughly one thousand miles per hour. Since the time between low tide and the succeeding high tide is on the order of six hours, nothing much comes of the lunar or solar tidal waves, even when they are both acting in unison. In most places all that you see is a gradual rising of the level of the sea water, the typical rise in mid latitudes being four or five feet. In a few places where there are estuaries of just the right size and shape, the size of the tides can be truly spectacular. The Bay of Fundy in Nova Scotia is one such place. There the rise from low tide to high tide is about fifty feet (the height of a five-storey building). In addition, the outgoing tide is still running strongly when the incoming high tide meets it. A mighty struggle ensues. Eventually the incoming tidal wave “breaks,” forming a tidal bore that travels up the bay like a giant version of the foaming water produced by the breaking of an ordinary wave on a beach.
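For the arithmetic-minded, here is a quick back-of-the-envelope check of those round numbers, set down as a small Python sketch. It uses only the rough figures quoted above, nothing more precise.

    # Rough figures from the text, nothing more precise.
    earth_circumference_miles = 24_000
    wavelength_miles = earth_circumference_miles / 2      # two crests on opposite sides of the earth
    hours_between_high_tides = 12.5

    speed_mph = wavelength_miles / hours_between_high_tides
    print(f"wavelength of the lunar tidal wave: about {wavelength_miles:,.0f} miles")
    print(f"speed: about {speed_mph:,.0f} miles per hour")   # roughly 1,000 mph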

Now, what does all this have to do with the “tidal waves” caused by underwater earthquakes, landslides, or volcanic explosions? Plenty! In the first place, seismic sea waves have a very long wavelength, on the order of one hundred miles, and they travel at very high speeds, measured in hundreds of miles per hour. In this respect they are much closer to the true tidal waves than they are to ordinary ocean waves, which are produced by the action of the wind, have wavelengths on the order of thirty to one hundred feet, and travel at speeds measured in tens of miles per hour. In the second place, in their action seismic sea waves resemble the tidal waves much more than they do ordinary waves. If you were on a beach and observed the approach of a tidal wave produced by some cataclysm at sea, here is what you might see:

The first sign of the approaching tidal wave would be the sea running away from the shore, just as it does when the tide goes out, except at a more rapid pace. In a matter of minutes the sea might retreat hundreds of yards away from the beach. Then it would stop retreating in a brief simulation of ebb tide. Immediately thereafter the water would begin rising—slowly at first, then more and more rapidly. Soon you would notice that the horizon appeared much higher than it did a few minutes ago; the crest of the huge wave would be towering over you at this point. Within a matter of minutes the wave would completely inundate the beach. If the slope of the sea bed near the beach was just right, the wave might break, forming a foaming wall of water that would carry everything before it. These waves may travel several miles inland if the land behind the beach is fairly level. As you can see, such waves behave much more like tides, albeit on a much greater scale, than they do ordinary waves. In my opinion, it is quite reasonable that people chose to describe such waves in terms of the action of tides. There are many points of resemblance.

WARNING: If you ever see something matching the above description in real life, run—do not walk—to the highest ground you can find! Time’s a-wasting and your life hangs in the balance. Ignorance of this behavior cost thousands of people their lives in the great tidal waves of December 26, 2004, although many of the natives along the shores of Sri Lanka knew about this and were able to save themselves by going to higher ground.

Point of interest: The greatest height attained by a tidal wave was measured on a hillside bordering Lituya Bay, Alaska after a great landslide occurred there during an earthquake on July 9, 1958. Trees along the edge of the bay were felled to a height of over 1,700 feet above mean sea level. Now that’s a serious wave!

Note that the term “tidal wave” has also been applied to the storm surge that accompanies hurricanes. Simply put, the extremely low atmospheric pressure in a great hurricane “sucks” the ocean up to a much greater height than normal, and the winds drive the water even higher. This surge has the characteristic of rising and falling slowly when compared to the action of ordinary waves. Even so, the rise may be rapid enough to overwhelm those who are caught by the storm. Hurricane Camille, a category five storm with sustained winds of one hundred and ninety miles per hour, came ashore near Pass Christian, Mississippi, on August 17, 1969. The storm surge was estimated to have been 22 to 25 feet above mean sea level at the center of the storm. Some ocean-going ships were carried five miles inland by this storm surge. A few foolhardy residents decided to ride out the storm in the three-storey reinforced concrete Richelieu apartments in Pass Christian, indulging themselves in the traditional hurricane party. The next morning the only thing left was the concrete foundation slab. The National Weather Service later distributed a picture of the empty slab with the title, “The Morning After at Pass Christian.” ’Nuff said. (The Christian in Pass Christian is, by the way, pronounced “Chris-tee-ann,” not like the religious term “Christian.”)

On August 29, 2005, Hurricane Katrina—one of the greatest natural disasters to ever befall any of the United States—made landfall along the Mississippi-Louisiana coast. At landfall, Katrina’s winds only qualified it as a category 3 storm, but the central pressure (about 928 mb) made it a category 4, and it had a category 5 storm surge. At the Hancock County, Mississippi, Emergency Operations Center, subsequent water-level measurements showed that the storm surge there was about 27 feet above mean sea level. This is greater than any post-storm measurements made for Hurricane Camille in 1969. Numerous press reports told of buildings destroyed by Katrina’s storm surge that had survived Hurricane Camille. This discrepancy between the official category 3 designation for Katrina and its horrific category 5 storm surge strongly argues that the Saffir-Simpson scale is badly in need of some sort of revision. Winds alone do not a hurricane make, especially when one considers that the storm surge is what generally does the most damage in a hurricane.

A life expectancy, but whose?
Here’s another ridiculous thing you will see frequently in print. A newspaper article will announce an increase in the life expectancy for American citizens, always including a line like, “The new life expectancy for those born this year is…”. This is blatant nonsense. Nobody can know what the life expectancy is going to be for people born this year. In order to know what their life expectancy is going to be, we would have to be able to predict all of the advances in both curative and preventive medicine that will occur during the next half century or so. Obviously, we cannot do that. Besides that, we would have to know a number of other factors in advance, such as future accidental death rates, as well as the future occurrences of plagues, famines, wars, etc. All of this is clearly beyond the state of anyone’s art!

So why do “they” make such obviously stupid statements? The answer probably lies in the way life expectancy is computed. The people who make these calculations are called actuaries. Actuaries examine the statistics on the number of people of a given age who are alive at the beginning of a year and how many of them die during that year. Actuaries always work with groups of one hundred thousand people, regardless of how many there may actually be in the population. Let’s say there are 100,000 babies born on December 31, 1991. How many of them are still alive on December 31, 1992? The answer to this question yields the probability of surviving to one’s first birthday. Let’s say that 98,000 of them are still alive (that’s a typical figure; first year mortality is much higher than for any other year prior to becoming a senior citizen). Now, to figure the odds on surviving from a first to a second birthday, we take 98,000 babies who reached age 1 on December 31, 1991. By the end of 1992 we will know what the odds are that one will reach his second birthday given that he has already reached his first birthday. But notice: this second group comprises individuals who were born on December 31, 1990.

We keep on going with this scheme until there are no people left. More precisely, we keep going until the actual number who reached a given age on December 31, 1992, reduced to fit our hypothetical cohort of 100,000 original births, results in less than one-half person, thereby rounding off to zero, or nobody. We have then determined a complete life expectancy “curve” from which we can compute the odds of living to any age given that one has already attained some earlier age. In the case of total life expectancy that earlier age is zero, or the date of birth. But as we constructed this curve we had to reach back to ever earlier generations in order to get the individuals we needed to compute yearly mortality. For example, in order to compute the odds of surviving from one’s ninety-third to ninety-fourth birthday, or any later birthdays, we had to reach back to include individuals who were not even born in this century. Someone who died during 1992 at age 93 was born in 1899. That’s a long time ago!
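Here is a rough sketch of that bookkeeping in Python. The mortality rates in it are invented purely for illustration (the 2 percent first-year figure from above, then a rate that climbs with age); a real table, as just described, draws its rates from people of every age who were alive, and dying, during a single calendar year.

    # Toy mortality rates, invented for illustration only.
    # q(age) = probability of dying between birthday "age" and birthday "age + 1".
    def q(age):
        if age == 0:
            return 0.02                            # the 2% first-year mortality used above
        return min(1.0, 0.00008 * 1.095 ** age)    # a rate that climbs steadily with age

    radix = 100_000         # actuaries always start with 100,000 hypothetical births
    alive = [float(radix)]  # alive[x] = number of the original 100,000 still living at age x

    # Keep going until fewer than half a person would be left, which rounds off to nobody.
    while alive[-1] * (1 - q(len(alive) - 1)) >= 0.5:
        alive.append(alive[-1] * (1 - q(len(alive) - 1)))

    # Expectancy at birth: the average number of whole years lived per original birth.
    expectancy = sum(alive[1:]) / radix
    print(f"Nobody left after age {len(alive) - 1}; expectancy at birth comes out near {expectancy:.0f} years.")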

It should be obvious from this short discussion of the subject that most of the individuals whose life and death statistics go into making up a computation of life expectancy at any given time were born quite a while ago. When I was born in 1931 my life expectancy, as given by the tables published at the time, was 60 years (if I had been born female it would have been 64 years). As of October 2008 I have already outlived this by 17 years. In fact, with the advantage of hindsight it is obvious that my actual life expectancy was more like 75 years than 60. But who could possibly have known that at the time? No one.

You can see from the above discussion that life expectancies are computed from statistics on people who are dying, not on people who are being born (with the single exception of those contributing to the statistics on first-year mortality). So to say that these numbers are the life expectancies of people born this year is sheer nonsense. Assuming that life expectancy will continue to rise in the years ahead, these numbers are probably the minimum life expectancy for people born this year. What their actual life expectancy will turn out to have been is anybody’s guess.

Another case in point: There were 110 individuals in my high school graduating class in 1949, a figure that includes about 11 persons who did not actually graduate but were at one time part of the class of ’49. Almost all of them were born between November 1, 1930, and October 31, 1931 (I made it into the class by the skin of my teeth!). Going by the 1930–1931 life expectancy tables, roughly half of these individuals should have died by 1992 or 1993. In fact, we know of only 20 members of the class who have died as of October 2008, and the whereabouts of nine other members are unknown. That leaves 81 of the 110 known to be living (90, if the missing nine are also alive), which means that 74% to 82% of the class members are still living. Again, we see an obvious flaw in the life expectancy tables.

There is also another fly in this ointment. The life expectancy derived from the usual actuarial process is not what most people think it is. One would think it would be the age that you have a fifty-fifty chance of attaining. This is not the case. The fifty-fifty expectancy is the median life expectancy. At this age fifty percent of those originally born will have died and the other fifty percent will still be alive. The life expectancy computed by actuaries is, however, more like an average age of death than a median age. Life expectancy defined as the median age at death will always be greater than the ordinary life expectancy, given the shape of normal actuarial curves (as derived using the process described above).
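A toy example makes the point. The ages at death below are made up for illustration only: a few early deaths drag the average down, while the median stays up where most of the deaths cluster, so the median comes out several years higher than the mean.

    # Invented ages at death: a handful of early deaths plus a cluster in old age.
    import statistics

    ages_at_death = [1, 4, 35, 68, 72, 75, 78, 80, 82, 84, 86, 88, 91]

    mean_age = statistics.mean(ages_at_death)      # the "ordinary" life expectancy
    median_age = statistics.median(ages_at_death)  # the fifty-fifty age

    print(f"mean age at death:   {mean_age:.1f}")   # about 65
    print(f"median age at death: {median_age}")     # 80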

By the way, when I first wrote this in 1992, my current life expectancy based on all relevant factors (including family history, state of health, etc.) showed that I should live to be about 102. To be quite frank, I was not at all sure that I was ready to spend another forty years or so in this vale of tears! Notwithstanding this feeling, 21 of those 40 years have already elapsed, so I’ll have to man up to this longevity thing—or so it would appear.

The cat’s meow
That may seem like a weird title, but it’s not, really. You see, there is another spelling of meow that you frequently see: miaow. For the life of me, I cannot see why anyone would use that spelling. In the first place, it is not at all obvious how “miaow” should be pronounced. In the second place, the dictionary lists “miaow” as a variation of “meow,” pronounced exactly the same as “meow.” So why use it at all? Unless one is deliberately trying to be obscure, I can’t see any valid reason for it.

Of course, cats make a lot of sounds besides the simple “meow.” Cats meow most often when asking for something like food, and less often as a sort of greeting. In the latter case the meow often means “I want something but I’m not sure what.” Meows given as a greeting often consist of two or more syllables, as in “meow-m’ow.” When a cat feels lost or abandoned it makes more of a howling sound, sort of a “meow” without the “e” sound: “m’ow.” It usually sounds dreadfully mournful. Then there is the delightful trilling call that mother cats make when calling to their kittens. This sound must be heard to be appreciated. There is no way that it can be satisfactorily put down in words using English phonemes. It’s not that we humans can’t make the sound; on the contrary, with a bit of practice one can imitate the sound well enough to fool even a cat once in a while. There’s just no way to write it down on paper, so I shan’t even try.

Some cats like to appropriate the trill for their own uses. Some cats are especially notorious for doing things like that. Many of them use it as a sort of general, all-purpose sound. A gentle trill, almost a cooing sound, is used as a greeting. A trill with just a bit more urgency to it may be used to accompany playing (and cats will play with just about anything you can imagine). A trill with a very urgent sound to it may be used to warn of an impending collision, such as when your cat is hauling ass down a hallway in which you are walking. In this case the trill is a friendly warning that if one of you doesn’t do something about it, there is liable to be a severe collision. A cat naturally expects you to do the accommodating. I have found, however, that if I take no evasive action my cats will manage to avoid the collision—usually. Many Burmese cats, such as my Obi-wan Kenobi and Retta’s Little Bit, and others such as our calico-tabby named Baby, will use various trills to make a running commentary whenever we are around to appreciate it. A sudden trill from the right as I was working at my computer announced the presence of Obie as well as his intention to jump up on my lap immediately if not sooner. After jumping up in my lap, he would stand there for a few moments, staring intently into my eyes and often touching noses with me. Then he would curl up on my lap, close his eyes, and purr himself to sleep. Hey, don’t knock it! It’s the cat’s meow. (Sadly, Obie passed away on Thanksgiving Day, 2001.)

Why our sun cannot be yellow
Another ridiculous statement you will see made even by reputable scientists is that the light of the sun is yellow. In fact, the sun is classified as a “yellow” star, because its surface temperature runs a bit cooler than that of the white and blue-white stars farther up the main sequence (it’s somewhat smaller than they are, as well). But this is a scientific classification that has no relationship to color per se. Viewed through an old-style solar eyepiece for a telescope, the sun did, indeed, appear yellow. This was, however, because of the light-absorbing characteristics of the dark glass used to reduce the sun’s light to a comfortable (and safe) level. It had nothing whatsoever to do with the actual color of the sun.

So why do I state that the color of the sun cannot be yellow? That one is easy. It’s because the light of the sun is the light we see by. Our entire optical subsystem was developed (over many millions of years) to work with the light present in natural daylight: sunlight. The color white is often described as a mixture of all colors. This is true, but the mixture must be carefully specified in order to produce a true white. There must be just the right amounts of red, green, and blue wavelengths mixed together to make a true white color.

An object seen by reflected light is described as white if it reflects all colors equally. Such an object reflects to our eyes the same color light that is illuminating it. It is a tribute to the power of the accommodation of our vision that a piece of white paper will still look white when viewed by artificial light, whether that light be incandescent or fluorescent. But if we could somehow see three pieces of white paper simultaneously illuminated by (1) the sun at high noon, (2) a 60-watt incandescent bulb, and (3) a 40-watt cool white fluorescent light, we would see something very interesting indeed. The piece illuminated by sunlight would look bright white, the piece illuminated by the 60-watt bulb would appear a dim yellow (or even light brown, since brown is simply dark yellow), and the piece illuminated by the fluorescent light would appear a dull pastel blue. So, as you can see, the color white can be a rather relative concept.

Is there an absolute white? One that is whiter than any other white? Yes, there is. And that white is called daylight white. It is a mixture of direct sunlight and skylight (which is bluish in comparison to direct sunlight). It’s true that direct sunlight is slightly yellowish compared to this absolute white, but that effect is caused by the atmosphere. Some of the blue in the sunlight (as it would be seen outside the atmosphere, by astronauts for example) is scattered by the atmosphere and forms the blue color of the sky. Daylight white, as the sum of the direct sunlight and the blue color of the skylight, is approximately the same color as the sun as seen by astronauts out in space.

The colors of things that glow because they are hot are measured in units of color temperature, expressed on the Kelvin temperature scale. For some obscure reason, scientists dropped the “degree” from “degrees Kelvin.” A temperature of 1,000 degrees Kelvin thus becomes either “1,000 Kelvin,” which sounds grammatically incorrect, or “1,000 Kelvins,” which sounds even worse—as though a thousand little Lord Kelvins were running around loose somewhere. Most scientists duck the issue by always abbreviating “Kelvin(s)” as “K” so you don’t know whether they are bad grammarians or Kelvin-cloners.

In any event, direct sunlight (around noon time) has a color temperature of about 5,000 K (see? I do it too!). Dead-center white is about 5,500 K. And skylight (so-called “north light”) has a color temperature of about 6,700 K. Direct sunlight is slightly yellowish because lower color temperatures correspond to yellower colors, and skylight is blue because higher color temperatures mean bluer colors. On a clear day at noon, the combination of direct sunlight and skylight yields a color temperature of 5,500 K. This is the purest of the white colors. Most people would be hard put to tell much difference between 5,500 K and 5,000 K, and would call both colors “white.” And even though the sky itself looks distinctly blue, the same light reflected off white paper (or snow) would be described by most of us as “bluish white.”
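Applying that rule in a few lines of Python, using just the three values quoted above, gives a little table of who looks yellowish and who looks bluish relative to dead-center white (the numbers, again, are the rough ones from the text).

    # Lower color temperature = yellower; higher = bluer; 5,500 K = dead-center white.
    DEAD_CENTER_WHITE_K = 5_500

    sources = {
        "direct sunlight at noon": 5_000,
        "sunlight plus skylight (daylight white)": 5_500,
        "skylight ('north light')": 6_700,
    }

    for name, kelvin in sources.items():
        if kelvin < DEAD_CENTER_WHITE_K:
            verdict = "slightly yellowish"
        elif kelvin > DEAD_CENTER_WHITE_K:
            verdict = "bluish"
        else:
            verdict = "as white as white gets"
        print(f"{name}: about {kelvin:,} K, {verdict}")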

So why can’t the sun be yellow? Because it supplies the light that defines the color white for our vision. If we lived on a planet illuminated by a red giant star, that color would be our “white” and Old Sol would look distinctly blue to us. If, however, we lived on a planet illuminated by a blue giant star (with a color temperature of about 20,000 K), Old Sol really would look yellow. But we live here, where our sun is the very definition of the color “white.” That’s why the sun cannot be yellow.

Enough said?

Are we reincarnated?
A lot has been written both pro and con about the possibility of reincarnation. Yet I have never heard nor seen in print the most cogent argument against reincarnation. It goes something like this:

In order to be reincarnated the soul of a person must pass through Eternity. You die, your soul passes into Eternity, and is then incorporated into a new body in which you live your reincarnated life. In all the stories I have ever encountered about persons who supposedly remembered another life, all of them—without exception—were of a life lived in the past. But Eternity, through which the soul must pass in order to become reincarnated, is beyond time and space. It is not subject to spatial or temporal constraints. From Eternity one could, as it were, observe our space-time continuum as a sort of multi-dimensional “blanket,” in which the warp is spatial and the woof is temporal. The “thread” of a person’s soul could just as easily dip into this “blanket” at a time previous to its immediately past life as it could into a future time. In other words, one’s prior life (prior from a personal standpoint) could just as easily have occurred in the future as in the past. One might reasonably expect that roughly half of all memories of prior lives would be of lives lived in the future, not in the past. Yet to the best of my knowledge this has never been reported. Not even once.

Is something rotten in Denmark?

An undetermined amount of cash...
How many times have you read a newspaper article about a robbery from a bank or store that said “The robbers escaped with an undetermined amount of cash”? “Undetermined”—not! What it should have said was “an undisclosed amount of cash.” Do you think banks or other businesses are stupid enough to turn their employees loose with an unknown amount of cash in their cash drawer? Of course not! Anyone who has ever worked in such a place knows that the amount of cash that can be accessed by any employee is carefully counted before it is released to be used in the course of business. After any kind of robbery it is a simple matter to run a quick check to determine exactly how much cash was taken. After most robberies the amount of cash taken is probably known to the penny within a matter of minutes. So the “undetermined” amount of cash is a baldfaced lie.

What they mean is an “undisclosed” amount of cash. In the first place, the business does not generally want the amount it lost in a robbery to be known to the public. In the second place, the police do not want the amount known because it is “inside information.” Only the perpetrators will know the actual amount of cash that was taken. To separate kooks from real informants, when someone contacts the police saying that they know who did it, the first thing the police will do is ask the person how much cash was taken in the robbery. Those who know the correct amount are not kooks.

Interestingly enough, for the past few years (this is written in August 2005) the media have taken to using exactly the language I suggested above to report this kind of crime. They now say an “undisclosed amount of cash” rather than “undetermined.” No one seems to know why they suddenly decided to do this.

Which “Lower 48”?
This is another ridiculous term you see frequently in print. Weather reports will list the temperature extremes for the “lower 48 states.” But that is not what they mean at all. What they mean is the original 48 states. The term “lower” comes from maps, where North is at the top. Therefore, the “lower 48 states” means the southernmost 48 states. Which two states does this leave out? Alaska and Minnesota. That’s right—Minnesota! (Not Maine—it’s much farther south than Minnesota.) The lowest of all the 50 states is Hawaii, the only state that is located within the tropics (that is, at a lower latitude than the Tropic of Cancer). So the term “lower 48 states” doesn’t even come close to catching the two states that are intended to be excluded by the term. It misses by as far as possible, since one of the two intended victims is the lowest of the lower states. There are plenty of acceptable terms that could be used in place of the “lower 48 states.” The “original 48 states” would do. So would “the inner 48 states” (Alaska and Hawaii are nothing if not outlying). Even just “the 48 states” would do. But no, they have to use a term that is totally bogus. And we wonder why illiteracy and ignorance are endemic in our country nowadays. Small wonder, when our media contribute to it knowingly.

How much does it really cost to drive your car?
This is another subject on which much nonsense has been written—not to mention spoken. What the so-called experts do is take all of the yearly costs of operating an automobile, both fixed and variable, and lump them together. Then they divide the total by the number of miles driven to come up with a per mile cost of driving. The trouble with this approach is that it mixes apples and oranges. All you have then is fruit!

Take the example of John Doe—not his real name, because he’s not even a real person. John owns a two-year-old, rather pricey import, which he drove for 9,000 miles last year. He used his car to commute to work, which accounted for 4,000 of the miles he drove last year. One day John decides to figure out how much it cost him to use his car to commute to work. He looks up his automobile in the NADA book and discovers that it depreciated by some $3,850 over the past year. To that figure he adds the $800 he paid for insurance and another $120 he paid for license and registration fees. So far he has a subtotal of $4,770, which actually represents his fixed cost of ownership. Then he gets out the little notebook where he records his gasoline purchases and finds that he paid $562.50 for fuel during the year. To this he adds $67.50 that he paid for oil, oil filters, and other miscellaneous upkeep items—such as a new pair of wiper blades—and gets a total of $630. He divides this number by the 9,000 miles he drove and comes up with a cost of 7 cents per mile. This actually represents his variable cost of driving.

John has, however, been taken in by the so-called experts and follows their meat-ax approach to computing driving costs. Therefore, he adds the $630 to the $4,770 and gets a total of $5,400. He divides that by the 9,000 miles he drove last year. This gives him a per-mile cost of 60 cents. He then multiplies the 4,000 miles he drove commuting to work by 60 cents per mile and arrives at a cost of $2,400 for using his car to commute to work last year.

Did it really cost John $2,400 to use his car to commute to work last year? No way! His car would have depreciated about the same amount whether he drove it or not, and he still would have had to pay for his insurance and license plates even if he had not used the car for commuting. Apparently, his real cost of using the car to commute to work was the 7 cents per mile he paid out for gasoline, oil, and other upkeep expenses. That cost is only $280—a far cry from the $2,400 John calculated with the meat-ax approach. But is it really this simple? No, not quite.

One day this same reasoning occurs to John. Investigating further, John calls his insurance agent, who tells him that he paid an extra premium of $150 because he used his car to commute to work. Careful examination of the NADA book reveals that he would have saved $300 in depreciation charges had he left his car at home. The reduced mileage would have made the car eligible for a low mileage credit on the wholesale value. Adding these two amounts to the $280 he came up with earlier, John gets a total of $730 as his true cost of commuting to work. Dividing this by the 4,000 commuting miles yields a per-mile cost of just over 18 cents—still a long way from the 60 cents per mile he got with the meat-ax approach.
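
If you want to rerun John’s arithmetic yourself, here is a little Python sketch of it (the dollar figures are the made-up ones from the example above, and the variable names are mine):

    # John's hypothetical numbers, straight from the example above.
    depreciation  = 3850.00
    insurance     = 800.00
    license_fees  = 120.00
    fixed_cost    = depreciation + insurance + license_fees      # $4,770 whether the car moves or not

    fuel          = 562.50
    upkeep        = 67.50
    variable_cost = fuel + upkeep                                 # $630 caused by actually driving

    miles_driven  = 9000
    commute_miles = 4000

    # The "meat-ax" approach: lump everything together and prorate it.
    meat_ax_per_mile = (fixed_cost + variable_cost) / miles_driven    # $0.60 per mile
    meat_ax_commute  = meat_ax_per_mile * commute_miles               # $2,400

    # The marginal approach: count only the costs the commuting itself caused.
    variable_per_mile   = variable_cost / miles_driven                # $0.07 per mile
    commute_premium     = 150.00   # extra insurance charged for commuting
    lost_mileage_credit = 300.00   # extra depreciation from the added miles
    true_commute_cost   = (variable_per_mile * commute_miles
                           + commute_premium + lost_mileage_credit)   # $730
    true_per_mile       = true_commute_cost / commute_miles           # about $0.18 per mile

    print(f"meat-ax estimate:  ${meat_ax_commute:,.2f}")
    print(f"marginal estimate: ${true_commute_cost:,.2f} ({true_per_mile * 100:.2f} cents per mile)")

Keeping fixed_cost and variable_cost separate is exactly the apples-and-oranges point: the fixed costs get paid whether the commuting happens or not, so only the marginal items belong in the commuting figure.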

So the next time someone tries to tell you how to figure out what it cost to use your car on that vacation trip you took last summer, say, “Oh yeah? You wanna bet?”

The strange case of “We will bury you”
I’m sure every one of you has heard or read this quotation more than once. The words were spoken by former U.S.S.R. premier Nikita Khrushchev (pronounced Kroose-choff) many years ago. They have been quoted ad nauseam as having revealed the intention of the Soviet Union to destroy us. But is that really what Mr. Khrushchev meant by these words?

The movie “The Flim-Flam Man” begins with a sequence in which the Flim-Flam Man, played by George C. Scott, jumps out of a moving freight car and rolls down an embankment. His companion jumps out immediately afterwards and runs over to where Scott is lying. He asks, “Are you all right?” To which Scott replies, “I’ll dance on your grave.”

Question: Did the Flim-Flam Man mean he was going to kill his companion? Of course not! “I’ll dance on your grave” is a colloquial expression in English that means “I will outlive you.” It turns out that “I will bury you” is the exact Russian-language equivalent of this English expression. So Mr. Khrushchev’s “We will bury you” meant not that the Soviets were going to destroy us, but that the Soviets would be around when we were gone. In other words, that Communism would outlast Capitalism. Mr. Khrushchev was obviously wrong, but that’s beside the point here.

A former official of the State Department, when confronted with these facts and asked why no one in the government ever explained what Mr. Khrushchev’s statement actually meant, is reported to have said, “Well, we’re not in the business of making the Soviets look good, are we.” A tribute to government disinformation, that.

The qwerty myth
Endemic in the computer industry is a pernicious myth that the so-called qwerty keyboard layout was devised to slow down typists. Nothing could be further from the truth. The qwerty layout—named after the first six keys on the top row of letter-keys on a standard keyboard—was actually designed to allow typists to type faster, not to slow them down.

The first typewriter keyboards were made with the keys in more or less alphabetical order. They didn’t work. As typists got faster, the keys began to jam frequently. This slowed the typist down, since the jammed keys had to be freed before continuing with the typing job. The reason for the key-jamming lay in the way mechanical typewriters were made. The type that made the image of a letter was on the end of a long arm. When that particular key was pressed, the arm flew up and struck the inked ribbon, pressing it against the paper and causing the image of the letter to appear on the paper. After making the impression, the arm would fall back into its place on a semi-circular bed near the front of the typewriter. The trouble with this arrangement was that all of the keys converged on exactly the same spot in order to print a letter. If the typist pressed a key whose arm happened to be next to the arm of the previously pressed key, the rising arm would frequently clash with the falling arm, and the two of them would jam together. The farther apart two key-arms were, the less likely it was that the two of them would become jammed.

This problem was solved by the invention of the qwerty keyboard layout. Experts who studied the problem found that most of the key-jams occurred with certain letter combinations that were especially common (in English). Careful analysis of these combinations showed that if the keyboards were laid out in a different arrangement where the keys for the commonest letter combinations were well separated, the jamming problem could be overcome. The result was the qwerty keyboard that is so familiar today. Expert typists using this keyboard were able to reach speeds of over 120 words per minute without the annoying jamming that they had formerly experienced.

Today mechanical typewriters are essentially a thing of the past. Almost all keyboards today are electronically operated. The qwerty arrangement is no longer needed. But does this mean we should give it up? A number of alternate keyboard layouts have been suggested, including the Dvorak. The trouble with adopting one of these alternates is that all existing touch typists would be required to learn a new system. The new keyboard would have to have a compelling advantage over the qwerty layout in order to justify adopting it.

Unfortunately for advocates of alternate keyboard arrangements, careful testing has shown that good touch typists will type at approximately the same speed, and about the same degree of accuracy, with any keyboard arrangement. There is clearly no good reason to change the qwerty keyboard.

Do you need BCD arithmetic for accounting programs?
Hell no! I’ve seen this bit of nonsense in print more times than I care to count. BCD means Binary Coded Decimal. The reason some morons think you need BCD math to do accounting on computers is that BCD numbers can precisely represent the decimal parts of a dollar figure. Most decimals, such as 0.95, cannot be represented precisely as a binary floating point number. So what? After all, accounting figures would be pretty useless if they could not be used to calculate such things as amortized loans. And that’s where BCD arithmetic fails as well. The results of most such calculations cannot be represented exactly by either BCD or binary numbers.

So, for practical accounting problems, you are left with approximate representations of many numbers no matter whether you are using binary or BCD arithmetic. The disadvantages of BCD arithmetic are two-fold:

• You lose some precision over binary representations because each nibble in a BCD number is used to hold a digit between 0 and 9, whereas a nibble ordinarily holds a number between 0 and 15 (0 and F in hexadecimal notation). Typical double precision floating point binary arithmetic yields almost 16 significant digits of precision. The same size “word” used to do BCD arithmetic yields a bit more than 13 significant digits of precision.

• BCD calculations are noticeably slower than binary. BCD arithmetic does not come naturally to a computer, which is intrinsically a binary device. BCD math can run several times slower than binary math.

Any way you cut it, typical double precision binary floating point math, when used to compute amounts in the tens of millions of dollars, will yield results accurate to approximately one millionth of a cent, one way or the other. That should be close enough to satisfy even the most demanding CPA.
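
For the skeptical, here is a short Python sketch of both halves of the argument (the dollar figures are invented for illustration): 0.95 really has no exact binary representation, yet the rounding error on a tens-of-millions-of-dollars amount stored as an ordinary double stays far below a cent, even after a long amortization-style calculation.

    from decimal import Decimal

    # 1. Most decimal fractions, 0.95 included, have no exact binary form.
    print(f"{0.95:.20f}")                  # 0.94999999999999995559...

    # 2. But the resulting error on a large dollar amount is microscopic.
    amount = 10_000_000.95                 # tens of millions of dollars, as a double
    print(Decimal(amount) - Decimal("10000000.95"))
    # magnitude is under a billionth of a dollar -- well below a millionth of a cent

    # 3. Even a long amortization-style calculation barely drifts.
    exact  = Decimal("10000000.95")        # high-precision decimal reference
    approx = 10_000_000.95                 # ordinary binary double
    for _ in range(360):                   # 30 years of monthly interest at 0.5%
        exact  *= Decimal("1.005")
        approx *= 1.005
    print(Decimal(approx) - exact)         # still a tiny fraction of a cent

A BCD representation would dodge the 0.95 problem, but, as noted above, it cannot represent the results of the interest calculation exactly either, and it gives up both speed and two or three digits of precision to do it.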

So the next time you hear some so-called computer expert say that BCD math is required to do accounting work on a computer, give him a great big Bronx cheer!

Harry Truman’s middle name
Nowadays one hears a lot of baloney about Harry Truman’s middle name. Even his daughter seems to be caught up in it. But I lived through the Truman years, and I remember very distinctly when President Truman once rebuked some reporters for putting a period after the “S” in his name. He said, and I quote, “The ‘S’ is a name, not an initial, so it doesn’t need a period.”

So much for the revisionists.

To me, he will always remain Harry S Truman—the man with a one-letter middle name. Give ’em hell, Harry!

What’s the trouble with drugs?
Drugs are the modern equivalent of the Salem witch hunts. Why do so many people get so bent out of shape about others who choose to use drugs to get high once in a while? Most of the arguments are that using drugs is dangerous. So what? So are a lot of other things. Rock climbing is dangerous; so are sky diving, surfing and wind surfing, skiing, skateboarding, roller-blading, and even bicycle riding. People are killed every year doing these things, more of them, in fact, than die from using illegal drugs, though nowhere near as many as are killed by the legal drugs (alcohol and tobacco), which are the real killers. To those who argue that drug use is a victimless crime, the do-gooders retort that all of society has to pay for the care of those who screw themselves up using drugs. True enough, but exactly the same argument applies to those who get crippled doing any of the things I listed above, not to mention the tremendous cost to society of taking care of those whose lives are destroyed by the legal drugs.

Most of the do-gooders have given up trying to make a case that illegal drug users are a threat to others. In almost any case one examines, these threats come from the illegality of drugs, an illegality we created ourselves by passing laws against their use. A drug addict may steal to support his habit, but that’s mainly because a dollar’s worth of heroin costs $100 on the black market—a black market that we created with our laws against drugs.

These arguments against drugs are a smokescreen. They cover up the real reason why so many people get uptight about drug use: drugs affect the mind. All the dangerous sports I mentioned earlier affect mainly the body, although it’s debatable whether or not they affect the mind. Drugs, however, are used primarily because of their effect on the mind.

Even today, at the dawn of the twenty-first century, we have not gotten over the ancient fear of things that affect the mind.

If I asked you what is the worst thing that ever happened in American politics, what would you reply? I think most people would say it was Nixon having to resign in disgrace to avoid being impeached over Watergate. Dyed-in-the-wool Nixonites might think of something else nearly as bad to substitute for this. My answer might surprise you. I think the most disgusting thing that ever happened in American politics was when Senator Eagleton was forced off a presidential ticket because he had once been treated for depression. So what? What in the bloody hell does that have to do with anything? Would he have been forced out if he had once had scarlet fever, pneumonia, or the mumps? Of course not. But depression affects the mind, and the mind is still the big bugaboo of our society. When are we going to get over this stupid hang-up?

What was the second most disgusting episode in American politics?
I believe that the second most disgusting episode in our political history was when the Republicans tried to smear Barack Obama because of an alleged connection to William Ayers, who was one of the founders of the Weathermen, a radical terrorist group that operated some 40 years ago.

We in America are supposed to believe in rehabilitation. Either it happens or it doesn’t. You can’t have it both ways. You cannot say person A has been rehabilitated and person B has not if they show similar characteristics in their lives. By any reasonable measure, William Ayers has redeemed himself from his black past. He is now a respected college professor. Here is a brief sketch of William Ayers as he has been for the past quarter century:

William Ayers is currently a Distinguished Professor at the University of Illinois at Chicago, College of Education. His interests include teaching for social justice, urban educational reform, narrative and interpretive research, children in trouble with the law, and related issues.

He began his career in primary education while an undergraduate, teaching at the Childrens Community School (CCS), a project founded by a group of students and based on the Summerhill method of education. After leaving the underground, he earned an M.Ed. from Bank Street College in Early Childhood Education (1984), an M.Ed. from Teachers College, Columbia University, in Early Childhood Education (1987), and an Ed.D. from Teachers College, Columbia University, in Curriculum and Instruction (1987).

He has edited and written many books and articles on education theory, policy and practice, and has appeared on many panels and symposia.

Wikipedia, article on William Ayers. Read October 7, 2008.

Dr. Ayers (yes, he’s a doctor as he holds an Ed.D degree) has in fact edited or written 22 published books and articles. By any measure, he is now a solid citizen, no matter how bad his past actions may have been—and I agree, they were bad. Wikipedia states: “Since 1999 he has served on the board of directors of the Woods Fund of Chicago, an anti-poverty, philanthropic foundation established as the Woods Charitable Fund in 1941.” Hardly a dangerous man, I would say.

If we are to pay more than lip service to the idea of rehabilitation, we must admit that William Ayers has rehabilitated himself—if anyone ever has. Attempting to smear Senator Obama because of his chance associations with this man at times when Ayers was a responsible citizen of his community was the height of hypocrisy.

Evidently, it backfired, as Obama was elected president of the United States on November 4, 2008.

Was Halley’s comet a bust?
Surprise! No, it wasn’t. It was the astronomers who were a bust. They told people to go out the first week in April 1986 to see the comet. Those who did so were disappointed. A superficial perusal of the literature about Halley’s comet in the weeks preceding its closest approach convinced me that the best time to see it would be somewhat earlier than that. So on the 21st of March 1986 my wife and I roused ourselves out of bed at 3:00 a.m., got in the car, and drove to
San Augustin Pass at an elevation of about 5,700 feet near Las Cruces, New Mexico. Before we even got to our destination Retta had spotted the comet. She said, “This one’s for real!” She was right.

We got out at the parking area just over the top of the pass—the place where the MPs stop traffic when there is a missile shot at nearby White Sands Missile Range. A number of others were already there, including some with telescopes set up and others with cameras on tripods. The comet was a magnificent sight, high in the southeast sky, its tail flaring out behind it like a giant plume. With the possible exception of the sun-grazing comet Ikeya–Seki in 1965, Halley’s was the most spectacular comet I’ve ever seen (it was definitely more striking than comet West in 1976—despite widespread propaganda to the contrary).

In early April we went out in the desert south of Las Cruces to see Halley’s comet again, this time with our son and daughter-in-law. We found it finally, but it was very disappointing. We could barely make it out.

Thus are the boffins confounded!

Who started the microcomputer revolution?
Arguably, it was Apple. They were the first to mass-market a microcomputer. The MITS Altair was out first, but it never sold well. For me, though, it wasn’t Apple that started things really rolling, it was Radio Shack. Let’s face it: the Apple was a high-brow sort of thing that sold out of little ads in the back of Scientific American. There were no Apple computers in stores, except for little specialty shops that few of us ever saw. Then in 1977 Tandy came out with the TRS-80 computer. Suddenly, computers were everywhere—because TRS-80s sold in Radio Shack stores, and every little town had at least one Radio Shack. There were three in Las Cruces, New Mexico, where we lived at the time. Public interest in microcomputers really took hold with the introduction of the TRS-80. For me, that will always be the computer that started the whole thing. The Apple machines were in the background. The TRS-80s were in stores on Main Street, USA. They were visible. They were everywhere.

Even though I never owned one, long live the TRS-80!

Crack--the mythical “new” drug of the Eighties
In the early 1980s news of a “new” drug began spreading across the land. This was supposed to be a derivative of cocaine never seen before. It was bandied about as the most addictive, terrible drug to ever hit our streets. Poppycock! Crack was about as new as Coca-Cola. It was—and is—nothing but freebase cocaine, which has been known since at least the 1960s. Freebase is prepared by heat treatment of rock (crystalline) cocaine with alcohol. I’ve actually watched this being done, and it’s a tricky although not particularly difficult process. Freebase does act more quickly than cocaine, and it is “smoked,” not snorted or injected like powdered cocaine. It’s not really smoked, it’s evaporated (like opium) by holding a flame against a closed container holding the freebase. Freebase looks like waxy yellowish crystals.

What was new in the early 1980s was that some heads evidently got together and decided that rather than smoking up all of the freebase as soon as they had made it (which is what is usually done with freebase), they would keep it to sell to other heads—and make a lot of money. When the narcs found out what was going down, they turned it into a new epidemic of drug use, conveniently forgetting that it was just good old freebase being sold as a commodity instead of smoked up on the spot. After all, this increased their job security, and if there’s one thing narcs are intent on doing, it’s improving their job security. That’s why they pop the little guys and let some of the big fish swim away. After all, if they succeeded in cutting off the sources of drugs, they would work their way out of their jobs, wouldn’t they!

For example, it’s not in the narcs’ best interest to let the people find out that pot may be the least dangerous mind-altering drug on the market—legal or illegal (the possible exception is caffeine). So they do their best to keep people thinking that it is really dangerous.

The most dangerous thing about marijuana is that you might get fat from over-eating when you get the munchies—or that the narcs might arrest you and put you in jail!

What unleaded gasoline?
Here’s another stupid expression one sees all too often these days: unleaded gasoline. One sees things like “the price of a gallon of unleaded hit $2.37 this week.” And what is the price of leaded gasoline? It’s a meaningless question, since there is no such thing as leaded gasoline nowadays. It hasn’t existed for over ten years now (it was phased out by law as of December 31, 1995). Isn’t it about time we dropped the redundant term “unleaded”? Get with the program, people. Don’t use unnecessary words. And if ever a word was unnecessary, it’s “unleaded” when applied to gasoline. There isn’t any other kind.

Why oxygen cannot burn or explode
You hear all sorts of nonsense about oxygen these days. The way people talk, one would think that all you had to do was get an open flame anywhere near some oxygen apparatus and the whole thing would blow sky high. Baloney! Oxygen can neither explode nor burn. The reason for this is absurdly simple. Burning is a process by which some inflammable substance, such as paper, is rapidly oxidized. The constituents of the burning material are converted into oxides of some sort by this process. An explosion is merely extremely rapid burning, and thus it is very fast oxidization. The process is rapid because the oxygen required is itself a constituent of one of the compounds forming the explosive. An explosive therefore does not require an outside source of oxygen with which to combine when it combusts.

In order for oxygen to burn or explode, it would itself have to be oxidized, and that cannot happen in ordinary combustion, because oxygen is the oxidizer: it is the fuel that gets oxidized, not the oxygen. When the Apollo I capsule burned in January 1967, it did not explode even though it was filled with pure oxygen. It simply burned, much hotter and much faster than it would have had the atmosphere inside been the normal mixture of about 21% oxygen and 78% nitrogen (and other inert gases) that makes up our everyday atmosphere. This is the only real danger involved in things that use pure oxygen for whatever reason, such as the breathing apparatus used by people with emphysema and other forms of COPD (Chronic Obstructive Pulmonary Disease). The danger is that the escaping oxygen can make any normal fire worse than it would be otherwise.
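
To put it in textbook terms, take the burning of methane as a simple example (methane is just a convenient stand-in for any fuel). The oxygen sits on the left-hand side of the reaction as the oxidizer; it is the carbon and the hydrogen of the fuel that wind up oxidized:

      \mathrm{CH_4} + 2\,\mathrm{O_2} \longrightarrow \mathrm{CO_2} + 2\,\mathrm{H_2O}

Add more oxygen and this reaction simply runs hotter and faster; under ordinary conditions there is no reaction in which the oxygen itself plays the part of the fuel.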

An ordinary cigarette lighter, such as the ubiquitous Bic, if held in front of a tube from which pure oxygen is escaping and then lit, would simply burn much hotter than it normally does. The flame would likely jump to several times its usual size. But the oxygen itself would not burn, nor would the apparatus explode. If the oxygen tube were held close to a hot heating coil on a stove, most likely nothing would happen, since the coil is not actually burning. It would most certainly not set the oxygen on fire, since oxygen cannot burn.

As to the Apollo I capsule, it is obvious—at least in hindsight—that the NASA engineers and designers were incredibly stupid to have specified an atmosphere of pure oxygen for the astronauts to breathe while in space. Did it not occur to any of them that there might some day be some sort of minor fire on board? With a pure oxygen environment even a minor fire would quickly become a major conflagration, just as it did on January 27, 1967. That fire grew so hot it melted the astronauts’ space suits.

Can cats see in the dark?
This strikes me as a rather dumb question. Of course cats can see in the dark! Can they see in absolute darkness, such as one finds inside a deep cave? No, nothing can see if there is no light at all to see by. This is the ridiculous comparison that most “experts” use when they try to convince you that cats cannot see in the dark. But the inside of a cave is not what most people mean by “see in the dark.” They mean the dark that is found inside an unlit house during the night. This darkness is never complete, and there is always enough light for cats to see their way around with no problem. Those who have felt their way through a dark house at night in quest of a bathroom or a drink of water and have heard their cats cavorting around in the dark as if it were broad daylight know perfectly well cats can see in the dark.

Cats have two things going for them when it comes to seeing in dim light. One is that their retinas are packed mostly with rods rather than cones. Rods are useful at low light levels; cones are useful in bright light and for seeing things in color. Cats have a considerably higher proportion of rods than we do. The second advantage cats have in the dark is the pupils of their eyes, which are elliptical rather than round like ours. This is because cats are crepuscular creatures (active mostly during twilight) rather than nocturnal or diurnal. Elliptical pupils can narrow to a tiny slit, or very nearly close completely when necessary, something that round pupils cannot do. Round pupils can reduce to a pinpoint, but not close completely, or even nearly completely. And cats’ elliptical pupils can enlarge enormously. Seen in very low light conditions, a cat’s pupils are so large that the iris seems to have disappeared completely.

A cat’s elliptical pupils and abundance of rods on its retina combine to give it exceptionally adaptable vision both in bright daylight and on very dark nights. Their elliptical pupils can enlarge or narrow rapidly to adapt to quick changes in light levels. Cats are admirably adapted to hunting prey by day or by night.

These optical advantages allow our feline friends to navigate through a darkened house with aplomb, while we poor humans stumble about blindly and seek light switches or a flashlight to aid us. We may be smarter than cats (although that could be debated, at least for some people), but they can definitely move around in the dark much better than we ever will.

The 1979 “Invasion” of Afghanistan
The conventional belief in this country is that the former Soviet Union invaded Afghanistan in 1979. In fact, there was no invasion. The Soviets entered Afghanistan in response to a request made by the legitimate, elected government of Afghanistan, which had just been overthrown—at least in part—by a right-wing coup. Some formerly top secret papers of the Soviet government, released since the demise of that union, give some salient details on what happened.

From a communiqué to the Soviet ambassador in Kabul dated May 24, 1979:

As far as the request of the Afghan side for the dispatch to the DRA of helicopters and transport planes with Soviet crews and a possible landing of our parachute troops in Kabul is concerned, the question of using Soviet military units was considered in much detail and from all points of view during Comrade M. Taraki’s visit to Moscow in March of this year. Such actions, we are deeply convinced, are fraught with great complexities not only in the domestic political, but also in the foreign policy sphere, which no doubt would be used by hostile forces first of all to the detriment of the interests of the DRA and the consolidation of the victory of the April revolution.

From the Andropov-Gromyko-Ustinov-Ponomarev Report on Events in Afghanistan on 27-28 December 1979, dated 31 December 1979:

In this extremely difficult situation, which has threatened the gains of the April revolution and the interests of maintaining our national security, it has become necessary to render additional military assistance to Afghanistan, especially since such requests had been made by the previous administration in DRA. In accordance with the provisions of the Soviet-Afghan treaty of 1978, a decision has been made to send the necessary contingent of the Soviet Army to Afghanistan.

Excerpts from a Plenum of the Central Committee of the CPSU dated June 23, 1980:

Brezhnev: Not a day goes by when Washington has not tried to revive the spirit of the “Cold War,” to heat up militarist passions. Any grounds are used for this, real or imagined.

One example of this is Afghanistan. The ruling circles of the USA, and of China as well, stop at nothing, including armed aggression, in trying to keep the Afghanis from building a new life in accord with the ideals of the revolution of liberation of April 1978. And when we helped our neighbor Afghanistan, at the request of its government, to give a rebuff to aggression, to beat back the attacks of bandit formations which operate primarily from the territory of Pakistan, then Washington and Beijing raised an unprecedented racket. Of what did they accuse the Soviet Union: of a yearning to break out to warm waters, and an intention to make a grab for foreign oil. And the whole thing was that their plans to draw Afghanistan into the orbit of imperialist policy and to create a threat to our country from the south crashed to the ground.

In the Soviet act of assistance to Afghanistan there is not a grain of avarice. We had no choice other than the sending of troops. And the events confirmed that it was the only correct choice.

Excerpts from a Politburo transcript of the Central Committee of the CPSU dated November 13, 1986:

Gorbachev: In October of last year in a Politburo meeting we determined upon a course of settling the Afghan question. The goal which we raised was to expedite the withdrawal of our forces from Afghanistan and simultaneously ensure a friendly Afghanistan for us. It was projected that this should be realized through a combination of military and political measures. But there is no movement in either of these directions. The strengthening of the military position of the Afghan government has not taken place. National consolidation has not been ensured mainly because comr. Karmal continued to hope to sit in Kabul under our assistance. It was also said that we fettered the actions of the Afghan government. All in all, up until now the projected concept has been badly realized. But the problem is not in the concept itself, but in its realization. We must operate more actively, and with this guide ourselves with two questions. First of all, in the course of two years effect the withdrawal of our troops from Afghanistan. In 1987 withdraw 50 percent of our troops, and in the following [year]—another 50 percent. Second of all, we must pursue a widening of the social base of the regime, taking into account the realistic arrangement of political forces. In connection with this, it is necessary to meet with comr. Najib, and, possibly, even with other members of the CC PDPA Politburo. We must start talks with Pakistan. Most importantly, [we must make sure] that the Americans don’t get into Afghanistan. But I think that Americans will not go into Afghanistan militarily.

Akhrome’ev: They are not going to go into Afghanistan with armed forces.

Dobrynin: One can agree with USA on this question.

Gorbachev: We must give instructions to comr. Kryuchkov to meet with Najib and give him an invitation to visit the Soviet Union on an official visit in December 1986. It is necessary to also tell comr. Najib that he should make key decisions himself. Entrust comrades Shevardnadze Eh.A. (roll-call), Chebrikov V.M., Sokolov S.L., Dobrynin A.F., Talyzin N.V., and Murakhovsky V.S., taking into account the discussion which took place in Politburo meetings, to coordinate, make operative decisions, and make necessary proposals on solving the Afghan question and settling the situation around Afghanistan.

—Taken from the alternativeview.com web site.
Provided by M. Kramer; translation by D. Rozas.

In short, if the Soviets are guilty of invading Afghanistan, then the United States of America is equally guilty of having invaded Vietnam in the 1960s. The parallels are disturbing. We went into Vietnam at the behest of their government, which did not enjoy the support of most of the Vietnamese people. When President Diem became “inconvenient” to America’s goal of stopping the “communist threat” in Vietnam, we encouraged a coup against him, which resulted in Diem’s death on November 2, 1963. According to one account, President John F. Kennedy “turns a ghostly shade of white and leaves the room” when he is informed of the deaths of Diem and his brother, Nhu. Later, Kennedy wrote in his personal diary, “I feel that we must bear a good deal of responsibility for it.”

Eventually, overwhelmed by the guerrilla war being waged against them by the Viet Cong and North Vietnamese regulars, the Americans capitulated and—like the Soviets in Afghanistan—cleared out. In both cases nothing useful was accomplished by armed intervention.

But neither in Afghanistan nor in Vietnam can the intervention reasonably be called an “invasion.”

Today, Vietnam has a vibrant, working economy. Predictions of “doomsday” by critics have proved to be yet another right-wing delusion. Most of the people of that country appear to be better off today than they were before the Vietnam War. And the notorious “domino theory” has died a silent, unlamented death.

Free Will or No?
Those who argue that there is no such thing as free will and that everything we do is predetermined by genetics (or whatever) are overlooking two important aspects of human behavior: indecisiveness and the changing of minds.
 
Indecisiveness, where one vacillates between two choices that seem almost equally correct under the circumstances, could not exist if there were no such thing as free will. If everything were predetermined and dictated by a mind that works something like an automaton, indecisiveness could not happen. One would simply do whatever one was programmed to do under the circumstances, and have done with it. Indecisiveness by its very nature implies the functioning of free will, which in this instance is caught between alternative courses of action that are difficult to choose between.
 
The changing of one’s mind brings up an even more significant challenge to the no-free-will theory. If one were always programmed to act in specific ways under particular circumstances, then there would be no changing of minds. Again, one would simply do what one was programmed to do, and that would be that. If a mind is changed under specific circumstances and the no-free-will theory is correct, then one of two scenarios must apply. (1) If the first choice were the programmed one, then the second one, being different, would in and of itself prove the existence of free will. (2) If the first choice were not the programmed one, then it would itself be evidence of free will, which asserted itself by making the “wrong” choice before the programmed choice overruled it. In either case, some sort of free will must have come into play in either the first or the second choice, no matter which one was deemed to be the “programmed” one.
 
If there is no such thing as free will, then all religions are simply nonsense. And all criminals should be found not guilty, since there would have been no way they could have avoided doing whatever it is they are accused of having done. The concept of justice and the existence of systems whereby justice is administered are predicated on the existence of free will, since only a person possessed of a free will would be capable of choosing not to do something because it was against the law. Otherwise, if he were programmed to commit the act he would be powerless to avoid doing so; and if not, he would not do it anyway, law or no law.