Humans Are Animals, Too
By Terry Calhoun
How often do you have bright ideas that you are at least momentarily convinced would make you fabulously wealthy if you had: (a) the time, (b) the connections, and (c) the business savvy to make something of them? I’m not sure if this is a common thing for others, but it happens to me with some frequency. I have a ready supply of personal “idea inventions” that are making someone else very rich.
I keep getting reminded of past “idea inventions” in the strangest places and at the strangest times. This time, it was in the San Francisco airport, between my flight from Honolulu to SF, and my later flights to Minneapolis, then to Detroit. In the middle of “Brainy Robots Start Stepping Into Daily Life” (New York Times, Tuesday, July 18, 2006, A1, C16), which I was reading avidly because the book I just finished also had a theme of artificial intelligence, I was reminded of an idea from 15 or 20 years ago which is now a commercial product.
It’s called Poseidon. The idea is simple: the water in a swimming pool is constantly scanned by computers, which identify certain kinds of movements of the bodies in the water and notify lifeguards when someone is likely to be drowning. It’s currently in use in Europe. I suspect that the main reason that it’s not yet in the U.S. is the same reason I thought it was an attractive idea in the first place. Once it’s in use in the United States it is likely to become a “standard” so quickly that every public pool will need to have one for liability reasons. (That makes it tough to introduce, because no one wants to be the first to spend money on something that no one has to have yet.)
A lot of my “idea inventions” from that period are related to child safety. Since my children are now 17, 21, and 22, it’s no surprise that was a theme of my thinking 15 to 20 years ago. (No one has yet brought the inflatable toddler head protection device to market.) The Poseidon idea was a good one. How cool would it be to own the rights to an expensive, high-markup system that eventually everyone who operates a swimming pool will have to purchase?
Up until even this week, I would not have categorized Poseidon as an “artificial intelligence” (AI), but I now realize that was due to a mental block. I’ve read so much science fiction in my life that I have tended to reflexively think of AI only in terms of the highest levels of intelligence and functionality. You know: the kind of computer intelligence that could easily pass the Turing Test. (I guess I’m only human.)
Some science fiction writers and others speculate about AIs developing so quickly beyond the scope of human understanding that they rapidly evolve and go out and do their own things in the universe. In some future realities, they do so while protecting human life. In others, we’re just part of the environment and our created beings treat us just like we’ve treated our environment throughout human history. Scary, actually, when you think about how fast AI is coming about.
In the book I devoured between Honolulu and San Francisco, The Armies of Memory by John Barnes, those are called “aintelligences.” (AIntelligences, get it?) The book does a good job of hypothesizing about some possible consequences of the development and regular use of high-level artificial intelligences. However, as I read it, my mind kept turning back to an essay and subsequent commentary thread I had read earlier in the week.
One of the themes in the essay that resonated with me was the idea of us humans getting over the seemingly inherent belief that we are not animals, when in fact we are. Humans are primates, mammals, and animals, yet it is so difficult to speak of that fact using our language. I’ve tried very hard with my children. Countless times as they grew up, they heard me say: “Humans are animals, too.”
The problem is that when you contrast Homo sapiens with other animals it is so easy to refer to the “other animals” as just “animals,” which linguistically suggests that we humans are something entirely different. This language difficulty may well be one of the reasons why the majority of supposedly “naturally intelligent” Americans claim, in polls, not to believe in evolution.
But humans are animals, even though we have trouble accepting that. So you just know that once we have true AIs among us, we’re going to have a heck of a lot of trouble with the boundaries of what that means. And AIs brainy and accomplished enough to cause problems for our ways of thinking may yet appear in my lifetime. Despite my love of science fiction, that is not something I had previously thought or written about.
I read earlier today that in the field of AI: “At conferences you are hearing the phrase ‘human-level AI,’ and people are saying that without blushing.” (“Brainy Robots Start Stepping Into Daily Life,” New York Times, Tuesday, July 18, 2006, A1, C16.) When you know, as we do now, that even our laptops will soon have the same computing power as a human brain, then the time seems closer than I had previously thought.
Unfortunately, we’re still the same humans who can’t reflexively understand that we are animals, too. That’s partly just because it’s so easy to say “animals” instead of “nonhuman animals.” That probably means that, no matter how bright our AI creations get, we’ll stick to labeling that intelligence as “artificial.” I’m betting at this point – July 2006 – that some day our insistence on labeling our intelligent creations with the artificial label will cause us as many problems as does our current insistence that humans aren’t animals.