After MyDoom, the “wired” are mad at the great unwashed Dummy
An extremely interesting post appeared on Ars Technica, pointing to this article in the International Herald Tribune (the article was actually from the New York Times). In “New digital divide pits the more adept against the dummies”, author Amy Harmon describes the self-righteous anger of the tech-savvy digerati over the spread of the MyDoom virus – a spread enabled, they say, by the fact that most people remain ignorant of the details of how their computers work. The techies, who once eagerly invited novices to join the computer revolution, now attack these same people for not becoming sophisticated in the use of computers. As the article says:
“Some in the techno camp imagine requiring a license to operate a computer, just as for driving a car. Others are calling for a punishment that fits a careless crime. People who click on virus attachments, for instance, could automatically be cut off by their Internet service providers until they proved that their machines had been disinfected.
And some, tired of being treated like free help lines, are telling friends, relatives and random acquaintances to figure it out on their own. “
The attitude of the Dummies – this appears to be the name the tech-elite prefers for the rest of humanity – ranges from guilt to confusion. After all, they were told to join the computer revolution, and furthermore told how easy and fun it was. The geeks told the Dummies about the riches of computing and wired life. So they did. But it wasn’t easy for the Dummies – the geeks lied to them. Running a computer is far more complex than running a car. Computer systems require constant updates, tuning, reformatting, checks, configuration and optimization. Unlike real-world objects, the configuration doesn’t make sense. Computers today are machines that come loaded with arbitrary rules created by programmers that the Dummies are supposed to memorize for their own good.
A few Dummies realize this, and defend their lack of knowledge. From the article:
“…But his girlfriend, Miriam Tauber, 24, makes no apologies for her lack of computer knowledge. To her, computers are like ‘moody people’ who behave illogically. If people like Rubenstein expect her to understand them, she suggests, perhaps they should learn to speak in a language she can understand, rather than ridiculous acronyms and suffixes.
‘There are these MP3s and PDFs and a million other things that you don’t even know what they are,’ Tauber said. ‘I don’t feel like I need to figure out computers, because my instinct is there’s just no way.’”
This Dummy hits the core problem of computers today – along with the reason they will be replaced in our society by Robots That Jump. Computers are truly like irrational or insane people. Irrational people have lots of arbitrary, contradictory rules that they repeatedly impose on the sane people around them. We have to adjust to their strange demands, weird behavior, sudden freak-outs, etc. Insane people are loaded with bizarre little mannerisms, tics, and illogical behavior. Many have little dream-worlds they live in, divorced from reality – the first “virtual worlds.”
Computers are like insane, moody people because they aren’t aware of themselves – they try to be a world instead of a thing. They do not reflect on the illogic of their configuration. Like a world and unlike an animal, they just “are”, and don’t try to control their behavior. We just have to accept it. People are expected to react to their illogical, arbitrary, rules-based behavior and tease out the reasons like a therapist. Once we figure it out, we carefully control our actions around our computers so they don’t flip out.
The fact is, this state of affairs suits the techno-beasts just fine. They have trained themselves to constantly learn the fresh set of nonsense configuration numbers, unspoken rules, monster lists of “features,” workaround kludges and ever-present tweaking that is needed for a PC to run. In doing so they get a sort of petty power over their hapless lump of silicon clay – they can manipulate this messy thing to do their bidding.
Little wonder the wired geeks can only conceive of robots as a sort of unruly servant that might not need configuring, or might even resist it. Where’s the fun in that?
One thing I ask my introductory classes: “Who is dumb here – you or the computer?” The answer, of course, is the computer, the Internet, and all current computing technology. It is deliberately dumb so it can be controlled at a fine-grained level. That fine-grained control means that we must spend enormous amounts of time figuring things out for the machine. An extreme example of this is the MER Mars rovers. Granted, NASA has a reason to exert ultra-fine control over these machines, due to their remoteness. But even with all that control, Spirit still had a problem with file management in its “flash” memory.
Computers are kept deliberately, impossibly hard to use because programmers see the computer as an artificial world where they play God, setting up whatever rules they want. Computers appear maliciously dumb to the Dummies because the Dummies are forced to abide by these ill-thought-out, contradictory, arbitrary rules in order to enter the computer’s “metaphor” or virtual world and get work done.
My message to the digerati: It is the height of arrogance to blame the Dummies for the brittle, card-castle world of cyberspace you’ve created. So we’re just supposed to “know” not to click on email attachments so we don’t get a virus. Why did you, the tech-savvy, make a computer where a virus can be propagated in this way? Isn’t this like leaving open bottles of bleach around a nursery? Why are you then angry that we, the computer newbies, haven’t already memorized rule #1933 about using the computer, and respond naturally to a bit of social hacking? It is like tricking a kid with candy.
In the real world you aren’t allowed to create dangerous environments. An employer who runs a workplace where people are easily injured is sued. The employer won’t win the suit if they point to rule #228 and say the employee didn’t memorize it. You are supposed to make the environment safe for people who haven’t memorized every inch of the place. But in the computer world, virtual environments just as dangerous are tolerated in the name of “features” or “fine-grained configuration.” Why can’t I sue the tech geek who led me down this virtual dark alley and lied to me about its safety?
Why do you, the digerati, like a world where only the cyberspace gurus, performing memory tricks like a carnival sideshow, can confidently step through the roomful of landmines your world of “computing” has become?
And why do you actually think we want to enter this world even more fully? Why do you want to strap these demanding, unpredictable machines on us 24/7? Is it so we are forced to be like you, who spend all your time scanning tech news, reading manuals, memorizing tricks and tips simply because some programmer or interface designer requires us to do so?
And “user friendly” isn’t the answer. The PC has never been user friendly, and we have been hearing the cry to make computers “user-friendly” since the early 1980s. They have gotten harder to use rather than simpler. Tests on schoolchildren show that they learn 1970s-style “command line” computer interfaces just as easily as a windows-and-menus interface. Attempts to create user-friendly computers either make them into special-purpose systems (a one-trick electronic word processor, for example) or just add another layer of junk to memorize. So I get a “smart” answering machine that can monitor calls. Great. Now I have to configure it to do the monitoring, and constantly change its configuration when it breaks, gets infected, or needs an upgrade.
The reason for the nasty behavior of computers, and for the self-appointed experts slamming the Dummies, goes back to the nature of computing itself. All our present paradigms imagine the computer as a sort of little toy world whose features we have to memorize. Moving in that toy “cyberspace” world requires learning a set of rules which have the arbitrary nature of a game. People who like gaming and ultra-detailed fantasy kingdoms will prosper. Those of us plugged into the real world will fail. And since we don’t like or understand a life of memorizing ever-changing mounds of rules to compute, we are somehow inferior to those who fill their brains up with this self-referential, unreal stuff.
The response to our complaints? From the techno-digerati we get “The Matrix” – which says we can’t escape computers – we are in fact prisoners in a computer matrix which can be defeated by (surprise) doing the things that cool tech-nerds do. All the ordinary shlumps in “The Matrix” series are killed, possessed, duped, etc. – all because they didn’t understand the nature of the game – in other words, how to use their computer very well.
I wonder if future generations will even understand these silly ideas – they may well think that we thought evil spirits lived in those midrange towers and plastic monitors.
In contrast to these bug-laden PCs, robots present a different story. True robots (rather than the useless software ’bots touted on the Internet) force themselves to learn our world and interact with it. The burden is on the machine, not the human. If a robot fails to walk or roll across the room, it is the robot’s fault – not the fault of the human for improperly “configuring” said room.
I like that. I’m looking forward to the day when we don’t have to worry about viruses attacking our computers, because they won’t be these brittle card-castle games that let viruses in as players. They will build their own understanding of the world, and their own rules along with it. There will be other problems with true robots. We won’t be able to simply insert knowledge into a true robot via software brain surgery – it will have to learn things for itself.
I’ll have to train it to know what I want, like a dog. But even a Dummy can train a dog.
Dragon Eyes and Wheeled vs. Legged – What’s important is the jump
After a New Year lull, we seem to be seeing more robot news. First, it looks like Hans Moravec (CMU professor, of “Robot” fame) has gotten his startup robot company, SEEGRID (http://www.seegrid.com), up and running. A trip to the SEEGRID website shows that the advanced visual processing system developed by Moravec is on the verge of being commercialized. While SEEGRID moves into a market already occupied by Evolution Robotics (http://www.evolution.com) and numerous industrial robot platforms, the SEEGRID system is clearly more advanced. Instead of simply recognizing two-dimensional shapes (as the Evolution product does), SEEGRID builds a complete 3D map of the surrounding environment. The quality of the map begins to approach something closer to human vision.
IMHO, the Evolution product and SEEGRID may be complementary. The Evolution product is suited for quick recognition of shapes and environmental landmarks and is good for low-power robotic applications. In contrast, the SEEGRID system builds a much more elaborate representation of the environment. It could be used by animators to capture an environment for a game or movie, in a way comparable to the “motion capture” used to animate characters in current films. It is also optimal for larger robots – e.g. it may make the retrofitted “robotic forklift” a reality. Good luck to Moravec and everyone brave enough to jump in at the start of the robot revolution.
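To give a flavor of what “building a map of the environment” means, here is a toy sketch of the evidence-grid (occupancy-grid) idea that Moravec pioneered in his earlier research. This is only an illustration of the general technique, not SEEGRID’s actual algorithm; the grid size, sensor model, and log-odds increments are all made-up values for the example.

```python
# Toy 2D "evidence grid": each cell stores the log-odds that it is
# occupied. A sensor ray pushes cells along the ray toward "free"
# and the endpoint toward "occupied". Illustrative sketch only --
# not SEEGRID's real (3D, vision-based) system.

GRID = 10
grid = [[0.0] * GRID for _ in range(GRID)]  # 0.0 = unknown

HIT, MISS = 0.9, -0.4  # log-odds increments (assumed sensor model)

def integrate_ray(grid, free_cells, hit_cell):
    """Fold one range reading into the grid."""
    for (x, y) in free_cells:      # cells the ray passed through
        grid[y][x] += MISS
    hx, hy = hit_cell              # cell where the ray stopped
    grid[hy][hx] += HIT

def occupied(log_odds, threshold=1.0):
    """Call a cell occupied once the evidence is strong enough."""
    return log_odds > threshold

# A robot at (0, 5) looks right and sees a wall 4 cells away;
# a second, agreeing reading strengthens the evidence.
for _ in range(2):
    integrate_ray(grid, [(x, 5) for x in range(4)], (4, 5))

print(occupied(grid[5][4]))  # wall cell
print(occupied(grid[5][2]))  # cell the ray passed through
```

The point of accumulating evidence rather than trusting a single reading is that noisy sensors (or noisy stereo-vision matches) average out over time, which is why grid maps became the workhorse of indoor robot navigation.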
Which brings us to the DARPA-sponsored “Grand Challenge” robot auto race, currently scheduled for March 13, 2004. Look at the revised website at http://www.darpa.mil/grandchallenge/. The race currently has 35 teams fielding autonomous robot cars on a course between Los Angeles and Las Vegas. I also note that DARPA has teamed up with SCORE International Off-Road Racing for this event – it will be a real sports event – ESPN next?
One issue with the race is whether these are “old-style” robots or something akin to the new-wave robots that jump. True, wheeled robots have been around for a long time. But this race uses wheeled robots for the sake of retrofitting existing autos – NOT to simplify the challenge of robot navigation. Compared to the laboratory floor most research robots roll around in, the path between LA and Vegas is rough and challenging. I suspect the winning robot will be one that can do more than plan its path based on vision and GPS signals – it will have to be able to “feel” the road, sense its wheels and axles flexing, hear its own motor, etc. None of the vehicles racing in 2004 is likely to reach this stage, but I suspect they soon will. At that point, the robot cars will be doing what human-controlled off-road racers do – jump, make wild turns, etc. Even with wheels, they will be robots that jump.