Robots use a lot of power compared to a human. At first thought, this might seem obvious – after all, can’t they fly and crash through walls? But the real story is that humanoid robots are delicate – more like a 95-year-old stepping very carefully so they don’t fall and break a hip. Their problems are sensory, cognitive, and motor. Compared to a human, they break very easily.
This isn’t really surprising. Most robots are “toy” demonstrations for a grad student thesis, or a way to wow investors into pouring in money. As demos, they only have to work for a short amount of time. However, creating Robots That Jump means making resilient robot bodies that don’t need constant maintenance. Their “minds” also have to be resilient – able to adapt to novel stimuli. But time, space, and money have mostly prevented humanoid robots from becoming robust enough to use in the real world.
To solve these problems, engineers apply “technology.” But the technology is not as new or fast-changing as some think. Many of the hardware and software solutions in robotics are decades old. The engineers aren’t developing novel systems; they’re building systems that use more power.
As the push to create the future of robots continues, brute power is substituting for an understanding of how to make agile, robust robot bodies. Rising power use in robots runs straight into issues of sustainability and climate change. Like the so-called “metaverse,” robots are energy-hungry solutions to problems that have mostly been solved – but they enable a business model.
I’ll start with robot brains, since this is easier to discuss. Frankly, I don’t understand the problems with building a robust robot body that doesn’t break a hip when falling. That really requires a discussion by a mechanical engineer. I’ll just note that the human body uses a few hundred watts of power, tops. Humanoid robots use thousands to tens of thousands of watts just to walk, and need heavy but apparently fragile bodies to mount those big, flammable lithium batteries. Higher-powered devices (e.g. the failed DARPA robot “Mule”) used gasoline engines to get the sustained power they needed – while the human body is silent in motion. So there’s a problem, but I won’t go further in this post.
On the other hand, I do know something about coding. I was team leader for one of the 2005 DARPA Grand Challenge driverless car entries, and tried to program said vehicle. We didn’t get that far, but I am more of a “Subject Matter Expert” here. So, I’ll discuss robot brains and power, specifically referencing two articles about “neural net” based AI.
To begin with, a general-purpose robot that could do and learn a variety of tasks would need the type of adaptable brain that an animal has. And it is true that AIs using newer “deep learning” algorithms and multiple layers in their neural nets have gotten better at pattern recognition – lots better, in fact. Face recognition – so long as you’re not dark-skinned – is pretty good. The technology is pretty well standardized.
So, if the robot with an embedded or networked AI can recognize faces, is this a stepping-stone to general recognition of the environment?
The problem with “neural net” systems that are trained by “deep learning” is that they are very specific. You can use a neural net plus deep learning to train a system to differentiate, say, shoes from similarly-shaped objects. You’ll get somewhat useful results – typically 80% with a moderate training set. With larger training sets, you might get to 90%. All is good, right?
The problem is that, in order to push up the accuracy or make finer discriminations (e.g. women’s versus men’s shoes), you’ll need a bigger network and a bigger training set. And the training will consume a lot more power. As you push toward 100% recognition accuracy, the cost rises nonlinearly. This makes very high accuracy (e.g. 99%) almost unobtainable, even for companies as large as Google. The problem is detailed in the following article:
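This nonlinear scaling is easy to see with a toy model. The sketch below assumes training compute grows as a power of the inverse error rate; the exponent `k=4` and the 20% baseline are hypothetical illustrative values, not measured figures.

```python
# Toy model of nonlinear training cost (illustrative only):
# assume compute ~ (1/error_rate)^k for some hypothetical exponent k.

def relative_compute(error_rate, k=4):
    """Compute cost relative to a baseline 20% error rate,
    assuming cost ~ (1/error)^k. The exponent k is made up
    for illustration, not taken from any real measurement."""
    baseline_error = 0.20
    return (baseline_error / error_rate) ** k

for acc in (0.80, 0.90, 0.95, 0.99):
    err = 1 - acc
    print(f"{acc:.0%} accuracy -> {relative_compute(err):,.0f}x baseline compute")
```

Even with a modest exponent, each step toward 100% costs vastly more than the last – going from 95% to 99% accuracy multiplies the (hypothetical) compute bill by hundreds of times.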
Deep Learning’s Computational Cost
A basic feature of modern neural networks is that they need to be “trained.” Most non-experts imagine an “AI” as a rules-based system. In reality, the programmers create a randomly initialized neural net, then apply a “learning set” to set the weights in the network. The training cycles take a huge amount of power, and more and more power is needed to reduce the error rate. Training a network to, say, 95% accuracy can take the power of a large city for a month(!)
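For readers who have never seen it, “training” can be sketched in a few lines. This toy example fits a single weight by gradient descent; real deep nets do exactly the same thing with billions of weights over millions of examples, which is where the power goes.

```python
# Minimal sketch of what "training" means: start from a random weight,
# then repeatedly nudge it to reduce error on a training set.
import random

random.seed(0)
w = random.uniform(-1, 1)                    # start with a random weight
data = [(x, 3.0 * x) for x in range(1, 6)]   # "learning set": true weight is 3.0

lr = 0.01                    # learning rate
for epoch in range(200):     # every pass costs compute (and therefore energy)
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # gradient of the squared error
        w -= lr * grad              # adjust the weight downhill

print(round(w, 3))  # converges near 3.0
```

Each weight update is cheap; the cost comes from doing trillions of them. Halving the error typically requires far more than twice the updates, which is the nonlinearity discussed above.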
This article discusses actual energy use by real deep learning systems, rather than the extrapolation above: It takes a lot of energy for machines to learn – here’s why AI is so power-hungry (theconversation.com)
Some energy-measurement tools for the operation of the AI (not the training stage, which is where the energy listed above goes): AI industry, obsessed with speed, is loathe to consider the energy cost in latest MLPerf benchmark | ZDNet
Think of it: to train an AI system to recognize objects at 95% accuracy, you need the power the city of New York consumes in one month. To reach 99%, you might need the power the entire United States consumes in a month.
This problem is well known in the actual AI industry (as opposed to the marketers and shills looking for investment) as “Red AI.” A recent discussion of the problem can be found in the paper below:
Green AI (2019), Roy Schwartz, Jesse Dodge, Noah A. Smith, Oren Etzioni
A great quote from the abstract summarizes the problem:
“…The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018. These computations have a surprisingly large carbon footprint. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research…”
Consider: our advances in “deep learning” have required a 300,000x rise in computation, with a correspondingly huge rise in energy consumption. I guess we’re gonna need more power!
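The quote’s numbers can be sanity-checked with a little arithmetic: a 300,000x increase over the six years from 2012 to 2018 works out to a doubling roughly every four months, matching the paper’s “doubling every few months.”

```python
# Back-of-envelope check of the Green AI quote's growth rate.
import math

growth = 300_000
months = 6 * 12                      # 2012 -> 2018
doublings = math.log2(growth)        # number of doublings in that span
doubling_time = months / doublings   # months per doubling
print(f"{doublings:.1f} doublings, one every {doubling_time:.1f} months")
```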
The authors go on to propose a “Green AI” that includes sustainability as a goal, not just speed and efficiency. They want grad students to be able to work on AI without having to spend tens of thousands of dollars on electrical power.
In certain environments this is acceptable. Some pattern-recognizer AIs work well. Face recognition is a good example: all humans have pretty similar faces, and we don’t have to train networks to recognize a variety of non-human faces. True, the networks often only recognize white faces reliably, but perhaps we can burn a few thousand tons of coal for each and be done with it (until fashions and makeup change).
So, you might be able to justify burning all that coal to get a good facial-recognition system for, say, passenger identification in an airport.
But in natural, uncontrolled environments (like those encountered by a driverless car), you need thousands of trained AIs, all firing at once, and the system’s response to novel patterns not in the training set will be unpredictable. Take driving: traffic signs are standardized, and it isn’t too hard to recognize a stop sign. But what if the sign is broken, and a hand-drawn temporary sign is present? The system will almost certainly fail.
Basically, this is an unsolvable problem with our current technology. In theory, the robotics industry could grab half the world’s power for 5 years and get better recognition. But it won’t happen. It’s going to be a huge challenge to shift from fossil fuels to renewables in the next few decades, and nobody is going to support burning a massive amount of coal to improve robot vision.
Even if we did, we would need huge, expensive 5G networks. You can’t run the neural net onboard – it’s too big. Instead, the robo-car needs to be connected to “the cloud” at very low latency and high bandwidth. So, burn megatons of coal to build out 5G.
So, we can’t build good driverless cars with current “deep learning” neural nets. We’d burn through power that should be diverted to, say, building renewable energy. The power requirements of robots are that big. We can’t supply the power of multiple cities just to make AI-based shopping better!
Instead, robot-pushers push two ideas. The first is infrastructure. If we can’t build pattern-recognizer AI that works in the real world – well, change the real world! Make streets and crosswalks perfectly regular. Paint every crosswalk and lane divider in high contrast. Put sensors or IoT devices into the road. Make every building more square. Basically, make the world look like the 3D game simulations often proposed for training driverless cars.
This gets ridiculous. Before long, pedestrians who don’t want to be run over will be required to wear “robot friendly” clothing – loud colors, clear delineation of arms and legs, ultrasound reflectors. We will all need a Minecraft costume so the robots don’t get confused. Don’t worry; it’s FUN.
Basically, this so-called “solution” requires that people and the environment become more “machine-like” – so we can have “the future” of the promised robot world. But the real world is quite messy.
But the cost, and “embedded energy,” needed to change the physical world would be huge beyond huge – the current US infrastructure bill, though in the trillions of dollars, would barely start the process.
In short, we would have to expend massive amounts of energy just to make the world safe for robots. We won’t do that – we’ll use the energy to build windmills and nuke plants instead.
Remember…the whole idea of “technology” is that it is supposed to be faster, cheaper, and better than human work – and more efficient. This is exactly the opposite.
The second idea for fixing the “robot brain power problem” is even more insidious. Put on a headset and pretend your company has created a “self-driving” car. In reality, just fire some people and have a smaller number of people monitor the robots, fixing their mistakes, like this poor slob…
Currently, most self-driving car experiments have an extremely bored, presumably minimum-wage human sitting in the driver’s seat, supposedly ready to react when the robot fucks up. Now, this won’t sell – you won’t convince people the car is “self-driving” if you get a bored human along with your car purchase. These people were supposed to disappear as the robots got better – but it has been close to a decade, and they are still there. Clearly, robot learning has run into limits, energy and otherwise.
So, driverless cars are currently a failure. The promises of 2015 have not been kept. So what to do? What to tell those investors and tech blogs?
The new proposed solution is to roll out a “5G” network that allows fast, real-time video transmission from the robo-car to a building somewhere with low wages. In that building, a group of incredibly bored, minimum-wage humans watches 9-12 computer screens, each showing a different robo-car. These bored people have to “adjust” each self-driving car’s behavior every few thousand feet or so. They are remote, so buyers don’t get that annoying human in their “self-driving” car. Yay!
Now, consider this from a systems perspective. You don’t have to wear a tinfoil hat to see that ubiquitous 5G low-latency, real-time networks will require an enormous amount of energy and resources. 5G runs at higher frequencies with shorter range and more line-of-sight restrictions, so you need lots of 5G transmitters – several on a single city block. 5G needs bigger and faster towers and central processing server networks. Lots more power!
In other words, by removing the people, you have swapped in a lot of power consumption.
And what for? Basically, you’ve admitted that you can’t remove humans. So you minimize the number of humans – basically an affront to your beautiful machines – and give the ones who remain the nastiest, grungiest jobs possible. The real goal is to eliminate some human drivers (so you don’t have to pay them) and pile the remaining driving risk, responsibility, and attention onto a smaller number of remote tele-operators. You accomplish this by burning a bunch of extra energy.
As a bonus, you convert a skilled driving job into a lowly, minimum-wage, security-guard-type gig. Your benefit is that your company drops the healthcare plans of those “redundant” humans, and you get your holiday bonus. Finally, since the operators are remote, you can pretend that you have actually achieved your goal of human-style artificial intelligence, and can post all those photoshopped images of blissful people doing nothing in their cars – except watching ads, presumably…
The energy equation is a little better for long-haul trucks, but you still need a fast, real-time connection between the robo-trucks and their human leaders, plus the local onboard super-cruise-control AI.
But…there’s already a system that allows a bunch of cargo containers to “follow the leader” perfectly with just one person in charge.
It’s called a train.
Consider how dumb robo-trucks are from a “green” perspective. A train can easily manage the equivalent of 1,000 mpg for a few tons of cargo; a comparable truck gets about 10 mpg. So you burn roughly 100x as much fuel to move things by truck.
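Using the post’s round numbers (real comparisons are usually quoted in ton-miles per gallon, but the ratio comes out in the same ballpark), the arithmetic is:

```python
# Fuel-use ratio using the post's illustrative round numbers,
# not measured figures for any specific train or truck.
train_mpg = 1000   # post's figure: ~1000 mpg-equivalent for the same cargo
truck_mpg = 10     # post's figure for a comparable loaded truck
ratio = train_mpg / truck_mpg
print(f"Trucking burns ~{ratio:.0f}x the fuel of rail for the same load")
```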
But having robo-trucks go in a “convoy” behind a human driver makes a train-like system with none of the advantages of a train. The trucks follow the driver using a big pile of computers and networks; a train just uses a passive coupler that doesn’t need power. And the flexibility is lost. If the system were flexible, the robo-trucks would be able to drive themselves to their final destinations after exiting the convoy – but that kind of driving is exactly what robot cars can’t do. So you link in a bored remote human operator – again.
Now, the original selling point of a truck is “flexibility” – individual cargos can be routed in real time to stores, something trains can’t match. Sure, this made some sense back in the day when fuel sold at an inflation-adjusted price of $4/gallon and wasn’t a big cost. But in a world trying to reduce fossil fuel consumption, it doesn’t wash – it makes sense to pick the less energy-intensive system.
But what about “electrifying” the trucks? Well, batteries have low energy density. If you give the trucks the range of current diesel trucks, your payload drops to almost nothing. So your robo-convoy will either have to stop frequently to recharge, or carry much less stuff. You could just electrify the trains!
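A rough sketch of why the payload collapses: the energy densities below are ballpark public figures, and the truck parameters are hypothetical round numbers, not specs for any real vehicle.

```python
# Sketch of the battery-vs-diesel mass problem (illustrative numbers).
DIESEL_KWH_PER_KG = 11.9      # ~specific energy of diesel fuel
BATTERY_KWH_PER_KG = 0.25     # ~good lithium-ion pack, ballpark
DIESEL_EFFICIENCY = 0.40      # assumed engine efficiency
EV_EFFICIENCY = 0.90          # assumed electric drivetrain efficiency

diesel_kg = 1_000             # hypothetical long-haul fuel load (~300 gal)
usable_kwh = diesel_kg * DIESEL_KWH_PER_KG * DIESEL_EFFICIENCY

# Battery mass needed to deliver the same energy to the wheels:
battery_kg = usable_kwh / (BATTERY_KWH_PER_KG * EV_EFFICIENCY)
print(f"~{battery_kg / diesel_kg:.0f}x the fuel mass in batteries")
```

Under these assumptions, matching a tonne of diesel takes on the order of twenty tonnes of battery – which is most of a heavy truck’s legal payload. A train dodges the problem entirely by taking power from overhead wires.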
Let’s sum up how robots and power interact in our supposed “future”:
- Use the energy needed to power large cities for months to train neural nets
- Fire some people doing a boring, but not incredibly boring job
- Train a minimum-wage, out-of-sight “gig worker” to do the incredibly boring job of fixing the robots’ mistakes.
- Raise risk, reduce quality of service
- Executive bonus
To summarize, the push for humanoid robots and “self-driving” vehicles will require gigantic amounts of energy, either for monster neural-net training runs or for reworking infrastructure into Minecraft. That huge amount of energy would be taken away from making our systems more sustainable; the increased energy demands of a robot world will make getting to sustainable use harder. And the result of partial robotization – the only kind we can do today – will be job destruction, employees pushed into a smaller number of crappier jobs, and a more fragile service. Think of a phone tree versus getting a human support person. Think of a human walking down the street, versus a robot trying not to fall, plus its human “assistor” constantly adjusting it so it doesn’t fall.
This is the reality of the robot future currently being sold.