Wired for War

Discuss the technology of any incarnation of Ghost in the Shell
Freitag
Posts: 626
Joined: Mon Sep 01, 2008 3:43 pm
Location: Behind you

Wired for War

Post by Freitag »

http://io9.com/5187218/the-3-laws-may-n ... t-warriors
The 3 Laws May Not Be Enough To Guide Robot Warriors

What does the Pentagon think about a possible robot uprising? Is Star Trek's view of combat realistic? We asked P.W. Singer, senior fellow at the Brookings Institution and author of Wired for War.

Earlier this month, we reviewed P.W. Singer's latest book, Wired for War, which examines the growing role of robotics in warfare, but also looks into the human component of technology and how these advances will impact the way in which we perceive - and fight - one another.

You open your book with a reference to the show Battlestar Galactica, and throughout, I caught a huge number of science fiction references, from The Terminator, Star Trek, Short Circuit, The Matrix, and others. How close to the truth do some of these fictional stories come? Will the machines rise up and kill us all?

The book is very much about how the machines that were once only used in science fiction are rapidly becoming battlefield reality.

I use these science fiction references not only because I was the kid who grew up with Star Wars bedsheets and who now consults for the Pentagon, but also because of the very real impact of science fiction on both what we build and how we understand it. New technologies often can seem not merely incomprehensible, but unimaginable. Science fiction, though, helps take the shock out of what analysts call "future shock." By allowing us to imagine the unimaginable, it helps prepare us for the future, even in war.

This preparation extends beyond future expectations; science fiction creates a frame of reference that shapes our hopes and fears about the future, as well as how we reflect on the ethics of new technology. One set of human-rights experts I queried on the laws of unmanned warfare referenced Blade Runner, Terminator, and Robocop with the same weight as they did the Geneva Conventions. At another human rights organization, two leaders even got into a debate over whether the combat scenes in Star Trek were realistic; their idea was that resolving this could help determine whether the fictional codes of the Federation could be used as real world guides for today's tough ethical choices in war. And, of course, every single roboticist knows Asimov's "3 Laws" by heart and they have become the reference point for ethical discussions about robots.

[On the other hand], I don't think we have that kind of world coming any time soon. You can't do a book about robots without dealing with the question of a robot revolt, so there is actually a chapter in Wired for War on it. That is, what do the actual experts in both science and the military think about a robot revolt, whether it's likely, and why it might, or might not, happen? Here's a hint: It is not The Terminator, but The Matrix that may be more informative.

Your book explores how the introduction of robotics changes elements of the military command structure. Specifically you note that generals have the ability to take a far more active role in the actions of a battle. Do you see this change as a positive one for how battles are conducted?

No. This is actually one of those dirty little secrets that people in the military are somewhat afraid to talk about, for fear of risking their own careers. I call it the rise of the "tactical generals."

Our technologies are making it very easy for leaders at the highest level of command not only to peer into, but even take control of, the lowest level operations. One four-star general in the book, for example, talked about how he once spent a full two hours watching drone footage of an enemy target and then personally decided what size bomb to drop on it. Similarly, a Special Operations Forces captain talked about a one-star, watching a raid on a terrorist hideout via a Predator, radioing in to tell him not merely where to move his unit in the midst of battle, but where to position his individual troops.

These enhanced connections certainly help commanders become better informed and take personal responsibility for the situation. Indeed, who knows the commanders' intent better than the commanders themselves? But the line between timely intervention and micromanagement is a fine one, indeed. For instance, the four-star general can do the job of the captains, but those captains can't do the same on the kind of big strategic issues that only a four-star general has the authority and experience to handle. Even more, we have to ponder the long term consequences. What happens when the young officers now being cut out of the chain or micromanaged advance up the ranks... without the experience of making the tough calls?

This leadership issue is not just one for the troops. Civilian leaders are also being tempted to intervene, as they also now have a new ability to watch and decide what's going on in wars. Referencing how President Johnson often tried to influence the broader bombing campaign in Vietnam, a former Service Secretary worried that ultimately "It'll be like taking LBJ all the way down into the foxhole."

My sense on this is that we have to start wrestling with all the tough questions that are beginning to flow out from a world in which science fiction-like capabilities are being used in our very real and very human world.

Fortunately, in looking at future challenges, the lessons of the past remain solid guideposts. For example, on the issue of leadership, General George Marshall, chief of staff of the U.S. Army during World War II, remains an apt model, even for 21st century leaders. New inventions like the radio and teletype may have given him what seemed at the time like a science fiction-like ability to instruct his officers from afar. Marshall's approach, however, was to set the broad goals and agenda, have smart staff officers write up the details of the plan, but ensure that everything remained simple enough that a lieutenant in the field could understand and implement it on their own. Just as the bedrock values of good politics, ethics, and law remain the same, regardless of the technology or century, so does good leadership.

Introducing autonomous systems into battle, either through Predator drones or Packbots, means that there is a huge amount of 'hands off' fighting on the part of soldiers - it takes them out of harm's way, which is generally seen as a positive step for how wars are fought. But what are some of the negative ramifications here?

This is a great illustration of the ripple effects of the future of war onto such areas as our politics.

I thought a former Assistant Secretary of Defense for Reagan put it well when he said, "I like these systems because they save lives. But I also worry about more marketization of war, more 'shock and awe' talk to defray discussion of the costs. People are more likely to support the use of force if they view it as costless."

My sense here is that robots stand to take certain trends already in play to their final, logical end point. With no more draft, no more declarations of war, no war bonds, and now the knowledge that the Americans at risk are mainly just American machines, the already-lowering bars to war may well hit the ground. We may well be seeing this now with the Pakistan drone strikes. We've had over 50 of these strikes into Pakistan over the last year and a half, essentially the equivalent of the opening round of the Kosovo War. Yet, because they are unmanned and none of our people are at risk, we barely even talk about them in our media or politics. I think of SF writers like Vernor Vinge and Joe Haldeman here, and the parallels in our real world.

One of the enduring images of science fiction is that of the humanoid robot, like C-3PO or those from Isaac Asimov's books. But the robots that we have now aren't anything like that. Do you think that we'll see more robots in the future designed so that their forms follow function, or will we move towards humanoid models that are more familiar? Will we avoid human forms in battle so that soldiers don't start sympathizing with the robots?

For all of humankind's progress in making various vehicles to move us from place to place, nothing yet beats our own effectors made for walking. Wheeled vehicles can only operate on 30% of the Earth's land surface, tracked vehicles on roughly 50%, while legs can tackle nearly 100%. Moreover, almost all the adjustments we have made to that surface to make it of value to us, our cities and buildings, were designed for those with legs.

The result is that, while our image of robots as metal humans may come from a mix of Hollywood movies and arrogance, the reality is that this "humanoid" form of two arms and two legs may well be a necessary design for many roles, especially in war. In 2004, DARPA funded a study of optimal military robot forms that found "Humanoid robots should be fielded – the sooner the better."

But the human form is just a shape that robots might take. There is no limit on its size. Asimo, the robot that Honda has spent over $100 million developing, is roughly the size of a person, while Chroino, a robot from Kyoto University, stands just a foot high. Then there are "mechas," basically giant robots. The word "mecha" comes from the Japanese abbreviation meka, shorthand for all things mechanical. Mechas are a staple of video games like Metal Gear Solid and of Japanese manga comics, in which the Tokyo of the future is filled with giant robots that do construction, policing, and, of course, fight wars. In western science fiction, mechas have appeared both as huge, building-sized robots such as the Iron Giant and as slightly-bigger-than-human robotic suits, such as the one famously driven by Sigourney Weaver in Aliens.

With these inspirations in mind, many organizations have taken to making mechas real. Toyota Motor Corp., for example, has developed the i-Foot, a 200 kg robot that stands on two legs and can climb stairs. The most popular military mecha designs borrow liberally from the world of science fiction. Sakakibara Kikai Co., for example, makes the Land Walker, which is effectively the Star Wars AT-ST All Terrain Scout Walker made real (this was the machine that the Ewoks took on in Return of the Jedi). Its prototype stands on two legs, is 11 feet high, and is designed to mount two cannons.

The advantage of such mecha designs is that, just like with humans, legs give such giants the means to step over obstacles that would limit where a truck or tank could go. However, the legs are also the major weakness. Robotic legs remain incredibly complex and expensive, and less capable the bigger they get. Moreover, being tall may allow the mecha to look down on opponents, but it also means that every enemy out there can see it. And even if those enemies are as unsophisticated as the stupid, despicable little Ewoks (who are to blame for the ruination of the Star Wars franchise), all they have to do is take out the legs to ruin the mecha's day.

For similar reasons many disparage the humanoid design, for robots big or small. When most of us look in the mirror, we have to admit that our bodies are not perfect, and not just in the extra pounds on our waistlines or the crooked nose from that old football injury. For example, our visual sensors (our eyes) are quite badly situated, give poor peripheral vision, have multiple blind spots, can't see in multiple spectra, and are blind in the dark.

The result then is that while humanoid robots are a central type of robot form, they will not be the only one. The same DARPA study that extolled the future of humanoid soldiers also found that two legs are not necessarily the optimal form. As Rodney Brooks of the company iRobot (named after the Asimov book, they are the people who brought you the Packbot military robot and the Roomba robot vacuum cleaner) predicts, "In the next 10-20 years, we will get over our Star Wars-Star Trek complexes and build truly innovative robots."

While we've brought up Asimov, we have to talk about his 3 Laws of Robotics, and you mention that the stories are all about the robots breaking those laws, or some sort of conflict with them. Do you think that there is any merit to the introduction of said laws, or some variation to try to protect us from our creations?

When people talk about robots and issues of ethics, they always seem to bring up Isaac Asimov's "Three Laws of Robotics." But there are three big problems with these laws and their use in our real world. The first is that the laws are fiction! They are a plot device that Asimov made up to help drive his stories. Even more, his tales almost always revolved around how robots might follow these great-sounding, logical ethical codes but still go astray, and the unintended consequences that result. An advertisement for the 2004 movie adaptation of Asimov's famous book I, Robot (starring the Fresh Prince and Tom Brady's baby mama) put it best: "Rules were made to be broken." For example, in one of Asimov's stories, robots are made to follow the laws, but they are given a particular definition of "human." Prefiguring what now goes on in real-world ethnic cleansing campaigns, the robots only recognize people of a certain group as "human." They follow the laws, but still carry out genocide.

The second problem is that no technology can yet replicate Asimov's laws inside a machine. Roboticist Daniel Wilson's quote in the book puts it well. "Asimov's rules are neat, but they are also bullshit. For example, they are in English. How the heck do you program that?"
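To make Wilson's point concrete, here is a minimal sketch in Python (the function and predicate names are purely hypothetical, not from any real system) of what naively transcribing the First Law might look like; every hard part ends up hidden inside predicates that nobody yet knows how to program.

```python
# A naive, hypothetical transcription of Asimov's First Law:
# "A robot may not injure a human being or, through inaction,
#  allow a human being to come to harm."

def first_law_permits(action, world_state):
    """Return True if an action is permissible under the First Law."""
    for entity in world_state.entities:
        # Direct injury is forbidden.
        if is_human(entity) and would_cause_harm(action, entity, world_state):
            return False
        # Standing by while harm happens is also forbidden.
        if is_human(entity) and harm_through_inaction(action, entity, world_state):
            return False
    return True

# The catch: each predicate below is exactly the part no one knows how to write.
def is_human(entity):
    raise NotImplementedError("What counts as 'human'? (See the genocide story above.)")

def would_cause_harm(action, entity, world_state):
    raise NotImplementedError("'Harm' is ambiguous: physical, psychological, economic?")

def harm_through_inaction(action, entity, world_state):
    raise NotImplementedError("Requires predicting every consequence of doing nothing.")
```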

The most important reason Asimov's laws are not yet being applied is how robots are being used in our real world. You don't arm a Reaper drone with a Hellfire missile or put a machine gun on a MAARS (Modular Advanced Armed Robotic System) in order not to cause humans to come to harm. That is the very point! The same goes for building a robot that takes any order from any human. Do I really want Osama Bin Laden to be able to order about my robot? And finally, the fact that robots can be sent out on dangerous missions to be "killed" is often the very rationale for using them. To give them a sense of "existence" and a survival instinct would go against that rationale, as well as open up potential scenarios from another science fiction series, the Terminator movies. The point here is that much of the funding for robotic research comes from the military, which is paying for robots that follow the very opposite of Asimov's laws. It explicitly wants robots that can kill, won't take orders from just any human, and don't care about their own lives.

The bigger issue, though, when it comes to robots and ethics, is not whether we can use something like Asimov's laws to make machines that are moral (which may be an inherent contradiction, given that morality wraps together both intent and action, not mere programming). Rather, we need to start wrestling with the ethics of the people behind the machines. Where is the code of ethics in the robotics field for what gets built and what doesn't? To what would a young roboticist turn? Who gets to use these sophisticated systems and who doesn't? Is a Predator drone a technology that should just be limited to the military? Well, too late, the Department of Homeland Security is already flying six Predator drones doing border security. Likewise, many local police departments are exploring the purchase of their own drones to park over high-crime neighborhoods. I may think that makes sense, until the drone is watching my neighborhood. But what about me? Is it within my 2nd Amendment rights to have a robot that bears arms?

These all sound a bit like the sort of questions that would only be posed at science fiction conventions. But that is my point. When we talk about robots now, we are no longer talking about "mere science fiction," as one Pentagon analyst described these technologies. They are very much of our real world.

As robots are used more and more in battle, it should be noted that the United States is not necessarily the leader in their deployment - do you foresee any sort of arms race when it comes to robots? Will all the low-tech methods for disabling robots be part of that arms race?

Yes, the US is certainly ahead now in this revolution. But what should worry us is that in war, there is no permanent first mover advantage. The French and British first used tanks, and then watched the German panzers roll right over them. The same goes in technology. For example, how many readers are now looking at this article on their Commodore or Wang computers?

Today, 43 other countries are working on military robotics of some sort, including Iran, China, Russia, and Pakistan. And we must worry about where the state of American manufacturing and, even more, our science and mathematics education has us headed. What does it mean to send soldiers into battle dependent on computer chips made in China and software written in India?

But, as you note, robots are vulnerable to both high-tech and low-tech responses that target new vulnerabilities. A robot might be targeted for hacking, meaning we will have wars not just of destruction, but of "persuasion," where you try not to destroy the enemy tank but to jam it or take it over. On the low-tech side, though, an incredibly useful technology against one of our SWORDS systems (a machine-gun-armed ground robot) is actually a six-year-old with a can of spray paint. You either have to be incredibly bloody-minded and shoot an unarmed little kid, or watch as they take out the sensors and effectively blind your sophisticated machine.

Not all robotic or automated systems are necessarily mobile, but AI systems still figure into the conduct of war. You note the possible creation of AI aides to assist commanders during combat by crunching variables and predicting the outcome of battles - how far can this go? Will we see computers selecting soldiers individually for missions and locations, with the possibility that soldiers might be predicted to become a casualty before they even reach the battlefield? Where do human effort and intuition come into this scenario, and how long before humans are conceivably out of the loop?

One of the reasons we are turning to machines is that the time needed for decisions in war is getting shorter and shorter. This is what led, for example, to the defense against mortars and rockets in Iraq being turned over to the R2-D2-like CRAM automated gun system. Humans just couldn't fit into the shorter time loop needed to shoot down incoming rockets. This shortening of the decision cycle is not just for the trigger-pullers, but is working its way up the chain to the generals' level. Marine General James Cartwright, commander of US Strategic Command, predicts that "The decision cycle of the future is not going to be minutes. The decision cycle of the future is going to be microseconds."

And thus, many think there may be one last, fundamental change in the role of commanders at war: figuring out just which command roles to leave to humans, and which to hand over to machines.

The world is already awash with all sorts of computer systems that we use to sift through information and decide matters on our behalf. Artificial Intelligence (AI) in your email likely filters out junk mail, while billions of dollars are traded on the stock market by AI systems that decide when to buy and sell based only on algorithms.

The same sort of "expert systems" are gradually being introduced into the military. The Defense Advanced Research Projects Agency (DARPA), for example, has created the Integrated Battle Command, a system that gives military officers what it calls "decision aids." These are AIs that allow a commander to visualize and evaluate their plans, as well as predict the impact of a variety of effects. For example, the system helps a command team building a military operational plan to assess the various interactions that will take place in it, so that they can see how changing certain parameters might play out in direct and indirect ways so complex that a human would find them difficult to calculate. The next phase in the project is to build an AI that plans out an entire military campaign.
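As a rough illustration only (the parameters and scoring function below are invented for this example, not anything from DARPA's actual program), this kind of decision aid amounts to sweeping a plan model across far more parameter combinations than a human staff could ever evaluate by hand:

```python
from itertools import product

def score_plan(units_committed, air_sorties, delay_hours):
    """Toy model: return (expected_success, expected_casualties) for one setting."""
    success = min(1.0, 0.1 * units_committed + 0.05 * air_sorties)
    success *= max(0.5, 1.0 - 0.02 * delay_hours)          # delays erode surprise
    casualties = max(0.0, 10 - air_sorties) * (1.2 - success)
    return success, casualties

# Try every combination of the (hypothetical) plan parameters and keep the best.
best_key, best_params = None, None
for units, sorties, delay in product(range(1, 9), range(0, 13), range(0, 25, 6)):
    success, casualties = score_plan(units, sorties, delay)
    key = (success, -casualties)        # prefer higher success, then fewer casualties
    if best_key is None or key > best_key:
        best_key, best_params = key, (units, sorties, delay)

print("Best setting found (units, sorties, delay_hours):", best_params)
```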

The military intelligence officer version of this is RAID (Real-time Adversarial Intelligence and Decision-making), an AI that scans a database of previous enemy actions within an area of operations to help "provide the commander with an estimate of his opponent's strategic objectives." Similarly, "battle management" systems have been activated that provide advice not only on actions an enemy might take, but also on potential counter-moves, even drawing up the deployment and logistical plans for units to redeploy, as well as creating the command orders that an officer would have to issue. The Israeli military is even fielding a "virtual battle management" AI whose primary job is to support mission commanders, but which can take over in extreme situations, such as when the number of incoming targets overwhelms the human.

The developers behind such programs argue that the advantage of using computers instead of humans is not only their greater speed and processing power, but also that they don't come with our human flaws; they do not have so-called "cognitive biases." Because searching through data and then processing it takes too much time, human commanders without such aids have to pick out which data they want to look at and which to ignore. Not only does this inevitably lead them to skip the rest of the information that they don't have time to cover, but humans also tend to give more weight in their decisions to the information they see first, even if it is not representative of the whole. The result is what is called "satisficing": they tend to come out with a satisfactory answer, though not the optimal answer. One Air Force officer planning air strikes in the Middle East, for example, described to me how each morning he received a "three inch deep" folder of printouts with that night's intelligence data, which he could only quickly skim through before he had to start assigning missions. "A lot of data is falling on the floor."
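A toy sketch of that difference (invented numbers, just to show the shape of "satisficing" versus an exhaustive search over the same data):

```python
import random

random.seed(0)
# 200 hypothetical intelligence reports; each value is how useful it turns out to be.
reports = [random.random() for _ in range(200)]

def satisfice(options, good_enough=0.8):
    """A time-pressed human: act on the first option that clears the threshold."""
    for i, value in enumerate(options):
        if value >= good_enough:
            return i, value             # everything after this "falls on the floor"
    return 0, options[0]

def optimize(options):
    """A machine with enough processing time: look at everything, pick the best."""
    best_i = max(range(len(options)), key=lambda i: options[i])
    return best_i, options[best_i]

print("Satisficed pick:", satisfice(reports))
print("Optimal pick:   ", optimize(reports))
```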

The first issue that such artificial decision systems raise is, of course, that they are how robots invariably take over the world in movies like The Terminator or stories like Harlan Ellison's "I Have No Mouth, and I Must Scream."

But machine intelligence may not be the perfect match for the realm of war, for the very reason that it remains a human realm, even with machines fighting in it. It may seem just like a game of chess to some, but war doesn't have a finite set of possible actions and a quantifiable logic of 0s and 1s. Instead, as one writer put it, "In war, as in life, spontaneity still prevails over programming."

It also raises interesting questions of law. What happens when a human commander doesn't listen to the advice of their AI? If they get it right, we will likely pat them on the back and congratulate them for going with their instinct. But Air Force Major General Charles Dunlap describes how "there is a legal and moral duty," as outlined in the laws of war, to "take all feasible precautions" to prevent civilian casualties. This legal understanding, he explains, becomes much more complex with the unmanned systems and battle management AI that are growing more sophisticated, including allowing computer simulations and modeling before the actual fight. "What if a commander chooses a course of action outside the model that results in a higher number of civilian casualties?" By not listening to the AI, the commander has ignored a duty to take feasible precautions and thus committed a potential war crime. On the other hand, to punish an officer for this would be placing more legal trust in the judgment of the computer than in the human being actually at war.

Do you own a robot?

Yes, we have a Roomba. It and my cat seem to have a love-hate relationship, as they chase each other around the room. This was actually the opening to a chapter that ended up on the cutting room floor. Neither has forgiven me yet for being left out of the book.

Given the number of references to various science fiction works, it's clear that you're a fan of the genre - any favorites that you'd like to pass along?

Most definitely! That was one of the best aspects of the research process, to go around interviewing greats like Orson Scott Card and Greg Bear about where they saw this all headed. That was an exceptionally fun chapter to write.

But there are way too many to say "favorites." So why don't I identify what I like the very least: Ewoks. As I mentioned above, they are the ruination of all that is good. The only positive I can take away from their appearance in Return of the Jedi is that the bits of the Death Star raining down into its atmosphere likely caused a nuclear winter on Endor, ending the scourge of those cuddly little rascals.

Send an email to Andrew Liptak, the author of this post, at liptakaa@gmail.com.
Edit: added original text from source in case it goes away.
People tend to look at you a little strangely when they know you stuff voodoo dolls full of Ex-Lax.