Head Of Robotics Company Warns Of Dangers Of Using AI In Conflicts

File source: //commons.wikimedia.org/wiki/File:Military_robot_being_prepared_to_inspect_a_bomb.jpg

(Tea Party 247) – For anyone who has ever seen a movie about robots, it’s pretty clear why governments shouldn’t use robots and other AI technology to wage warfare. While those movies are fiction, there are people within the AI industry warning about the very real dangers of using AI in this way.

NewsWars reports:

In November, a report by Army Times detailed how the US military plans to enhance the capabilities of soldiers via a human-machine fusion, including connecting human brains to computers in order to control vehicles.

Rich Walker, the head of Shadow Robot Company, warned of catastrophic consequences for mankind if artificial intelligence is used in military conflicts.

In an interview with the Daily Express, he claimed that organizations that are studying the use of autonomous weapons systems in conflicts say that using AI in these circumstances is a bad idea. “The positives of doing it are so small compared to the huge stack of negatives, so really we shouldn’t be doing this”, Walker cited the organizations as saying.

He said that the international community should treat this technology the same way it treats other dangerous technologies that people resort to in desperate situations. “We don’t allow people to use chemical weapons in war, we don’t allow people to use biological weapons and the use of land mines and cluster mines is heavily regulated or controlled”, Walker explained.

He confessed that he is against the American model of testing weapons, which he described as “let us see what goes wrong and what we can do about this”. “Overall with AI and warfare, we can see some of the things that could go wrong so let us stop them from ever happening if we can”, Walker said.

In September, Microsoft President Brad Smith said that the world needs a global convention on the use of autonomous weapons systems and stressed that robots should “not be allowed to decide on their own to engage in combat and who to kill”.

It’s an excellent point: it is simply not accepted to use chemical or biological weapons in warfare, so why should AI weapons be any different? If industry leaders are cautioning against the use of this kind of technology in conflicts, it seems we should be heeding those warnings. The United States Army, however, seems gung-ho to test these systems out despite them.

What makes anyone think we can control artificial intelligence any better than we can control our own smart devices? Sure, it’s great getting a new phone, laptop, or tablet, but after six months even the latest technology starts to malfunction in strange ways and deteriorate rapidly. All electronic devices are subject to misfires and glitches. How many times have you stood at work wrestling with the copy machine? And yet we think we can somehow get technology to perform perfectly in conflict situations.

If we regular Joes recognize this, how much more seriously should we take the industry experts and leaders who are telling us that using AI in warfare is a downright bad and dangerous idea?

Let’s hope the people in charge of making these decisions within the US Government are reasonable enough to heed these warnings before it’s too late.


  1. It is stupid silliness to make laws that confine war. War is about winning, and winning is all about who has the biggest, baddest gun, period. Are we to go back to simple hand fighting? No knives, no guns, no baseball bats, no bricks or stones? War is about using the biggest and baddest weapon available to win. Otherwise it is just plain foolishness.
    If anyone disagrees, then please, please tell all of us a better way of winning any war.

  2. We Americans must always have the most advanced technology, all the time. The day our many enemies have better weapons, they are coming our way.

