With the rapid development and proliferation of autonomous weapons, machines are beginning to replace humans on the battlefield. Some military and robotics experts predict that "killer robots" – fully autonomous weapons that could select and attack targets without human intervention – could be developed within 20 to 30 years.
At present, military officials generally say that humans will retain some level of oversight over decisions to use lethal force, but their statements often leave open the possibility that robots could one day make those choices on their own.
Fully autonomous weapons, also known as "killer robots," would be able to select and engage targets without human intervention. Precursors to these weapons, such as armed drones, are being developed and deployed by countries including China, Israel, South Korea, Russia, the United Kingdom, and the United States.
It is doubtful that fully autonomous weapons could meet the standards of international humanitarian law, including the rules of distinction, proportionality, and military necessity, and they would threaten the fundamental right to life and the principle of human dignity. Human Rights Watch therefore calls for a ban on the development, production, and use of fully autonomous weapons.
Autonomous robots lack human judgment and the ability to understand context. These qualities are necessary for making complex ethical choices on a dynamic battlefield, for distinguishing adequately between soldiers and civilians, and for assessing the proportionality of an attack. As a result, fully autonomous weapons could not meet the requirements of the laws of war.
Replacing human troops with machines could make the decision to go to war easier, which would shift the burden of armed conflict further onto civilians. The use of fully autonomous weapons would also create an accountability gap, because it is unclear who would be held responsible for a robot's actions: the commander, the programmer, the manufacturer, or the robot itself. Without accountability, these parties would have less incentive to ensure that robots do not endanger civilians, and victims would be left without meaningful redress for the harm they suffer.
There is an urgent need for a preemptive ban on the development, production, and use of fully autonomous weapons – weapons that would operate without meaningful human control. This could be achieved through an international treaty as well as through national laws and other measures.
The Campaign to Stop Killer Robots urges all countries to consider and publicly articulate their policy on fully autonomous weapons, particularly with respect to the ethical, legal, policy, technical, and other concerns that have been raised.
Robots are nothing without humans
A campaign to ban autonomous weapons risks being a distraction: states should focus on restraining people with deadly intent rather than on the weapons themselves. A narrower goal, however, is achievable.
What we need instead is to understand how humans use symbols and algorithms to shape the lethal decisions of robots, and how those instructions are written and removed by the humans who remain responsible for them.
Without humans, a robot can do nothing on its own, despite the fears raised by some supporters of a complete ban.
Could we instead pursue verifiable, proactive technological safeguards against the potential abuse of artificial intelligence? This shift in focus could yield real benefits of its own.
Imagine if we could automatically recognize protected symbols, such as the Red Cross and Red Crescent emblems, and build that recognition into a "killer robot" so that it protects the people, vehicles, ships, and buildings carrying them, reducing casualties and minimizing unwarranted damage.
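To make the idea concrete, here is a minimal sketch in Python using OpenCV, assuming a simple colour-and-shape heuristic for a red cross on a light background. The HSV thresholds, the area cutoff, the fill-ratio test, and the input file scene.jpg are illustrative assumptions, not a validated emblem detector.

```python
# Illustrative sketch only: flag frames that may contain a red cross emblem
# so a downstream system could mark the object as "do not engage".
# The thresholds and shape test below are rough assumptions.
import cv2
import numpy as np

def may_contain_red_cross(image_bgr, min_area=400):
    """Return True if a red, roughly cross-shaped region is found."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

    # Red wraps around the hue axis, so combine two hue bands.
    lower = cv2.inRange(hsv, np.array([0, 120, 80]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 120, 80]), np.array([180, 255, 255]))
    red_mask = cv2.bitwise_or(lower, upper)

    contours, _ = cv2.findContours(red_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        area = cv2.contourArea(contour)
        if area < min_area:
            continue
        x, y, w, h = cv2.boundingRect(contour)
        # A Greek cross of five equal squares fills about 5/9 of its
        # near-square bounding box; use that as a crude shape cue.
        fill_ratio = area / float(w * h)
        aspect = w / float(h)
        if 0.45 <= fill_ratio <= 0.65 and 0.7 <= aspect <= 1.3:
            return True
    return False

if __name__ == "__main__":
    frame = cv2.imread("scene.jpg")  # hypothetical input frame
    if frame is not None and may_contain_red_cross(frame):
        print("Protected emblem suspected: mark as DO NOT ENGAGE")
```

A real safeguard would need far more robust recognition, and protection against misuse of the emblem, but the point stands: protective constraints can be engineered and verified just as lethal functions can.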