How is AI a two-edged sword?
Artificial intelligence can be used to mount harmful digital threats, but it can also be a useful tool for combating those threats and improving security systems. Because it empowers both defenders and attackers in the digital realm, AI is a two-edged sword: enormous potential for growth on one side, a serious threat on the other.
While AI can give the military substantial advantages, former Indian Army chief Gen. M.M. Naravane (retd) argues that fully autonomous weapons pose a significant risk.
How AI in defense is a two-edged sword?

Prime Minister Narendra Modi announced at a joint session of the US Congress in June 2023 that the future is AI: ‘Artificial Intelligence’ and ‘America India’. At the annual board meeting of the US-India Strategic Partnership Forum in New Delhi in October 2023, both chairman emeritus John Chambers and CEO Dr Mukesh Aghi emphasized the importance of AI in Indo-US relations. AI is the buzzword reverberating worldwide, from government circles to boardrooms. It will be the 21st century’s equivalent of the silicon chip, penetrating every facet of our lives.
AI has long been a subject of discussion in the defense sector. It has the potential to transform how we approach warfare, from training and surveillance to logistics, cybersecurity, unmanned aerial vehicles (UAVs), advanced military weaponry such as Lethal Autonomous Weapon Systems (LAWS), autonomous combat vehicles, and robotics. AI-powered military systems can process large volumes of data, allowing the armed services to make more informed decisions. However, the use of AI in the military carries major risks, as it can be used to create autonomous weapons that operate without human intervention. The employment of such weapons may have unintended repercussions and endanger human lives.
The use of AI in the military is a two-edged sword, and it is critical to ensure that it is handled responsibly. The development of AI-powered weaponry should be regulated to guarantee that human lives are not endangered. In the world of cybersecurity, AI is likewise a mixed blessing: while it can help detect and prevent cyber attacks, attackers can also use it to evade detection and launch more sophisticated operations. Like military systems, AI-based cybersecurity solutions must safeguard users and prevent cyber attacks, particularly on sensor-shooter linkages and weapons platforms.
The employment of AI in the military can offer numerous benefits:
Improved decision-making: AI systems can process large volumes of data rapidly, helping the armed services make better-informed decisions.
Enhanced surveillance: Artificial intelligence can be utilized to create advanced surveillance systems capable of detecting and tracking enemy movements and activities.
Improved logistics: It can optimize logistics and supply chain management, ensuring that the correct resources are available when and where they are needed.
Autonomous vehicles: Artificial intelligence can be used to create autonomous combat vehicles and drones, reducing the risk of human casualties.
Cybersecurity: It can identify and prevent cyber threats, protecting vital military information.
Many of these advantages, such as improved decision-making, logistics, training, and cybersecurity, would apply across many areas of life. When AI is applied to lethal kinetic systems, however, the picture becomes more complicated. Ethical safeguards will be needed to ensure that weapon systems operate in accordance with humane ideals and values; what those values should be, though, is yet to be determined and may differ from country to country.
Whether lethal systems should be fully autonomous, or keep a human in or on the loop, is an equally vexing question. A purely managerial decision that is largely reversible can be left to a machine without oversight; launching missiles at an enemy target is a very different matter. Deployed in a strictly defensive role, such as shooting down incoming missiles as Israel’s Iron Dome does, an autonomous system may well be a legitimate use of AI.
However, considerably more attention must be devoted to its application against human targets. AI is not flawless and is prone to biases, whether introduced by accident or by design. AI-based algorithms used by many companies for hiring or healthcare have revealed evident prejudices based on gender, color, and even accent. The unintentional incorporation of such biases into military AI could be deadly. Moreover, we have not yet reached the point where human emotions like empathy or compassion can be built into a machine, a so-called ‘Emotional AI’. Artificial intelligence will and must be applied in military systems, particularly if it can limit collateral damage and shorten combat. However, as the Gaza conflict has shown, what constitutes acceptable collateral damage is open to interpretation.
With the development of artificial intelligence for military applications, international conventions governing AI use, similar to the Convention on Certain Conventional Weapons, are required. One key component of such a convention should be the requirement for a human in the loop who bears moral responsibility for the final decision. The ‘Stop Killer Robots’ movement aims to accomplish exactly that, advocating legislation to limit the degree of autonomy granted to AI-powered weapon systems. India is making major efforts to integrate AI into its armed forces, but it must do so ethically and responsibly. At the end of the day, responsibility for the taking of a human life cannot rest with an algorithm.