A Timely Collaboration On AI

By D.C. PATHAK

NEW DELHI, (IANS) – Of all the matters validating strategic cooperation between India and the US from a long-term perspective, perhaps the most important was the pledge of common resolve by President Joe Biden and Prime Minister Narendra Modi to uphold commitments on safety, security, and trust with regard to Artificial Intelligence (AI).

The US is already working with seven leading AI companies, including Google, Microsoft, Amazon, and Meta, to ensure that AI applications are developed as safe and trustworthy instruments for the benefit of humanity at large.

Science Advisor to President Biden, Arati Prabhakar, announced that Indo-US cooperation would boost the ability to deal with AI’s harms and start using it for good.

There is an implied acknowledgment here of the reality that Information Technology can be used with equal effectiveness as a weapon of combat in ‘information warfare’ and other covert operations, and as a means of spreading subversion and radicalization.

The potential for misuse includes malware injection, data manipulation, forgery, cyber-attacks, and terrorism. On the other hand, AI-powered cyber security solutions, working in coordination with human intelligence, can be extremely useful, particularly when dealing with large amounts of data.

AI can analyze this data to find patterns and anomalies, and possibly detect the modus operandi of an adversary’s operation.
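
To make that idea concrete, the following is a minimal sketch of the kind of anomaly detection described here: surfacing unusual records from a large log of network events for a human analyst to review. The synthetic features and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not anything prescribed in the article.

```python
# Minimal sketch: flagging anomalous network events in a large log.
# The features and the choice of IsolationForest are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic: (bytes transferred, requests per minute)
normal = rng.normal(loc=[500, 20], scale=[50, 5], size=(10_000, 2))

# A handful of unusual events an analyst would want surfaced
suspicious = np.array([[5_000, 300], [4_500, 250], [6_000, 400]])

events = np.vstack([normal, suspicious])

# Isolation Forest isolates outliers without needing labeled attack data
model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(events)  # -1 = anomaly, 1 = normal

flagged = events[labels == -1]
print(f"Flagged {len(flagged)} of {len(events)} events for human review")
```

The point of such a pipeline is triage: the machine narrows millions of events down to a handful, and human intelligence then judges what they mean.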

A sobering thought is that 90 percent of all the digital data in the world has been created in the last two years. Quite apart from the speed at which information is generated, the protection of personal data is the emerging challenge.

Smart computer systems are becoming increasingly adept at observing and interpreting what we as people do, including the skills of ‘looking’, ‘listening’, and ‘speaking’. They learn to discover patterns and rules from huge amounts of data, which can even give them an upper hand in some areas of human activity.

AI systems are faster, never tire, and have a built-in capacity to learn from examples. They are better at recognizing art forgery, at detecting dementia before a medical specialist would consider that diagnosis, and at predicting diabetes.

The predictive value of AI is very extensive within the input-output paradigm that remains its defining feature.

Amazon is said to have taken to ‘predictive shipping’, whereby it would be able to send you a package before you even know you want it. AI does appear to be overriding the limitations of the input-output principle while creating new products and services.

An area of concern regarding AI is that if ‘automated decision-making systems’ are fed discriminatory data, they will reproduce the bias present in the input and in the choice of algorithm, yet falsely inspire greater confidence because of the human tendency to consider such systems trustworthy. This bias can come into play in ‘predictive policing’, where vulnerable sections of society could face an undeserved disadvantage on account of a data set contaminated by hostility of a certain kind on the part of the data providers. At the same time, an AI company can use its resources to produce clearly defined profiles of people, with great precision, which can be used for political purposes.
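
A toy sketch of that mechanism follows: a model trained on records in which one neighbourhood was patrolled more heavily learns to rate that neighbourhood as riskier, even though the underlying behaviour is identical everywhere. The data, the neighbourhood labels, and the logistic-regression choice are all hypothetical, chosen only to show how skewed inputs become skewed outputs.

```python
# Toy illustration: a classifier trained on skewed historical records
# reproduces that skew in its predictions. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5_000

# Feature 0: neighbourhood indicator (1 = heavily patrolled area)
# Feature 1: an unrelated attribute, distributed identically in both areas
neighbourhood = rng.integers(0, 2, size=n)
other = rng.normal(size=n)
X = np.column_stack([neighbourhood, other])

# Historical labels: the true incident rate is the same in both areas,
# but area 1 was patrolled more, so more incidents were *recorded* there.
true_rate = 0.05
recording_rate = np.where(neighbourhood == 1, 0.9, 0.3)
incident = rng.random(n) < true_rate
y = (incident & (rng.random(n) < recording_rate)).astype(int)

model = LogisticRegression().fit(X, y)

# The model predicts higher risk for area 1 despite identical behaviour
for area in (0, 1):
    p = model.predict_proba([[area, 0.0]])[0, 1]
    print(f"Predicted risk, neighbourhood {area}: {p:.3f}")
```

The disparity in the output comes entirely from how the records were collected, which is precisely why a contaminated data set translates into an undeserved disadvantage for one group.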

It is a measure of the apprehensions about the possible misuse of AI that governments across the world are already seized of the issue of putting in place laws and restrictions to regulate AI operations.

The Telecom Regulatory Authority of India (TRAI) has recommended the creation of an independent statutory body to ensure the right development of AI across sectors. It wanted the adoption of an ethical code by both public and private entities.

Ethical use of data has been flagged by TRAI as a major concern for the government as well as corporate entities. It must be understood that AI-powered national security systems run the risk of hacking or manipulation by the adversary, with disastrous consequences. AI is effectively used in rockets, missiles, aircraft carriers, naval assets, and other automated defense systems. Creators of AI need to know that the new technology could also be used by the enemy for indoctrinating young minds and raising agents of terror, including ‘lone wolves’. On the other hand, AI-based systems can be used proactively to detect whether a website or email is a phishing trap. In short, the inevitable use of AI brings its own challenges spanning the ethical and regulatory realms.
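
As a small illustration of the defensive use mentioned above, the sketch below scores a URL against a few common phishing indicators. The specific heuristics and the threshold are hypothetical examples of the signals such systems rely on, not a production detector; real systems would combine or replace these rules with models trained on large sets of labelled links.

```python
# Illustrative phishing-URL scorer using simple heuristic signals.
# The heuristics and threshold are hypothetical, for demonstration only.
from urllib.parse import urlparse

SUSPICIOUS_WORDS = ("login", "verify", "update", "secure", "account")

def phishing_score(url: str) -> int:
    """Count crude indicators often associated with phishing links."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    score = 0
    score += host.count(".") > 3                  # long subdomain chains
    score += host.replace(".", "").isdigit()      # raw IP address as host
    score += "@" in url                           # credential trick in URL
    score += len(url) > 90                        # unusually long URL
    score += any(w in url.lower() for w in SUSPICIOUS_WORDS)
    return score

for link in (
    "https://example.com/docs",
    "http://192.168.10.12/secure-login/verify?account=1",
):
    flag = "suspicious" if phishing_score(link) >= 2 else "likely benign"
    print(f"{link} -> {flag}")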

Major powers like the US and China are investing heavily in creating AI-based systems in their quest to maintain a military lead. AI is being used to prepare for the battlefield of the future.

Meanwhile, AI’s wide applicability has permeated human lives across almost every sector, from services built on voice assistants like Alexa and Siri and on OTT platforms, to health care, agriculture, climate change, and finance. However, its immense potential in security and defense is what is attracting the attention of policymakers and defense analysts.

Intelligence, Surveillance and Reconnaissance, cyber security, military logistics, and in particular Lethal Autonomous Weapons Systems have acquired newfound importance because of AI, and so have image classification from drone footage and geospatial data analysis.

In the military domain, an area of concern is that AI is providing new autonomous and affordable capabilities to a wide range of actors. AI has given weak states and non-state actors more options to enhance their capabilities and, in the process, has strengthened the possibilities of asymmetric warfare. Enthusiasm for AI development, premised on its potential positives, should not make people oblivious to its negative side, including the danger posed to national security itself.

The risk of AI chatbots influencing vulnerable young minds, including the neurodivergent, towards terrorism is real. Greater transparency must be demanded of AI technology companies, including the identification of the personnel responsible for checking the guardrails. These personnel could themselves become a source of threat on account of some vulnerability of their own; they should rightly be kept within the purview of the kind of functioning internal vigilance system that all sensitive organizations are expected to have.

Advancement in AI is expressed mainly in ‘machine learning’, which can enable a high degree of automation in otherwise labor-intensive activities such as satellite imagery analysis and cyber defense.

AI will affect national security broadly even as it drives military and information superiority, because the adversary could be using the same AI capabilities to damage the other side.

(Pathak is a former Director of the Intelligence Bureau.)
