Published:  12:10 AM, 21 June 2020

The malicious use of artificial intelligence: Some thoughts

Martin Rees, in his book On the Future: Prospects for Humanity, noted that 'Artificial intelligence earns its advantage over humans through its ability to analyse vast volumes of data and rapidly manipulate and respond to complex input.' But this capability can also create existential risk. For example, some have argued that if the particle accelerator at CERN in Geneva were to generate unprecedented levels of energy, should we not worry about the consequences?

Yuval Noah Harari, in his book Homo Deus: A Brief History of Tomorrow, mentioned that some countries can use 'logic bombs to shut down the power of California, blow up refineries in Texas and cause trains to collide in Michigan'. 'Logic bombs' are malicious software codes which can be planted in advance and triggered from a distance. Rees argued that 'we have zero grounds for confidence that we can survive the worst that future technologies could bring.' Similar levels of threat can be assumed at the social and psychological levels of humankind.

In 2019, an international group of experts set up collaborative joint research, conferences, and scientific seminars on threats to International Psychological Security (IPS) through the malicious use of artificial intelligence (MUAI). The group members formed a panel, "The Malicious Use of Artificial Intelligence and International Psychological Security", at the Second International Conference on Information and Communication in the Digital Age: Explicit and Implicit Impacts. A monograph titled Strategic Communication in EU-Russia Relations: Tensions, Challenges and Opportunities (edited by Evgeny Pashentsev) has been prepared for publication by Palgrave Macmillan.

Another book, titled Terrorism and Advanced Technologies in Information and Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat, was developed with the participation of 18 experts from 11 countries (edited by Darya Bazarkina, Evgeny Pashentsev, and Greg Simons, Nova Science Publishers, 2020). It discussed different aspects of the MUAI and indicated some new opportunities to counter it. This essay presents some of the research findings from the international group of experts from Russia, Cuba and Italy, focusing on the growing risks from the MUAI during the coronavirus pandemic and their implications for political stability and psychological security.

In the paper titled 'Coronavirus pandemic and the rising threats of MUAI to national and international psychological security', Evgeny N. Pashentsev, a researcher from Saint Petersburg State University and the Diplomatic Academy of Russia, argued that the capabilities of artificial intelligence (AI) are growing at an unprecedented rate.

Given the growing capabilities of AI, the high level of Internet penetration in the majority of countries, the overall low level of protection (from legal to technical) against the MUAI, and the high level of socio-political stratification and corruption, one can assume that the coronavirus pandemic will increase threats to psychological security. He argued that the nature and level of these threats will be determined by the dynamics of AI technology development and its practical implementation, by the further growth of socio-political tensions, driven both by internal factors and by growing global crises and geopolitical rivalries, and by the practical readiness of state and non-state actors in different regions to counter the MUAI.

Darya Bazarkina, a researcher from Saint Petersburg State University, Russia, in her paper titled 'New Reality: A wide range of MUAI against psychological security', suggests that 'The MUAI represents a wide range of threats to IPS. For example, AI can be used by fraudsters to write and send phishing messages that people will be unable to recognize'. Darya concluded that 'The security of humanity is its common cause. This simple truth is more important than ever in the comprehensive analysis of threats to national and international psychological security, especially in today's very difficult and dangerous international environment'.

In his paper 'AI Threat Escalates', Alexander Raikov, a researcher from Russian Technological University, suggests that computing and information transmission are accelerating and virtual collaboration is developing. But AI's ability to realize the human capacity for consciousness has not progressed. It is indeed a fact that human consciousness is still ahead of AI, but AI is gradually progressing through artificial neural network techniques, which are modelled on the neurons of the human brain. Alexander eloquently argued that 'everything is changing in the world. Things, words, concepts, thoughts, and states of particles are in motion.

This movement has not only a logical, but also a relativistic, non-causal, quantum, and thermodynamic nature. AI has not yet mastered these gifts of nature. Perhaps this is why we are experiencing a crisis not only in economics, finance, and trust but also in physics, virology, and so on.' Alexander further suggests that the era of artificial general intelligence (AGI) will come and the question of the malicious use of AGI will arise with new force, though he thought that 'the risks will depend on to whose hands it falls. These risks will be disproportionately high.'

Arjie Antinori, a researcher from Sapienza University of Rome, Italy, in his paper 'Mediamorphosis of Terrorism and MUAI', suggests that 'The key elements of that process are the rise, spread, and use of new and social media instead of old media, the many-to-many communication model that substitutes the one-to-many model based on a hierarchical relationship between the producer of the message and people, such as consumers.

As a result, the "prosumer", contextually producer and consumer, is the main "new" actor of a cyberspace populated by user-generated content.' Arjie identified challenges such as threats quickly moving from the cyber domain to the cyber-social domain, and then to the social domain, and thought that the potential malicious use of AI-based technology highlights the high risk of exploiting the vulnerabilities of individuals in such a way as to deeply compromise the social ecosystem.

Arjie suggests adopting a multi-level strategy, based on a comprehensive approach, combining civilian, educational, political, and security instruments to prevent young people from entering any potential radicalization process. In addition, Arjie suggests that 'it is necessary to develop specific "algor-ethics" that comply with human rights standards'.

Raynel Batista, a researcher from the University of Informatics Sciences in Havana, Cuba, in his paper titled 'Cross-cultural approach to MUAI in Latin American Regional Balance', suggests that 'Safe AI requires cultural intelligence and changes in cultural codes, behaviors, and fields of knowledge based on a sociocybernetic approach to analyze the phenomena of societal transformation and historical change of knowledge cultures'. However, Raynel raised a question: if technology and culture together create a circle of influence or circles of sustainability, could a cross-cultural competency be the same for the global distribution of power? This indeed requires further investigation.

Kaleria Kramar, a researcher from the International Centre for Social and Political Studies, Russia, in her paper titled 'Prerequisites for the Potential Threats of MUAI for Psychological Security in Mexico', mentioned that Mexico ranks 32nd in the Government Artificial Intelligence Readiness Index, and that the most likely driver of innovation will be the private sector, including foreign companies that are often located in Mexico but focused on the U.S. market.

Kaleria suggests that the threat of MUAI in Mexico is a real possibility. Kaleria also opined that 'due to economic and social problems and the upcoming negative effects of the pandemic, on the one hand, and the low level of media literacy of the population on the other hand, most likely MUAI threats will not be perceived as something urgent or even real in the near future. It is quite probable that in case of their occurrence there will be no ability and sufficient resources to counteract such risks and provide the necessary level of psychological security of the population.'

It is clear from the above research papers that AI is active in many aspects of our society. The AI revolution is pushing the information-based society into a new cognitive era. Safe AI requires cultural intelligence, a multi-level strategy and an understanding of the mediamorphosis of MUAI. It is clear that the risk will depend on to whose hands AI falls. We also need to identify the asocial factors behind MUAI and our ability to resist it. The security of humanity is a common cause. We as a society need to familiarize ourselves with the problems and prospects of AI. We should recognize that the unfamiliar is not the same as the improbable.


The writer is a UK based academic, chartered scientist, environmentalist, columnist and author.

