Low price
CHF 184.00
Print on demand – a copy will be sourced for you.
This handbook focuses on new threats to psychological security posed by the malicious use of AI, and on how AI can be used to counteract such threats. Studies of the malicious use of AI through deepfakes, agenda setting, sentiment analysis, affective computing, and related techniques give a clear picture of the various forms and methods of malicious influence on the human psyche and, through it, on political, economic, and cultural processes and the activities of state and non-state institutions. Separate chapters examine the malicious use of AI in geopolitical confrontation, political campaigns, strategic deception, damage to corporate reputation, and the activities of extremist and terrorist organizations. This unique volume brings together a multidisciplinary range of established scholars and emerging researchers from 11 countries. It is an invaluable resource for students, researchers, and professionals interested in this new and developing field of social practice and knowledge.
Brings together international experts to provide a comprehensive and global perspective on the topic
Explores the methods of the malicious use of AI on the human psyche, and through this on social and political processes
Examines regional and national implications of the malicious use of AI for psychological security
Author
Evgeny Pashentsev is Professor and Leading Researcher at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation, Russia. He is a coordinator of the International Research MUAI Group, and a member of the International Advisory Board of Comunicar, Spain, and the editorial board of the Journal of Political Marketing, USA.
Contents
Chapter 1 Introduction: The Malicious Use of Artificial Intelligence: Growing Threats, Delayed Responses, Evgeny Pashentsev
Part I The Malicious Use of Artificial Intelligence against Psychological Security: Forms and Methods
Chapter 2 General Content and Possible Threat Classifications of the Malicious Use of Artificial Intelligence to Psychological Security, Evgeny Pashentsev
Chapter 3 The Malicious Use of Deepfakes against Psychological Security and Political Stability, Evgeny Pashentsev
Chapter 4 Automating Extremism: Mapping the Affective Roles of Artificial Agents in Online Radicalization, Peter Mantello, Manh-Tung Ho and Lena Podoletz
Chapter 5 Hate Speech in Perception Management Campaigns: New Opportunities of Sentiment Analysis and Affective Computing, Yury Kolotaev
Chapter 6 Malicious Use of Artificial Intelligence through Agenda Setting, Evgeny Pashentsev
Part II Areas of Malicious Use of Artificial Intelligence in the Context of Threats to Psychological Security
Chapter 7 The COVID-19 Pandemic and the Rise of Malicious Use of Artificial Intelligence Threats to National and International Psychological Security, Marta N. Lukacovic and Deborah D. Sellnow-Richmond
Chapter 8 Malicious Use of Artificial Intelligence in Political Campaigns: Challenges for International Psychological Security for the Next Decades, Marius Vacarelu
Chapter 9 Destabilization of Unstable Dynamic Social Equilibriums and the Malicious Use of Artificial Intelligence in High-Tech Strategic Psychological Warfare, Evgeny Pashentsev
Chapter 10 Current and Future Threats of the Malicious Use of Artificial Intelligence by Terrorists: Psychological Aspects, Darya Bazarkina
Chapter 11 Malicious Use of Artificial Intelligence and the Threats to Corporate Reputation in International Business, Erik Vlaeminck
Part III Regional and National Implications of the Malicious Use of Artificial Intelligence and Psychological Security
Chapter 12 Malicious Use of Artificial Intelligence: Risks to Psychological Security in BRICS Countries, Evgeny Pashentsev and Darya Bazarkina
Chapter 13 The Threats and Current Practice of Malicious Use of Artificial Intelligence in the Psychological Area in China, Darya Bazarkina, Ekaterina Mikhalevich, Evgeny Pashentsev, and Darya Matyashova
Chapter 14 Malicious Use of Artificial Intelligence, Uncertainty, and U.S.-China Strategic Mutual Trust, Cuihong Cai and Ruoyang Zhang
Chapter 15 Scenario Analysis of Malicious Use of Artificial Intelligence and Challenges to Psychological Security in India, Arvind Gupta and Aakash Guglani
Chapter 16 Current and Potential Malicious Use of Artificial Intelligence Threats in the Psychological Domain: The Case of Japan, Darya Bazarkina, Yury Kolotaev, Evgeny Pashentsev, and Darya Matyashova
Chapter 17 Geopolitical Competition and the Challenges for the EU of Countering the Malicious Use of Artificial Intelligence, Pierre-Emmanuel Thomann
Chapter 18 Germany: Rising Sociopolitical Controversies and Threats to Psychological Security from the Malicious Use of Artificial Intelligence, Darya Matyashova
Chapter 19 Artificial Intelligence and Deepfakes in Strategic Deception Campaigns: The U.S. and Russian Experiences, Sergei A. Samoilenko and Inna Suvorova
Chapter 20 Malicious Use of Artificial Intelligence and Threats to Psychological Security in Latin America: Common Problems, Current Practice and Prospects, Evgeny Pashentsev and Darya Bazarkina
Chapter 21 Malicious Use of Artificial Intelligence and the Threat to Psychological Security in the Middle East: Aggravation of Political and Social Turbulence, Vitali Romanovski
Part IV Future Horizons: The New Quality of Malicious Use of Artificial Intelligence Threats to Psychological Security
Chapter 22 Malicious Use of Artificial Intelligence in the Metaverse: Possible Threats and Countermeasures, Sergey A. Sebekin and Andrei Kalegin
Chapter 23 Unpredictable Threats from the Malicious Use of Artificial Strong Intelligence, Alexander Raikov
Chapter 24 Prospects for a Qualitative Breakthrough in Artificial Intelligence Development and Possible Models for Social Development: Opportunities and Threats, Evgeny Pashentsev
Chapter 25 Conclusion: Per Aspera Ad Astra, Evgeny Pashentsev