Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Research › peer-review
Deepfakes as the new challenge of national and international psychological security. / Pantserev, Konstantin A.
Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics, ECIAIR 2020. ed. / Florinda Matos. Academic Conferences and Publishing International Limited, 2020. p. 93-99 (Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics, ECIAIR 2020).
TY - GEN
T1 - Deepfakes as the new challenge of national and international psychological security
AU - Pantserev, Konstantin A.
N1 - Funding Information: The author acknowledges Saint-Petersburg State University for the research grant 26520757. Publisher Copyright: © ECIAIR 2020. All rights reserved. Copyright: Copyright 2020 Elsevier B.V., All rights reserved.
PY - 2020
Y1 - 2020
N2 - The contemporary world features the rapid implementation of different technological solutions based on artificial intelligence (AI) algorithms. Nowadays AI-based technologies are used in machine translation systems, medical diagnostics, e-trade, e-education and even in the production of news and information. Voice assistants have already appeared which significantly simplify and accelerate the search for relevant information on the Web. Thus the production of AI-based technologies is considered a key priority in the field of science and technology by all leading countries. At the same time, it is necessary to point out that, while paying great attention to studies in the field of AI, one should not forget the possibility of the malicious use of such technologies, which could cause globally catastrophic consequences. The author suggests discussing this issue using the example of deepfakes, which represent a method of synthesising human images with the aid of appropriate AI algorithms. Deepfakes give any user with basic computer skills the opportunity to create a clone of a well-known figure and manipulate his or her words. Thus we come to a problem: in today’s digital age one cannot even trust one’s own eyes or ears, because even video or audio with the apparent participation of the person portrayed could be false. Undoubtedly this raises misinformation to a qualitatively new level. That is why it seems crucial to consider how false information produced with the aid of advanced technologies can be identified. In this paper the author analyses a wide range of examples of deepfakes in the contemporary information space. He also examines existing methods for identifying deepfakes; how to distinguish fakes that are used for fun, entertainment or self-expression from deepfakes used for malicious purposes; and how to counteract the further distribution of toxic content, which represents a serious threat to both national and international psychological security.
AB - The contemporary world features the rapid implementation of different technological solutions based on artificial intelligence (AI) algorithms. Nowadays AI-based technologies are used in machine translation systems, medical diagnostics, e-trade, e-education and even in the production of news and information. Voice assistants have already appeared which significantly simplify and accelerate the search for relevant information on the Web. Thus the production of AI-based technologies is considered a key priority in the field of science and technology by all leading countries. At the same time, it is necessary to point out that, while paying great attention to studies in the field of AI, one should not forget the possibility of the malicious use of such technologies, which could cause globally catastrophic consequences. The author suggests discussing this issue using the example of deepfakes, which represent a method of synthesising human images with the aid of appropriate AI algorithms. Deepfakes give any user with basic computer skills the opportunity to create a clone of a well-known figure and manipulate his or her words. Thus we come to a problem: in today’s digital age one cannot even trust one’s own eyes or ears, because even video or audio with the apparent participation of the person portrayed could be false. Undoubtedly this raises misinformation to a qualitatively new level. That is why it seems crucial to consider how false information produced with the aid of advanced technologies can be identified. In this paper the author analyses a wide range of examples of deepfakes in the contemporary information space. He also examines existing methods for identifying deepfakes; how to distinguish fakes that are used for fun, entertainment or self-expression from deepfakes used for malicious purposes; and how to counteract the further distribution of toxic content, which represents a serious threat to both national and international psychological security.
KW - Artificial intelligence
KW - Deepfakes
KW - Disinformation
KW - Fakes
KW - Information technologies
KW - National security
KW - Psychological warfare
UR - http://www.scopus.com/inward/record.url?scp=85097810321&partnerID=8YFLogxK
U2 - 10.34190/EAIR.20.003
DO - 10.34190/EAIR.20.003
M3 - Conference contribution
AN - SCOPUS:85097810321
T3 - Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics, ECIAIR 2020
SP - 93
EP - 99
BT - Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics, ECIAIR 2020
A2 - Matos, Florinda
PB - Academic Conferences and Publishing International Limited
T2 - 2nd European Conference on the Impact of Artificial Intelligence and Robotics, ECIAIR 2020
Y2 - 22 October 2020 through 23 October 2020
ER -
ID: 72627350