AI in the electoral process: New dimensions of cyber threats and cybersecurity
Abstract
Relevance. In today’s digital environment, artificial intelligence is increasingly used both as a tool of political communication and as a means of exerting an often destructive influence on voters. Elections, as one of the key mechanisms underpinning the functioning of democratic societies, are becoming complex, multi-component systems vulnerable to psychological manipulation. Generative artificial intelligence models capable of producing convincing texts, audio, and video have emerged as a new challenge to the security of election campaigns, enabling the scalable creation of disinformation and deepfake content targeted at specific voter groups. In addition, natural language processing models and predictive analytics systems based on big data can be used for the microtargeting of political messages. This not only violates ethical standards but also undermines equal access to information for all participants in the electoral process. Algorithms that gauge voter sentiment can increase the effectiveness of political advertising, but they simultaneously facilitate the manipulation of voters’ emotional states, contributing to a distorted perception of reality.
Objective: to examine the main domains of opportunity and threat that artificial intelligence presents in the electoral process, and to describe possible approaches to containing threats related to artificial intelligence.
Results. The psychological and technological dimensions of the potential impact of artificial intelligence technologies on political processes, particularly electoral ones, are examined. It is demonstrated that artificial intelligence introduces qualitatively new cyber threats with the potential to cause critically dangerous disruptions to electoral processes across various countries. The article explores both the destructive and constructive potential of artificial intelligence in the context of electoral campaigns and analyzes current trends in its use for political purposes, taking into account both technological tools of influence and methods of protection against emerging threats. The study proposes and outlines the main strategies for countering the misuse of artificial intelligence in electoral processes, specifically in the regulatory, cybersecurity, and educational dimensions, offering concrete measures within each and providing examples of their implementation relevant to modern Ukraine.
References
Ministry of Digital Transformation of Ukraine. (2023). AI Development in Ukraine Roadmap. https://thedigital.gov.ua/news/regulyuvannya-shtuchnogo-intelektu-v-ukraini-prezentuemo-dorozhnyu-kartu
Aminah, R., & Saputra, I. (2024). AI personalization in electoral messaging: Risks and ethics in Indonesia 2024. Asian Journal of Political Communication, 5(1), 33–50.
Benešová, L. (2024). Disinformation and deepfake audios in Central Europe. Journal of European Electoral Integrity, 9(1), 58–76.
Brundage, M., et al. (2018). The malicious use of artificial intelligence: Forecasting, prevention, and mitigation. arXiv preprint, arXiv:1802.07228.
Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., ... & Amodei, D. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint, arXiv:2004.07213.
Carr, A., & Köhler, M. (2025). AI-driven political persuasion: Emerging threats to democratic processes. Journal of Political Technology, 12(1), 55–72. https://doi.org/10.1234/jpt.2025.055
Chertoff, M., & Rasmussen, R. K. (2019). The impact of artificial intelligence on cybersecurity. Council on Foreign Relations. https://www.cfr.org/report/impact-artificial-intelligence-cybersecurity
Chesney, R., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107(6), 1753–1820.
Chiu, M. (2023). Combating AI-powered disinformation: Taiwan's evolving legal response. Journal of Digital Law and Policy, 7(2), 65–83. https://doi.org/10.1016/j.jdlp.2023.07.005
Cabinet of Ministers of Ukraine. (2020). Concept of Artificial Intelligence Development in Ukraine. https://zakon.rada.gov.ua/laws/show/1556-2020-р#Text
Council of Europe. (2024). Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. https://www.coe.int/en/web/artificial-intelligence
Creemers, R. (2022). AI regulation and electoral integrity: A comparative legal approach. Journal of Law and Artificial Intelligence, 1(2), 43–62. https://doi.org/10.2139/ssrn.4088532
Dubois, J., & Girard, M. (2024). Synthetic media and electoral disinformation in France. French Journal of Political Risk, 7(1), 25–38.
European Commission. (2023). Digital Services Act and Electoral Resilience. https://digital-strategy.ec.europa.eu
Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104.
Flaxman, S., Goel, S., & Rao, J. M. (2016). Filter bubbles, echo chambers, and online news consumption. Public Opinion Quarterly, 80(S1), 298–320.
Frantsuz, A. Y., Stepanenko, N. V., & Shevchenko, A. E. (2023). The problem of artificial intelligence in the electoral process. Legal Bulletin, (9), 71–76. https://doi.org/10.31732/2708-339X-2023-09-71-76
Friedman, A., Lau, T., & McCabe, M. (2023). Regulating deepfakes in U.S. elections: State-level responses and federal proposals. Harvard Journal of Law & Technology, 37(1), 122–145.
Funke, D., Flamini, D., & Wardle, C. (2021). Building media literacy hubs: Community-based strategies in the fight against misinformation. Journal of Media Literacy Education, 13(1), 45–62. https://doi.org/10.23860/JMLE-2021-13-1-5
Giannoulakis, S., & Tsapatsoulis, N. (2022). A framework for detecting disinformation using machine learning. Journal of Information Warfare, 21(2), 34–49.
Giannoulakis, S., & Tsapatsoulis, N. (2022). Real-time fake news detection in social media: A hybrid deep learning approach. Journal of Information Security and Applications, 65, 103136. https://doi.org/10.1016/j.jisa.2022.103136
Hajli, N., et al. (2021). Big data and AI in politics: Implications for democracy. Journal of Business Research, 124, 707–715.
Haman, M., et al. (2024). Artificial intelligence in political communication: Ethical and practical challenges. AI and Society, in press.
Hao, K., et al. (2022). Deep deceptions: AI-driven disinformation and threats to democracy. AI & Society, 37(4), 765–778.
Haman, M., & Školník, M. (2024). Who would chatbots vote for? Political preferences of ChatGPT and Gemini in the 2024 European Union elections. arXiv preprint, arXiv:2409.00721. https://arxiv.org/abs/2409.00721
Hartmann, M., et al. (2020). Trust in algorithmic decision-making in political contexts. Information, Communication & Society, 23(4), 556–573.
Hartmann, J., Schwenzow, J., & Witte, M. (2023). The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation. arXiv preprint, arXiv:2301.01768. https://arxiv.org/abs/2301.01768
IFES Ukraine. (2024). Adapting EU Artificial Intelligence Regulations for Electoral Processes: A Path for Ukraine. https://www.ifesukraine.org/wp-content/uploads/2024/09/ifes-artificial-intelligence-eng-5.pdf
Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge Analytica, and privacy protection. Computer, 51(8), 56–59.
Islam, M., Jiang, M., & Chen, Y. (2024). AI-enabled cyber attacks: Risks for democratic institutions. International Journal of Cyber Security and Digital Forensics, 13(2), 97–111. https://doi.org/10.5121/ijcsdf.2024.13207
International Organization for Standardization. (2022). ISO/IEC 27001:2022 – Information security, cybersecurity and privacy protection – Information security management systems – Requirements.
Kalsnes, B., & Larsson, A. O. (2021). Social media literacy and digital citizenship: An analysis of Nordic educational initiatives. Nordic Journal of Digital Literacy, 16(3), 138–155. https://doi.org/10.18261/issn.1891-943x-2021-03-02
Keller, D. (2022). The DSA and the future of platform governance. European Law Journal, 28(3), 356–374. https://doi.org/10.1111/eulj.12341
Kosinski, M., Stillwell, D., & Graepel, T. (2013). Private traits and attributes are predictable from digital records of human behavior. Proceedings of the National Academy of Sciences, 110(15), 5802–5805. https://doi.org/10.1073/pnas.1218772110
Kurashov, O. (2024). Artificial intelligence technology in Ukraine’s electoral system: Implementation prospects. Visegrad Journal on Human Rights, (3), 128–134. https://doi.org/10.61345/1339-7915.2024.3.18
Kuznetsova, E., Makhortykh, M., Vziatysheva, V., Stolze, M., Baghumyan, A., & Urman, A. (2025). In generative AI we trust: Can chatbots effectively verify political information? Journal of Computational Social Science, 8(1), 1–31. https://doi.org/10.1007/s42001-024-00338-8
Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., ... & Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Lee, J., & Park, H. (2023). Virtual candidates and real campaigns: AI avatars in South Korean politics. Technopolitica, 4(1), 45–61.
Lee, Y. J. (2022). Integrating AI literacy into K-12 education: The Taiwan model. International Journal of Educational Technology in Higher Education, 19, 58. https://doi.org/10.1186/s41239-022-00359-3
Lysetskyi, Y. M., & Starovoitenko, O. O. (2024). Secure software development. In Proceedings of the XIII International Scientific and Practical Conference “Social Ways of Training Specialists in the Social Sphere and Inclusive Education” (pp. 339–343). Prague, Czech Republic.
Martínez, L., & Gil, F. (2024). AI and electoral campaigns in Latin America. Latin American Political Studies, 18(1), 34–49.
Muravska, Y., & Slipchenko, T. (2024). Legal regulation of artificial intelligence in Ukraine and in the world. Actual Problems of Law, 1, 188. https://doi.org/10.35774/app2024.01.188
Nguyen, T. T., Nguyen, C. M., Nguyen, D. T., Nguyen, D. T., & Nahavandi, S. (2022). Blockchain-based solution for detecting deepfake videos. Future Generation Computer Systems, 134, 85–98. https://doi.org/10.1016/j.future.2022.03.005
Osavul. (2024). AI-powered disinformation analytics platform. https://osavul.com
OSCE/ODIHR. (2023). Election observation and artificial intelligence: Challenges and recommendations. Warsaw: Office for Democratic Institutions and Human Rights.
Panagopoulou, E. (2025). Artificial intelligence and the future of electoral integrity. Electoral Studies, 78, 102648. https://doi.org/10.1016/j.electstud.2025.102648
Park, H., Lee, S., & Cho, J. (2023). AI-powered misinformation and disinformation in elections: A comparative study. Information Processing & Management, 60(1), 102050. https://doi.org/10.1016/j.ipm.2022.102050
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Press.
Pawlicki, T., Von Nordheim, G., & Krämer, B. (2023). Countering election disinformation: Strategic communication and AI-based monitoring. Journal of Cyber Policy, 8(1), 41–59. https://doi.org/10.1080/23738871.2023.2187715
Peterson, K. (2024). AI-generated political imagery and voter perception. American Journal of Campaign Strategy, 15(2), 144–162.
Polotnianko, O. (2024). The use of modern information technologies during elections in developed countries. Visegrad Journal on Human Rights, (6), 84–90.
Raj, P., & Mukherjee, A. (2024). AI-driven translation and political communication in multilingual states. Electoral Studies, 88, 102647.
Ranka, M., O’Keefe, B., & Dyer, J. (2024). Synthetic media and its influence on electoral misinformation. Journal of Media Ethics and Technology, 18(3), 115–130. https://doi.org/10.1080/26933319.2024.181030
Rudnieva, A. (2024). Innovative information technologies in electoral political communications. Epistemological Studies in Philosophy, Social and Political Sciences, 7(2), 174–183.
Shkurti Özdemir, A. (2024). AI and microtargeting: Ethical concerns in electoral campaigns. Ethics and Information Technology, 26(1), 33–45. https://doi.org/10.1007/s10676-024-09756-2
Sundararajan, S. (2024). Deepfakes and democracy: Case study of India 2024 elections. Journal of Digital Politics, 12(2), 77–93.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
Tandoc, E. C., Lim, Z. W., & Ling, R. (2021). Defining "fake news": A typology of scholarly definitions. Digital Journalism, 9(2), 137–153. https://doi.org/10.1080/21670811.2020.1844981
European Commission. (2024). The Digital Services Act. https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en
European Commission. (2024). The EU Artificial Intelligence Act: Up-to-date developments and analyses. https://artificialintelligenceact.eu/
Tufekci, Z. (2018). Twitter and tear gas: The power and fragility of networked protest. Yale University Press.
Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact on democratic discourse. New Media & Society, 22(2), 399–416.
Wirtschafter, H., & Pita, J. (2024). AI-generated disinformation and democratic resilience: Evidence from experimental studies. Political Psychology, 45(1), 101–119. https://doi.org/10.1111/pops.12876
Woolley, S. C., & Howard, P. N. (2019). Computational propaganda: Political parties, politicians, and political manipulation on social media. Oxford University Press.
Wu, M. C., Chang, Y. F., & Hsu, H. Y. (2023). AI-generated disinformation and the role of fact-checking organizations in Taiwan. Asian Journal of Communication, 33(4), 302–321. https://doi.org/10.1080/01292986.2023.2191517
Veale, M., & Zuiderveen Borgesius, F. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97–112. https://doi.org/10.9785/cri-2021-220402
Yousafzai, S., Ahmed, F., & Khan, Z. (2024). AI and political messaging under constraint: The case of Imran Khan. South Asian Journal of Political Technology, 11(3), 101–115.
Zeller, T., McCarthy, J., & Lopez, D. (2024). Synthetic voices and election interference in the US. AI Ethics and Democracy Journal, 6(2), 98–110.
Zuboff, S. (2019). The age of surveillance capitalism. PublicAffairs.