INTEGRATED APPROACH TO THREAT MODELING IN ARTIFICIAL INTELLIGENCE SYSTEMS
DOI: https://doi.org/10.28925/2663-4023.2025.30.993

Keywords: artificial intelligence; threat modeling; ISO/IEC 42001; NIST AI RMF; MITRE ATLAS; CSA MAESTRO; OWASP GenAI; integrated approach; risk management; AI security.

Abstract
This paper substantiates the relevance of threat modeling for artificial intelligence (AI) systems in the context of increasing model autonomy and the emergence of new attack vectors. It shows that traditional threat-modeling methods fail to account for the specific nature of AI, which creates the need for a comprehensive approach covering the entire system lifecycle. The methodological foundation of the integrated approach combines international standards and industry best practices: ISO/IEC 42001:2023 provides governance and auditability, NIST AI RMF 1.0 defines the Govern–Map–Measure–Manage process cycle, MITRE ATLAS enriches models with realistic attack scenarios, CSA MAESTRO introduces multi-layer architectural decomposition, and the OWASP GenAI Security Project supplies operational artifacts and prioritization tools. This synthesis integrates strategic policies, technical taxonomies, and practical playbooks into a single managed process. The proposed approach makes threat modeling continuous and evidence-based, ensuring traceability from threat identification through control implementation to performance metrics. It addresses both technical and socio-technical risks, including impacts on users and society, and supports profile-specific adaptation for various system types, from LLMs to agent-based platforms. Integration with CI/CD pipelines and automated security checks improves response speed and reduces security costs. The scientific novelty lies in forming a holistic vision that unites governance, process discipline, architectural analysis, and operational instruments. The practical significance lies in the applicability of the approach to developing comprehensive protection strategies that align with international standards and withstand certification audits. The integrated approach thus establishes a foundation for large-scale AI deployment whose security and trustworthiness are demonstrated through the threat-modeling process itself: it not only enhances system resilience but also creates a standardized risk-management framework that meets modern cybersecurity challenges.
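To make the traceability requirement concrete, the sketch below shows one possible representation of a threat-model entry that links an identified threat to a MITRE ATLAS technique, the controls implemented against it, and an effectiveness metric, together with a CI/CD gate that fails the pipeline when a threat has no mapped control. This is a minimal illustration of the idea, not the paper's method or any framework's API; the names ThreatRecord and ci_gate and the sample records are assumptions introduced here.

# Minimal sketch in Python, assuming the threat model is kept as a versioned artifact.
# ThreatRecord, ci_gate, and the sample entries are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class ThreatRecord:
    threat_id: str                  # internal identifier of the threat
    description: str                # what can go wrong
    atlas_technique: str            # MITRE ATLAS technique ID the threat maps to
    controls: list[str] = field(default_factory=list)  # implemented mitigations
    metric: str = ""                # how control effectiveness is measured

def ci_gate(records: list[ThreatRecord]) -> bool:
    """Automated security check: every identified threat must trace to a control."""
    uncovered = [r.threat_id for r in records if not r.controls]
    if uncovered:
        print(f"FAIL: threats without mapped controls: {uncovered}")
        return False
    print("PASS: every threat is traceable to at least one control.")
    return True

if __name__ == "__main__":
    threat_model = [
        ThreatRecord("T-001",
                     "Prompt injection via user-supplied documents",
                     "AML.T0051",  # LLM Prompt Injection in MITRE ATLAS
                     controls=["input sanitization", "output filtering"],
                     metric="blocked-injection rate in red-team tests"),
        ThreatRecord("T-002",
                     "Training-data poisoning through an unvetted source",
                     "AML.T0020"),  # Poison Training Data; no control yet, so the gate fails
    ]
    raise SystemExit(0 if ci_gate(threat_model) else 1)

Run as a pipeline stage on every change to the threat model, such a check turns traceability into an enforced property rather than a documentation convention, which is one way the automation described in the abstract can improve response speed.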
References
Neretin, O., & Kharchenko, V. (2022). Ensuring cybersecurity of artificial intelligence systems: Analysis of vulnerabilities, attacks, and countermeasures. Bulletin of the National University “Lviv Polytechnic”. Information Systems and Networks, (12), 7–22. http://nbuv.gov.ua/UJRN/VNULPICM_2022_12_4
Dudykovych, V. B., Mykytyn, H. V., & Kret, T. B. (2015). Multilevel intelligent control systems: Guarantee capability and object security. Information Processing Systems, (4), 92–95. http://nbuv.gov.ua/j-pdf/soi_2015_4_21.pdf
Martseniuk, Ye. V., Partyka, A. I., & Kret, T. B. (2025). Study of artificial intelligence vulnerabilities and development of a comprehensive organizational security model. Modern Information Protection, 1(61), 206–218. https://doi.org/10.31673/2409-7292.2025.018929
National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0) (NIST AI 100-1). https://doi.org/10.6028/NIST.AI.100-1
Cloud Security Alliance. (2025, February 6). Agentic AI Threat Modeling Framework: MAESTRO. https://cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
MITRE Corporation. (n.d.). ATLAS Matrix. https://atlas.mitre.org/matrices/ATLAS
OWASP Foundation. (n.d.). Gen AI Security Project: Introduction and background. https://genai.owasp.org/introduction-genai-security-project
Amazon Web Services (AWS). (n.d.). How to approach threat modeling. https://aws.amazon.com/blogs/security/how-to-approach-threat-modeling
Amazon Web Services (AWS). (n.d.). Threat modeling your generative AI workload to evaluate security risk. https://aws.amazon.com/blogs/security/threat-modeling-your-generative-ai-workload-to-evaluate-security-risk
Yevseiev, S. P., Shmatko, O. V., Akhiezer, O. B., Sokol, V. Ye., & Chernova, N. L. (2025). Attacks on artificial intelligence systems: Educational and practical manual (S. P. Yevseiev, Ed.). Kharkiv: NTU “KhPI”; Lviv: Novyi Svit-2000.
Straiker AI. (2025). Comparing AI security frameworks: OWASP, CSA, NIST, and MITRE. https://www.straiker.ai/blog/comparing-ai-security-frameworks-owasp-csa-nist-and-mitre
International Organization for Standardization (ISO). (2023). ISO/IEC 42001:2023 – Information technology – Artificial intelligence – Management system. https://www.iso.org/standard/42001
Microsoft. (n.d.). The STRIDE Threat Model. https://learn.microsoft.com/en-us/previous-versions/commerce-server/ee823878(v=cs.20)
Mauri, L., & Damiani, E. (2022). Modeling threats to AI-ML systems using STRIDE. Sensors, 22(17), 6662. https://doi.org/10.3390/s22176662
Khan, R., Sarkar, S., Mahata, S. K., & Jose, E. (2024). Security threats in agentic AI systems. arXiv preprint arXiv:2410.14728. https://doi.org/10.48550/arXiv.2410.14728
License
Copyright (c) 2025 Taras Kret, Yevhenii Martseniuk

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.