FEDERATED GNN-XAI MODEL FOR PREDICTING USER ACCOUNT COMPROMISE IN A ZERO TRUST ENVIRONMENT
DOI: https://doi.org/10.28925/2663-4023.2025.31.1049

Keywords: federated learning; graph neural networks; explainable artificial intelligence; compromise; cybersecurity; behavioral modeling; SIEM; Zero Trust.

Abstract
The article presents a methodological approach to developing an intelligent system for predicting user account compromise in corporate information environments. The proposed system integrates federated learning, graph neural networks, and explainable artificial intelligence within the Zero Trust concept, providing an enhanced level of security for authentication and access management through decentralized data processing and privacy preservation during collaborative model training. A key feature is local model training without transferring raw data to a central repository, which eliminates the possibility of its interception or unauthorized access. Local updates are aggregated using federated optimization mechanisms that account for the heterogeneity of data across corporate domains. The graph module formalizes inter-user and inter-system relationships as a directed graph, allowing latent behavioral dependencies and potential compromise risks to be identified at the level of individual connections between authentication objects. The explainable artificial intelligence component ensures transparency of the decision-making process, formalizes the justification of predictions, and reduces the frequency of false positives. The system operates within the Zero Trust paradigm, which requires continuous verification of user and device actions regardless of prior trust level and thereby enables adaptive real-time response to anomalies. The obtained results demonstrate higher accuracy in predicting account compromise compared with traditional centralized machine learning models, along with a reduced false-positive rate. The proposed approach can be used to build adaptive security monitoring systems for critical information infrastructures operating in highly dynamic and distributed environments.
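To make the described pipeline more concrete, the following minimal sketch (hypothetical, not the authors' implementation) illustrates in Python/NumPy the three ideas named in the abstract: each corporate domain trains a tiny graph-based risk scorer on its own authentication graph, a FedAvg-style weighted average aggregates only the model parameters, and a leave-one-feature-out attribution produces a simple explanation for a flagged account. All class and function names (DomainClient, fedavg, explain), the feature names, and the synthetic data are illustrative assumptions; a real deployment would use a full GNN library, secure aggregation, and SIEM-derived features.

```python
# Minimal sketch (assumptions, not the authors' code): per-domain local training of a
# graph-based risk scorer, FedAvg-style aggregation of parameters only, and a simple
# feature-attribution explanation for one account.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DomainClient:
    """One corporate domain: holds its own authentication graph and labels locally."""
    def __init__(self, n_nodes=40, n_feats=6):
        self.X = rng.normal(size=(n_nodes, n_feats))               # node features (login stats, etc.)
        A = (rng.random((n_nodes, n_nodes)) < 0.1).astype(float)   # synthetic directed access edges
        self.A_hat = A + np.eye(n_nodes)                           # add self-loops for propagation
        self.A_hat /= self.A_hat.sum(axis=1, keepdims=True)        # row-normalised adjacency
        self.y = (rng.random(n_nodes) < 0.2).astype(float)         # 1 = compromised account (synthetic)
        self.w = np.zeros(n_feats)                                 # local model weights

    def risk_scores(self, w):
        # One round of neighbourhood aggregation followed by a linear scorer:
        # a deliberately tiny stand-in for a full GNN layer.
        H = self.A_hat @ self.X
        return sigmoid(H @ w)

    def local_update(self, w_global, lr=0.5, epochs=20):
        # Gradient descent on local data only; raw features never leave the domain.
        w = w_global.copy()
        H = self.A_hat @ self.X
        for _ in range(epochs):
            grad = H.T @ (sigmoid(H @ w) - self.y) / len(self.y)
            w -= lr * grad
        return w, len(self.y)

def fedavg(updates):
    # Weighted average of local weights (FedAvg-style); only parameters are shared.
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

def explain(client, w, node, feature_names):
    # Leave-one-feature-out attribution: how much does each feature raise this node's risk?
    base = client.risk_scores(w)[node]
    attributions = {}
    for j, name in enumerate(feature_names):
        w_masked = w.copy()
        w_masked[j] = 0.0
        attributions[name] = base - client.risk_scores(w_masked)[node]
    return base, attributions

feature_names = ["failed_logins", "new_device", "geo_velocity",
                 "privilege_use", "off_hours", "mfa_skips"]
clients = [DomainClient(n_feats=len(feature_names)) for _ in range(3)]
w_global = np.zeros(len(feature_names))

for round_id in range(10):                      # federated rounds
    updates = [c.local_update(w_global) for c in clients]
    w_global = fedavg(updates)

risk, attrib = explain(clients[0], w_global, node=0, feature_names=feature_names)
print(f"node 0 risk score: {risk:.3f}")
for name, delta in sorted(attrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>14s}: {delta:+.3f}")
```

Running the sketch prints a risk score for one node together with per-feature contributions, mirroring (in highly simplified form) how the explainability component is intended to justify an alert to an analyst while raw authentication data stays within its domain.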
License
Copyright (c) 2025 Тетяна Фесенко, Юлія Калашнікова

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.