COMPARATIVE ANALYSIS OF STATIC APPLICATION SECURITY TESTING (SAST) TOOLS IN DEVSECOPS PRACTICES
DOI: https://doi.org/10.28925/2663-4023.2025.30.957

Keywords: cybersecurity, static code analysis, SAST, DevSecOps, application security, OWASP Benchmark, false positives, false negatives, SonarQube, CodeQL, Semgrep, data-flow analysis, quality metrics, risk management.

Abstract
This paper analyzes the current state of static application security testing (SAST), a key component of DevSecOps practice. It surveys the fundamental methodologies underlying SAST tools, from rule-based and pattern-based syntactic analysis over abstract syntax trees (ASTs) to more complex semantic approaches such as data-flow analysis (DFA) on control-flow graphs (CFGs), abstract interpretation, and symbolic execution. The central element of the study is an empirical comparison of three leading open-source tools: SonarQube, which evolved from a code-quality platform; Semgrep, which focuses on speed and ease of integration into CI/CD; and GitHub CodeQL, which takes an innovative approach by representing code as a relational database for deep semantic analysis. To assess their performance objectively, the standardized OWASP Benchmark test suite was used to measure the key metrics Precision, Recall, F1-Score, and False Positive Rate (FPR). The quantitative analysis revealed significant differences: CodeQL demonstrated the highest balanced effectiveness (F1-Score = 87.8%) thanks to its high recall on complex, data-flow-related vulnerabilities, albeit at the cost of long scan times; Semgrep offered the best trade-off between speed and accuracy (F1-Score = 70.3%), making it suitable for rapid iterations in CI/CD pipelines; SonarQube, by contrast, showed the lowest effectiveness (F1-Score = 49.8%), missing more than 60% of real vulnerabilities, which confirms the limitations of non-specialized solutions in security tasks. The study shows that choosing a SAST tool is not only a technical but also a strategic decision, requiring careful analysis of the trade-offs between depth of analysis, "noise" level, and speed of integration in order to build an effective and balanced web application security program.
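For illustration, the metrics reported above are derived from confusion-matrix counts of an OWASP Benchmark scorecard: Precision = TP / (TP + FP), Recall = TP / (TP + FN), F1-Score = 2PR / (P + R), and FPR = FP / (FP + TN). The following minimal Python sketch computes them; the counts it uses are hypothetical placeholders, not the values measured for SonarQube, Semgrep, or CodeQL in this study.

def sast_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Precision, Recall, F1-Score and False Positive Rate from
    OWASP Benchmark-style confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "fpr": fpr}

# Hypothetical scorecard counts for a single tool on one benchmark run:
print(sast_metrics(tp=80, fp=20, tn=170, fn=30))
# -> {'precision': 0.8, 'recall': 0.727..., 'f1': 0.762..., 'fpr': 0.105...}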
References
Aho, A. V., Lam, M. S., Sethi, R., & Ullman, J. D. (2006). Compilers: Principles, Techniques, and Tools (2nd ed., pp. 399–410). Addison-Wesley. ISBN 978-0-321-48681-3.
arXiv. (2024). Evaluation of static analysis tools for vulnerability detection. Retrieved from https://arxiv.org/abs/2405.12333
Benmoshe, N. (2025). A simple solution for the inverse distance weighting interpolation (IDW) clustering problem. Sci, 7(1), 30. https://doi.org/10.3390/sci7010030
Codacy Blog. (n.d.). What is static code analysis? Retrieved from https://blog.codacy.com/static-code-analysis
Cousot, P. (2024). A personal historical perspective on abstract interpretation. Retrieved from https://cs.nyu.edu/~pcousot/publications.www/Cousot-FSP-2024.pdf
Datadog. (n.d.). Static analysis overview. Retrieved from https://www.datadoghq.com/knowledge-center/static-analysis
DO-178C software compliance for aerospace and defense. (n.d.). Parasoft Learning Center. Retrieved from https://www.parasoft.com/learning-center/do-178c/static-analysis/
Ilienko, A., Spys, D., Halata, L., & Dubchak, O. (2022). System approach to web application security: Analysis of threats and methods of cyber protection. Ukrainian Information Security Research Journal, 26(2), 277–293. https://doi.org/10.18372/2410-7840.26.20022
National Institute of Standards and Technology (NIST). (2002). The economic impacts of inadequate infrastructure for software testing (RTI Project Number 7007.011). Retrieved from https://www.nist.gov/system/files/documents/director/planning/report02-3.pdf
Nielson, F., Nielson, H. R., & Hankin, C. (1999). Principles of program analysis (pp. 31–43). Springer.
OWASP Foundation. (n.d.). Static code analysis. Retrieved from https://owasp.org/www-community/controls/Static_Code_Analysis
OWASP Benchmark Project. (n.d.). OWASP Benchmark Project. Retrieved from https://owasp.org/www-project-benchmark/
ResearchGate. (2024). Systematic survey on large language models for static code analysis. Retrieved from https://www.researchgate.net/publication/392909246
ResearchGate. (2015). Improving static analysis performance using rule-filtering technique. Retrieved from https://www.researchgate.net/profile/Deng-Chen-6/publication/277817158_Improving_Static_Analysis_Performance_Using_Rule-Filtering_Technique/links/557514a908ae7536374ff5a0/Improving-Static-Analysis-Performance-Using-Rule-Filtering-Technique.pdf
Self-composition by symbolic execution. (n.d.). ResearchGate. Retrieved from https://www.researchgate.net/publication/268022643_Self-composition_by_symbolic_execution
Survey on static analysis tools of Python programs. (2019). CEUR Workshop Proceedings, 2508. Retrieved from https://ceur-ws.org/Vol-2508/paper-gul.pdf
Wired. (n.d.). How Facebook catches bugs in huge codebases. Retrieved from https://www.wired.com/story/facebook-zoncolan-static-analysis-tool
What is static code analysis? A comprehensive guide to transform your code quality. (n.d.). CodeSecure. Retrieved from https://codesecure.com/our-white-papers/what-is-static-code-analysis-comprehensive-guide-to-transform-code-quality/
License
Copyright (c) 2025 Denys Spys, Anna Ilienko

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.