Cybersecurity and Adversarial Machine Learning: A Review of Threats, Defenses, and Architectural Considerations in Western Financial Systems
Abstract
As artificial intelligence (AI), particularly large language models (LLMs) and other foundation models, becomes ingrained in critical U.S. infrastructure and enterprise systems, it introduces new cybersecurity threats. This survey examines the emerging threat landscape at the intersection of AI and cybersecurity, with an emphasis on adversarial machine learning (AML) vulnerabilities that undermine current defenses. Unlike conventional cyber threats, AML attacks exploit inherent weaknesses in machine learning architectures (e.g., data poisoning, model evasion, prompt injection), which renders legacy security tools inadequate. The paper includes a comparative analysis of threat-modeling practices for AI-based systems subject to the specific regulatory, legal, and organizational conditions of the United States. We show how these institutional dimensions influence (and frequently inhibit) protective implementations, creating gaps that adversaries exploit. We present a theoretical knowledge graph framework that merges technical and operational understandings of risk, connecting threat intelligence with real-world deployment and enabling real-time risk prioritization and more efficient mitigation strategies. Particular attention is paid to the growing attack surface of foundation models, whose scale, complexity, and emergent behaviors give rise to novel vulnerabilities requiring tailored defenses. The survey concludes with a forward-looking, integrated security framework that unifies technical robustness measures (e.g., adversarial training, input sanitization) with adaptive governance mechanisms, offering a practical path for deploying robust, durable AI applications in a rapidly contested digital environment.