A Policy Roadmap for Secure AI by Design

Nov 12, 2025
5 minutes

In just a few short years, artificial intelligence has moved from an area of futuristic innovation to a deeply integrated component of global business and government operations. For most organizations, the choice has become clear: Effectively integrate AI into your operations or risk being left behind by business or geopolitical competitors who do.

The rapid deployment of AI, however, has largely outpaced the adoption of security measures designed to protect it. According to an October 2025 survey from the Conference Board, nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, which is a jump from just 12% in 2023. And cybersecurity risk, specifically, was among the most commonly cited risks across all public disclosures.

They’re right to recognize this security gap. Rapid AI adoption has dramatically expanded the attack surface by exposing organizations’ AI ecosystems (i.e., the applications, models, agents, data, infrastructure) to unique threats that legacy cybersecurity solutions were not explicitly designed to address.

Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the very foundation of how an AI system learns and operates. These attacks are not just about breaching a network but can also be about corrupting the AI's probabilistic logic itself.

This threat-paradigm shift led to the development of new frameworks to define novel risks unique to Al, like MITRE ATLAS and the OWASP Top 10 for LLM Applications, which Palo Alto Networks sponsored. For example, prompt injection attacks manipulate an AI's contextual understanding to force unintended actions, like tricking a chatbot into revealing confidential data. Meanwhile, data poisoning corrupts the AI model by subtly inserting malicious data into its training set, causing unintended outputs, such as a financial model approving fraudulent transactions.

Recognizing this new risk paradigm, we are encouraged by recent policy developments that further recognize that AI adoption and AI security can, and must, go hand in hand:

  • The president’s cybersecurity-focused Executive Order from June 2025 includes a mandate for federal agencies to "incorporate management of AI software vulnerabilities and compromises” into their security risk management processes.
  • Even more recently, the White House AI Action Plan, released in July 2025, calls for “Secure-By-Design AI Technologies and Applications.” The AI Action Plan further notes:
    The U.S. government has a responsibility to ensure the AI systems it relies on – particularly for national security applications – are protected against spurious or malicious inputs. While much work has been done to advance the field of AI Assurance, promoting resilient and secure AI development and deployment should be a core activity of the U.S. government.”
  • The AI Action Plan also calls for the establishment of an AI Information Sharing and Analysis Center (AI-ISAC), recognizing that AI systems are vulnerable to novel threats that traditional information sharing forums do not fully account for.
  • Voluntary standards bodies, like NIST, are also starting to weigh in, creating early draft AI security overlays or AI security-specific profiles to established cybersecurity risk management standards, like Special Publication 800-53 and the NIST Cybersecurity Framework.

This brings us to the moment we face today: Broad consensus on the value of rapid AI adoption and generalized policy agreement that security must scale alongside it, but less mature clarity on what effective AI security looks like in practice.

To help close this gap between high-level intent and actionable strategy, we developed a Secure AI by Design Policy Roadmap. The Roadmap lays out a four-part construct for holistically understanding the unique attributes of the AI attack surface and threat environment (the “What”), ultimately guiding the development of purpose-built, actionable strategies and solutions for securing the AI era (the “How”).

Secure AI by Design: A policy roadmap for securing our AI future.

The AI revolution demands a corresponding revolution in security thinking. We can no longer approach AI security as a domain where we can retrofit legacy security solutions onto these complex, probabilistic AI systems. The time for generalized policy consensus is over; the era of purpose-built, actionable strategies is now.

Towards this end, Palo Alto Networks is committed to working with all interested stakeholders to collectively advance our capacity:

  1. Securing our use of external AI tools.
  2. Monitoring and controlling AI agents.
  3. Safely building and deploying AI apps.
  4. Securing the underlying AI infrastructure.

This Secure AI by Design approach will further instill trust in our AI systems, allowing us to most confidently realize the incredible promise of Artificial Intelligence.

Ready to learn more about adopting a purpose built approach to cybersecurity for the AI era? Download the full Secure AI by Design Policy Roadmap.


Three FAQs about the Secure AI by Design Policy Roadmap
  • What is the Secure AI by Design Policy Roadmap?
    The Secure AI by Design Policy Roadmap establishes a consistent baseline for discussing the AI security imperative. The roadmap defines what needs to be secured (the AI ecosystem), identifies specific AI threats (e.g., prompt injection and data poisoning), outlines an AI Security framework for organizations to prioritize, while enumerating purpose-built AI security capabilities and technologies.
  • Why is this roadmap necessary?
    The AI era presents several unique attributes, from an expanded attack surface to novel threat techniques that legacy cybersecurity solutions were not explicitly designed for. Simultaneously, while policymakers now recognize AI security as critical, they currently lack clear and standardized definitions of comprehensive AI security. Our Secure AI by Design Policy Roadmap is necessary to help advance this policy conversation, translating high-level urgency into a more concrete, implementable security framework that is firmly grounded in the unique attributes of AI.
  • What are some examples of AI threats addressed by the roadmap?
    The roadmap addresses a broad spectrum of AI novel threats found in industry standards, such as the OWASP Top 10 for LLMs and MITRE ATT&CK for AI. The roadmap specifically addresses some illustrative AI-native threats, such as prompt injection (manipulating context to force unintended actions), data poisoning (corrupting the model via malicious training data) and excessive agency (managing the risks of an AI system being granted too much authority).

Subscribe to the Blog!

Sign up to receive must-read articles, Playbooks of the Week, new feature announcements, and more.