Podcast

Threat Vector | Secure Your Summer: Top Cyber Myths, Busted

Jun 12, 2025

In this episode of Threat Vector, David Moulton talks with Lisa Plaggemier, Executive Director of the National Cybersecurity Alliance. Lisa shares insights from this year’s “Oh Behave!” report and dives into why cybersecurity habits remain unchanged—even when we know better. From password reuse to misunderstood AI risks, Lisa explains how emotion, storytelling, and system design all play a role in protecting users. Learn why secure-by-design is the future, how storytelling can reshape behavior, and why facts alone won’t change minds. This episode is a must-listen for CISOs, security leaders, and anyone working to reduce human risk at scale.

Resources:

  • Kubikle: A comedy web series about cybercriminals.
  • Oh Behave! The Annual Cybersecurity Attitudes and Behaviors Report 2024

Protect yourself from the evolving threat landscape - more episodes of Threat Vector are a click away



Transcript

 

Lisa Plaggemier: We've all done things with technology that we shouldn't. There was a time in your life when you reused a password or clicked on something you shouldn't or almost clicked on one of these malicious texts that we're all getting all the time. You felt the emotion spike when somebody gave you some urgent message that one of your kids was in trouble or there's fraud on your account or something. We've all had the emotional reaction to that and hopefully caught ourselves before we did something. But I think it's leaving people with a sense of empathy that we all do these things. We're not going to solve for human error. And so designing software and systems and products that are more secure by design is really, I think, the way forward.

 

David Moulton: Welcome to Threat Vector, the Palo Alto Networks podcast where we discuss pressing cybersecurity threats and resilience and uncover insights into the latest industry trends. I'm your host, David Moulton, Senior Director of thought leadership for Unit 42. Today I'm speaking with Lisa Plaggemier, Executive Director for the National Cybersecurity Alliance. Lisa is on a mission to eliminate the cliché of hackers and hoodies and bring a more human, relatable face to cybersecurity. Her career spans Fortune 100 brands, cutting-edge cybersecurity training companies, and leadership roles across the cybersecurity landscape. She blends psychology, marketing, and behavioral science to inspire real-world change. She's also the coauthor of the annual Cybersecurity Attitudes and Behaviors Report 2024-25, a global study that reveals the truth about how people actually behave online, not just what they say they know. The myths we're about to unpack come straight from the gap between awareness and action. Lisa, welcome to Threat Vector. I am so excited to have you here.

 

Lisa Plaggemier: Thank you for having me.

 

David Moulton: You've had this really unique career path from launching Ford road shows across Morocco to leading cybersecurity culture initiatives. What's one experience from your early days in international marketing that surprisingly prepared you for your current work in cybersecurity awareness?

 

Lisa Plaggemier: I think it's understanding more about the creative process. So one of the most interesting things that I observed in working with highly paid ad agencies that, you know, auto manufacturers and tennis shoe companies and tequila brands all use to sell their product was that managing creatives is different than managing technical people or managing administrative folks. Giving them room for their brains to breathe, giving them time for the ideation process, more than a lot of jobs, they need to sit and think. They need to go take a walk and get ideas. And the other thing that I observed was that, no matter what the deadlines were, there's times when you just can't force it. You can't force -- I've been in ideation sessions where, like, the ideas just aren't flowing. The folks in the room are not clicking. And there's times when you just can't force it, and you have to be okay with that. You have to be okay with that ebb and flow of the creative process and allow for times when suddenly something brilliant happens and you know you've got some -- you've got a diamond, and you've got to run with it. And -- but you -- it can be frustrating in the meantime. So I think that, until you've worked directly with creatives, it's really hard to understand all that.

 

David Moulton: I feel that. What I sometimes tell the teams and the folks that I work with is we can let the brown water flow, right. A lot of times, the stuff that's coming out right away out of the tap, that's not the drinking water. That's not the clear ideas. And you just let it run. It's okay, right? And then it will come. You just have to trust that process. Well, Lisa, we've got a lot to talk about today. Let's get right into it.

 

Lisa Plaggemier: Okay. Let's go.

 

David Moulton: Lisa, you've built your career at the intersection of psychology, persuasion, and cybersecurity. And now that you're shaping public perception through the National Cybersecurity Alliance and this year's massive 7,000-participant report, what's one finding or a moment from this year's research that made you say we have to talk about this?

 

Lisa Plaggemier: Probably the fact that there's the -- we're not seeing the curves we want to see. We're not seeing things get better. So one of the most prominent examples would be password reuse and just general -- password habits in general: people are using passwords that are too short. People are reusing the same password too often. People are using insecure methods to keep track of their passwords. They're kind of acting as their own risk managers, and are doing things that they think are safer than a password manager because they have more control, for example. So it's the whole topic of passwords. People hate them. They haven't worked, right, as a -- as a -- as a means to protect our stuff. They've been a complete epic failure, and people just don't like them. It was one thing 20 years ago, and you could have one password or two or three that you remembered. And, you know, they didn't have to be that long. There was no such thing as complexity rules. Like, we all kind of got by. And then we very quickly realized that that's not -- that's not really going to protect our stuff. So we're big fans of things like password managers. Passkeys are a whole lot easier for people. But, by and large, they've just been -- they've been a failure. People don't like them. And they're -- they haven't worked.

 

David Moulton: Do you have a personal story or anecdote that you have found works when you talk to somebody about their short password, their password reuse, their saving it on sticky notes, their saving it in an Excel file and, you know, in an insecure way that helps them understand that they need to break those habits?

 

Lisa Plaggemier: I have one that I use all the time about password reuse because a lot of people -- I mean, I've even heard security professionals now kind of default to this, like, well, for your really important accounts, you should use a unique password. And maybe for everything else it's okay to use the same one, which I'm not a big fan of because people aren't great risk managers. They're not good at assessing what's of value to a bad guy and what isn't. So what they deem an important account is probably different than what a cybercriminal deems an important account. So the story that I use is one -- I try to remember as often as I can what it was like to work in marketing before I had any clue what the cybersecurity stuff was all about, before I was assigned to work with the security team on thought leadership at the company I was at.

 

David Moulton: Right. Before you were really made aware of, like, how dangerous some of the behavior is.

 

Lisa Plaggemier: Right. Before I understood the ins and outs, when I was just a normal consumer going about my day reusing passwords, using passwords that were too short, opting out of MFA, like, things like that, just things that people do, I was just like everybody else. And, if security professionals will admit it, they do some of these bad things still. We all do. That's how so many data breaches keep happening. They keep making mistakes with basic hygiene. So the -- I can remember when it happened because I was -- I was -- for some reason, I think I was, like, out for a walk. And I kind of remember the -- that light bulb moment when I was walking in my neighborhood, and I heard about the Yahoo! breach years ago. I can't remember what year it was. Well, we all had a Yahoo! account back in the day. Like, I can still remember my AOL dialup and the sound of the modem and, like, chatting with somebody on the other side of the Earth and -- because, at that time, I was living in Europe, so I had a lot of reason to be excited about things like that and not paying Deutsche Telekom, you know, $1 a minute to call the US. And when I heard about that data breach, it was usernames and passwords. I just thought, Who cares? I haven't logged into that in 10 years. I mean, if you ask anybody over the age of 50 or anybody who was getting online in the late '90s or the early aughts, we all had a Yahoo! account. And a lot of us, I would venture to guess, haven't logged in in a really long time. And I don't know if they've deleted our accounts or they're still -- I'm guessing, at the time of that breach, they were all still active or valid usernames and passwords. And I just thought, if a bad guy has access to my Yahoo! account, like, I haven't used that in a million years. I don't know what's in there. You know, like, they can have at it. Like, have fun with that. There's nothing there that's of use, right? I -- in my nonsecurity brain, I told myself, like, well, if I ever used that password anywhere else, then they would have to know where else I have accounts that I've reused it. And, like, they're not going to take the time to figure that out, right? I didn't know it was spray and pray. I didn't know there was automation and that these guys are using technology. You have that -- that image in your head of one person sitting at a laptop. It's the hacker in the hoodie image. Somebody's in the dark somewhere, wearing dark clothing, and usually masculine. Like, there's, you know, the vibe we get from that imagery is usually masculine. And we don't think about teams of developers like I was working with at the company that I worked with at the time. Like, we don't think about them as businesses. We don't think about them as using automation and being really smart and being agile and doing -- really being a mirror image of the legitimate world, just doing what they do for illicit reasons instead of, you know, trying to run a legitimate business. So that's also the thought behind Kubikle, the Kubikle series that we shot. And we -- we have Season Two coming out soon. It's a video series like watching The Office, but it's the office of the bad guys. And that's what I was trying to provoke in people is maybe that light bulb moment of like, Oh, wait a minute. There's somebody doing this for a living. Somebody -- it's somebody's job to hack me, and they're using technology to do it.
So these little things, these myths that I tell myself that it's okay to do, the excuses I make for some of my bad habits with technology, or maybe I don't even understand that it's really a bad habit, that's what we're going for with that series is that light bulb moment that maybe people understand; and they think twice. And maybe down the line with some more nudging and some more messaging and some more education they actually change their behavior.
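
The "spray and pray" automation Lisa describes is why one stale breach still matters: leaked username-password pairs get replayed against every major site by bots, so any account that reuses the password falls with it. As a minimal sketch of the defensive counterpart, the hypothetical helper below (the function name and the example password are illustrative, not from the episode) checks whether a password already appears in known breach dumps via the Have I Been Pwned k-anonymity range API, which only ever sees the first five characters of the password's SHA-1 hash.

```python
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times `password` appears in known breach dumps,
    via the Have I Been Pwned k-anonymity range API: only the first five
    hex characters of the SHA-1 hash are ever sent over the network."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "<35-char hash suffix>:<count>".
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A credential leaked from a long-forgotten account is exactly what
    # automated credential stuffing will replay everywhere else.
    print(pwned_count("password123"))
```

Password managers run checks like this for you continuously, which is one reason they beat acting as your own risk manager.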

 

David Moulton: Yeah. As you were talking about that Yahoo! breach, for those of you listening who are curious: 2013, undetected for three years, 3 billion -- 3 billion -- different user accounts attacked. And as you were -- as you were describing that, you know, it's the rainbow table. It's the ability to say, well, Lisa's here, and here's her password. Let's go try anywhere else that we can find Lisa and the password elsewhere. And I agree. I came into this after 20 years of being in design and didn't realize that that was what was happening, that it is a business. There are KPIs. There is a metric. They're trying to get to their revenue number, and it's off of our mistakes. It's off of the things that we don't necessarily think about or, as you called it, these -- these myths. And it allows for them to still be profitable. Otherwise, this area would go away very quickly. This tactic would dry up if we would stop doing these things or if users would stop doing these things or even just change to something as simple as a password manager. I want to say it was an NSA story that got my attention, and I moved from, Dave's clever. He can keep all of his passwords in his head if he just does a little bit of changing. And they really weren't all that different. It was like I added a 1 or 2. And I jumped to --

 

Lisa Plaggemier: Not an exclamation point? An exclamation point would have made all the difference, Dave.

 

David Moulton: It was already there. I had a -- I had a fraternity room name that I used. I can't say that on this podcast, and then I would just be like, exclamation point 1, ex -- anyways, I got to 44, Lisa. That's just the number of times I was asked to change it before I was like, this is a terrible password. I should stop. So, you know, I think people know. And you guys call out in the report that people should use those unique passwords. They know that. But, in the report, nearly half -- I think it was, what, 46% still reuse their passwords. And this is wild to me. It's kind of like saying, you know, hey. Here's the key to everything I have in my digital existence, and I'm going to just make a digital copy of it for everyone. And, if I lose it, then you can get into all the things. But I don't think people necessarily get that. You were talking about that with the Yahoo! hack and how that kind of gave you that light bulb moment. Lisa, what is it that causes this to be, like, such a persistent gap between what people know and the actual actions, the behaviors that they take?

 

Lisa Plaggemier: I think some of it is just our own belief in our own superiority. We all trust ourselves more than we trust anybody else. We all think we're smarter than the average bear. And one of the other things we ask people is, do you think you can spot a phish? And it's a five-point scale. And everybody's like, you know, 4s and 5s, like, except Germany. That was the one country in the report that's far -- their confidence in themselves to spot something malicious is far lower than every other country in the survey. So it was all Five Eyes plus Germany and India. This year, later this year, it's going to be the US, the UK, Mexico -- no -- yes. Mexico and Brazil or Brazil and Chile? I can't remember -- Germany and India again because the data out of India was really, really fascinating. Like, their confidence in their ability to recognize things is very, very high; but their rates of compromise are equally very high for things like romance scams and just across the board. You know, people's beliefs in themselves and their own methods run pretty deep. And their own conviction of wanting to feel like they're in control, which is why they don't trust password managers all the time, telling them, using education or some sort of awareness or whatever you want to call it these days, any kind of communications to say to them, no, this is the better way to do it or you could -- you know, let's use the phishing example. They don't think they're -- they're -- they think they're going to be able to detect something malicious. So just saying to them, no, you're at risk for phishing, that -- we're all contrarians. Like, you're telling me something I don't believe. You can't just say to me, you know, yeah. You don't think you're going to fall for it, but you could. Like, that's not persuasion. Persuasion is -- there's more of an art to it than that, to persuade human beings. And I think we're still, at least in the security community, a little too guilty of just trying to be contrarians, trying to just tell them something that's the opposite of what they believe and thinking somehow that's going to change their minds. And I don't think that's enough. You know, it takes a lot of not -- I mean, for the light bulb moment you had or the light bulb moment I had, it takes that constant drumbeat of information. When something resonates with an individual for whatever reason that -- that they -- you know, that opens their eyes, and they decide to make a change in their -- in their habits.
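
A brief aside on the rainbow table David brought up: precomputed tables work only because an unsalted hash of a given password is identical on every site that stores it. The sketch below, with illustrative parameters rather than anyone's production settings, contrasts a bare hash with a salted, deliberately slow PBKDF2 derivation; the same intuition explains why a unique password per site confines a leak to that one site.

```python
import hashlib
import os

password = b"correct horse battery staple"

# Unsalted fast hash: the same password yields the same digest everywhere,
# so one precomputed (rainbow) table cracks it across every breached site.
print(hashlib.sha256(password).hexdigest())

# Salted, deliberately slow PBKDF2: a fresh random salt per account means
# no table can be precomputed, and every guess must be recomputed per user.
for _ in range(2):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
    print(salt.hex(), digest.hex())  # different output each run
```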

 

David Moulton: Let's shift gears a little bit. AI has introduced a new myth. If I use AI tools correctly, they're safe. But your findings, they suggest that most people don't fully understand AI risks. What kind of misunderstandings did the report uncover?

 

Lisa Plaggemier: Well, first of all, we learned that there are a whole -- whole lot more employees that are putting sensitive company information into AI tools without their employer's knowledge. I think it's 43% or something like that. It's a pretty high, pretty high percent. The other thing we learned is that 51% of organizations at the time of the survey hadn't given employees any training on the safe use of AI. So I think the risk there is that, while we're all busy debating policies and how to enforce them and find the right tools and what we're going to allow and we're having all these conversations, meanwhile, people are using this stuff anyway and -- and finding ways to use it, whether it's on their own device or whatever. I would suggest that some organizations navel-gaze a little bit too much over their policies, and we need to have more of a bias for action, I think. You can always go back and change things but not -- not taking action, not starting to train people. I remember when I first got into cybersecurity, and I was -- I heard somebody at a -- I think I was at a conference at a round table discussion or something. And somebody said, Well, we have policies that aren't finished. I told the business, I can't train anybody until the policies are finished. And I said, Do you think the bad guys aren't going to attack your people until your policies are finished? Like -- like, we kind of -- we serial -- we have this tendency to want to serialize things, I think. And I think in the case of AI that's made it -- that's increased our risk. I think people fundamentally think of it like a search engine. They think about the result that they want. Their focus is on trying to solve a problem and what they're going to get back, and they're not really thinking about what they're giving away.
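
Lisa's point about employees pasting sensitive data into AI tools is ultimately a system-design problem, so here is a hedged illustration of the guardrail idea: a client-side screen that warns before a prompt ever leaves the machine. The patterns and the screen_prompt helper are hypothetical placeholders, far cruder than a real enterprise DLP control.

```python
import re

# Hypothetical patterns for illustration; real DLP rules are far broader.
SENSITIVE = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Names of sensitive patterns found in a prompt, so a client can warn
    or block before anything is sent to an external AI service."""
    return [name for name, pattern in SENSITIVE.items() if pattern.search(prompt)]

if __name__ == "__main__":
    hits = screen_prompt("Summarize: contact jane@example.com, token sk-AbCdEf1234567890XyZ")
    if hits:
        print("Held before sending; prompt contains:", ", ".join(hits))
```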

 

David Moulton: Yeah. I think that the business model also makes it tricky, especially if you're paying for a service. I've always had that model, and I was recently disabused of this theory that, if I'm paying for something, it's private, right? It's -- it's my right, you know. And that space is an ethical relationship between me and the service. And I think with -- with the chatbots and some of the LLMs in particular, that's a really gray zone, mostly moving towards that's not the model. Like, you're getting amplified service. You're getting more tokens. You're getting more, faster capabilities delivered. And the free model is the model for privacy. I keep coming back to is it the system design, right? Is it not on the individual? And have we built systems that allow you to do dangerous things that don't feel dangerous? You could also claim driving a car is more dangerous than other modes of transportation, you know, per mile kind of thing. And, statistically, that's true. And, yet, I think you get more anxiety out of a flight than you do out of a drive around the corner.

 

Lisa Plaggemier: Right.

 

David Moulton: But one has a higher probability of a problem.

 

Lisa Plaggemier: It's the perception of your control over a situation.

 

David Moulton: Yes.

 

Lisa Plaggemier: You're driving the car. It's different than the pilot flying the plane. Yeah.

 

David Moulton: Yeah. And so I think that, going into a chatbot and having a conversation or putting in information, it's just you and that chatbot; and that's the edge of it. You can't see the actual larger -- the larger frame of danger. So that's a -- that's an interesting space of, like, how do you make for human security and risk management.

 

Lisa Plaggemier: We have so far to go. Sometimes I'll see these debates pop up on, like, LinkedIn where, you know, the debate about -- about designing software securely to begin with. And, really, what is the user's responsibility? Like, people should still know to do XYZ. And I -- there's -- it's not any one individual's fault. It's an -- it's a -- it's systems thinking. And I think we just have a long way to go. I mean, we've -- I think those of us in security will tell you, you know, we all -- there's the old adage, the internet was never designed to be secure. And now we're trying to play catchup, and it's impossible. Like, it's -- it's really, really, really hard and really expensive. And at some point we'll get better because we'll redesign some things. And it's just like anything else, any sort of new technology. You look back at some point. Maybe it takes 50 or 100 years, and you look back and you go, you know what? We shouldn't have built it that way to begin with. We shouldn't have designed it that way to begin with because now we've seen all these bad things happen, and we need to rethink it. So I think we have a long way to go yet, but I'm glad that it's even a topic of conversation, right? I'm glad that there's folks like Bob Lord talking about Secure by Design and things like that.

 

David Moulton: So when you're talking about this idea, is it this, is it that? And this idea of fixating on or focusing on one area, I think it's a lesson we could take from economics, right? You want a diversified portfolio. You only get a couple percent. You want some things that are going to be slow growth and hold you over time like a bond. Maybe you need some stocks. Maybe you need some real estate in your portfolio. But you wouldn't say, like, let's just put it all in one area. And I think in security, when we do that, then a very clever attacker will figure out how to break that one thing that was so very strong; and then it doesn't really help all that much. Like, you get to the point where, a couple years ago, MFA was the sort of silver bullet for identity. And -- and then very clearly it's not, right. Like, you look at what Scattered Spider or Muddled Libra is doing with social engineering, and they're just like going around the MFA or making the MFA extraordinarily annoying and getting past it anyway. So it's like, each time we -- that we go, like, oh, that's the one thing, that's the -- the red flag for me. I'm like, you're beckoning for somebody to destroy this, and then --

 

Lisa Plaggemier: Somebody's going to break it. It's -- somebody's trying to break it.

 

David Moulton: Yeah. Exactly.

 

Lisa Plaggemier: Yeah. But, like, you wouldn't not use MFA just because somebody's figured out how to hack it.

 

David Moulton: No. You definitely use it.
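
For listeners who want the MFA exchange above made concrete, here is a minimal sketch of the time-based one-time passwords (TOTP, RFC 6238) that authenticator apps generate: just an HMAC over the current 30-second window. The cryptography is sound, which is exactly why the groups discussed earlier go after the human instead, phishing the six digits in real time or wearing users down with push prompts. The secret below is a demo value, not anything from the episode.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (the code your phone shows)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period            # current 30-second window
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    # Demo secret only; real secrets are provisioned via the enrollment QR code.
    print(totp("JBSWY3DPEHPK3PXP"))
```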

 

Lisa Plaggemier: Right. It's the same argument when people say, Well, how do you know that any kind of security education or awareness or any of it ever has any effect? And I think I'm -- you know, came from the world of marketing and advertising, so I'm going to say, well, you know, Ford Motor Company can't tell you that their Super Bowl ads are, quote, unquote effective. But they're not going to not do them. I mean --

 

David Moulton: Exactly.

 

Lisa Plaggemier: Because we know -- you have to think about the whole -- the whole picture, not just one tactic. And, you know, I would challenge any security professional who says, well, you know, I don't think this stuff is working. Well, then, okay. Do you want to stop? Like, do you think you should just stop messaging anything about security to any of your employees?

 

David Moulton: No. I think that it's --

 

Lisa Plaggemier: Well, no. Don't want to do that. Like, that sounds -- that sounds, like, dangerous. That sounds irresponsible. Well, then, okay. Do it. But do it well, you know. Do a good job at it. It's still worth doing well, even if you're not sure that it works.

 

David Moulton: So you've talked about some storytelling. You know, I'm biased myself. I'm a big fan of storytelling as an effective model for getting through to people. You've obviously used humor. Are there other types of interventions that you've noticed that have the long-term effects that -- that we're all going for with some of the training?

 

Lisa Plaggemier: I think we can do better at storytelling in different ways. So one of the projects we're looking at now is -- it's real simple. Every, you know, Friday night when I'm going through all the streaming channels trying to find something to watch and decide nothing looks good, I'll default and end up watching, like, Dateline or 20/20 or one of those things. And -- and every time I'm like, okay. It -- you know, she killed her husband. Like, what's new? Like, it's kind of the same old, same old in the world of physical crime. And maybe there's a little fraud thrown in there too. Like, where's my cybersecurity story? Where's my story that -- that -- because those of us who've -- who've come from the world of marketing or someplace else, you know, we've had a sideways path to get into cybersecurity. That's one of the things I think that makes you make the jump is you start to peel the onion and you're like, holy cow. This stuff is fascinating, and nobody knows what's going on. Like, most people are not paying any attention. And I'm even shocked. We do a lot of media interviews. We get a ton of earned media as a nonprofit, which is great. And I talk to a lot of investigative reporters, and I'm even surprised at how little they're paying attention sometimes, which is great. It's an opportunity for us. I get to -- I get to, you know, drip a few little hints at what's happening out there. And they're like, really? I should do a story on that. I'm like, Yeah, you should. So we're working with DHS, with -- with HSI, Homeland Security Investigations, because they investigate crime committed by people who are not in the country legally. And some of those crimes involve technology. It's -- I think you're hard-pressed -- any organized crime these days, you're hard-pressed to find things that don't involve technology in some way, shape, or form. So what we're going to focus on -- and we're also working with the Secret Service. So one of the things we're going to focus on are cases where you have -- we think it's going to be easier to communicate to the public in the 22 minutes you have in a 30-minute episode -- cases that -- that involve both physical crime and a physical aspect to the cybercrime. So things like EBT skimmers or, in the case of -- it was Operation Red Hook, the story that we're looking at with HSI that involves gift card scams -- things that have a physical, tangible -- because I think that's one of the hardest things about storytelling in cyber is it's intangible. You can't just show binary floating across the screen. People don't know what that means. It's a trope. It's not relatable. It missed -- it -- instead of demystifying this topic, it mystifies it even further, that -- and it also makes your audience feel stupid. I don't know what those 1s and 0s floating around the screen are or that green screen that you're showing me that you're scrolling through, you know, while somebody's -- there's a narrator telling the story. I don't know what that is, so I must be dumb, like, and I don't want to feel dumb when I'm trying to be entertained with a story. So we're going to really focus on the tangible aspects of some of these crimes and show how the technology has enabled those crimes, and -- and I think those will be stories that, you know, you can go to the fridge and get a Coke, and you're not going to lose track of the story. Like, it's got to be super easy to tell; and it's got to resonate very quickly if you're doing that kind of very digestible content.
There are other things out there that communicate about this topic where we're, I think, expecting a little more undivided attention from the audience. And, if we want to scale, then I have to be honest about how much attention we're going to get. You're -- you might be scrolling Facebook while you're watching TV. You might go to the fridge, go to the bathroom. You know, your kids might ask you for something. Like, it's -- it's got to -- it's got to resonate in a way that accounts for the fact that we don't have people's undivided attention.

 

David Moulton: Yeah. So both that, like, quick hit snackable bit but then something that allows you to follow through all, you know, 15, 18, maybe 22 minutes if it's a television show but also kind of stews in your head and makes you think about it. You know, as you were talking, we kind of need a -- an Ocean's Eleven. But instead of having the -- you know, the trapeze artist and the guy who's, you know, able to crack the safe and, you know, Brad Pitt who's always eating, right, like, it's just the hackers and what they're doing. Maybe you have, like, a little bit of affinity for them, but at least it shows, like, it's a business and what they're doing. So maybe that's, like --

 

Lisa Plaggemier: I mean, look at the shows lately. I'm watching -- currently watching Friends and Neighbors, and we watched Bad Sisters lately. There's a lot of shows lately that are getting you to really root for the bad guy. Like, you have huge empathy for the criminal.

 

David Moulton: The anti-hero.

 

Lisa Plaggemier: Yeah. It's -- it's pretty disturbing.

 

David Moulton: I think it started with Sopranos and Breaking Bad, where the anti --

 

Lisa Plaggemier: Yeah.

David Moulton: -- you know, the -- right, was the main character. And you're kind of into it, even if they were awful.

Lisa Plaggemier: Yep.

 

David Moulton: And then it showed that there was a way of telling the story from a different point of view, not necessarily always, like, the police drama where the -- you know, the law enforcement was chasing the bad guys. But you're kind of rooting for the bad guy to get away. So, Lisa, I want to take it back to -- to the report and if -- if there are security leaders out there who want to really use this report to -- you know, to drive the changes in their organizations that they know they need to. You know, where should they start? What's -- what's the, like, jump-off point for them?

 

Lisa Plaggemier: Well, if you're trying to find the report, you can go to staysafeonline.org. Or I would just Google Stay Safe Online, Oh, Behave; and that'll get you to the landing page. You can download the report.

 

David Moulton: We'll put that -- we'll put that URL in our show notes. So, if you're listening and you're thinking, I don't -- I don't think I can remember that, just check the show notes.

 

Lisa Plaggemier: I think a lot of organizations have very mature -- large organizations have very mature training and awareness programs. Maybe they're transitioning into human risk management. They're looking at more sources of data. They're using more behavioral science, like nudges to -- to get employees to do the right thing or to help them to do the right thing. They're using more solutions that help employees in the moment to make a good decision. And so I think that's all good stuff. But I think a lot of the security communications or awareness materials that we're using aren't making enough use of what advertisers know about behavioral science and, like, basic human psychology and being more persuasive and being better at storytelling because being really good at those things is -- is really, really hard. Not -- not every person out there can write a really good article. I used to teach a certification class for people in training and awareness. And I gave everybody an assignment once to use Dr. Cialdini's principles of persuasion. I explained the principles. And then the assignment was, here's a -- here's an FBI alert, you know, one of the alerts that they put out about a particular problem. And you want to tell your employees about this. The CISO has said to you, you know, here's this thing. We need to tell everybody. And you can't just post the FBI notice because nobody will read it. What is the title? Based on these principles of persuasion, which one are you going to pick to use? And how would you title the article you're going to write that talks about that topic? Because I'm a big old David Ogilvy fan. When you've written your headline, you've spent 70 cents of your advertising dollar. If you don't write a good subject line to your email or title to your article in the company newsletter, nobody's going to read the article no matter how much good stuff is in there. Everybody in the room chose to use the principle of authority -- because I told you so, right? The heavy-handed, you know, or doctors recommend, you know, that -- that principle of, like, well, we know better than you do. Nobody wants to be told that by an IT person. Even though it's true, people don't -- that just doesn't resonate. So the next time I taught the class, I had to say, you can pick from any of these except the principle of authority. That's off limits. That's not compelling enough. So -- so I think we still have a little ways to go in -- in using some of the advertisers' trickery and some of the persuasion techniques that are used in -- in the business world, in the consumer world to get us to buy products and do things. And we can do better. It's the story that we wrap it in, and it's the demographic that we target that makes the difference.

 

David Moulton: I suggest that, if you're curious, you should definitely go read this report. It's been fascinating to talk to you today, and I really appreciate that you took time out of your day -- I know you're really busy -- to share your insights. And, you know, just throughout the year, not just today on Threat Vector, you know, you're out there trying to make sure that the people who need this information are able to get it, and not only in a report but in video with humor, with story and really maybe to, like, raise up some of the myths so we can go, like, Wait. I see myself in that -- that thinking. I think attaching information in different ways allows different people to learn and change their behavior and -- and to start to be a little bit more safe, and that's awesome. I also like the fact that you've combined that, like, marketing and cybersecurity and behavioral science together for doing good. So thanks for coming on today and sharing with me about -- about the report and some of your thoughts and experiences.

 

Lisa Plaggemier: It was absolutely my pleasure. Thank you so much for having me.

 

David Moulton: That's it for today. If you liked what you heard, please subscribe wherever you listen, and leave us a review on Apple Podcasts or Spotify. Your reviews and your feedback really do help me understand what you want to hear about. And, if you want to reach out to me directly about the show, email me at threatvector@paloaltonetworks.com. I want to thank our executive producer, Michael Heller; our content and production teams, which include Kenne Miller, Joe Bettencourt, and Virginia Tran. Elliott Peltzman edits the show and fixes our audio. We'll be back next week. Until then, stay secure. Stay vigilant. Goodbye for now.
