The Hacker Mind Podcast: Hacking Human Behavior

Robert Vamosi
June 14, 2023

Could nudges and prompts like those from our Fitbits and Apple Watches be effective in reinforcing good security behavior as well? Oz Alashe, CEO and founder of CybSafe, brings his experience in the UK intelligence community to the commercial world, along with some solid science around what motivates us to make changes in our lives. It’s not just one-off phishing examples; it’s also about providing positive feedback, even gamification, to make things stick in future insider trust programs.

VAMOSI: Perhaps this will sound familiar. In the fall of 2014, Telegraph reporter Sophie Curtis received an email, a LinkedIn connection request from someone who appeared to work at her publication, so she clicked the link and thought nothing of it. In retrospect she realized she should have checked her company’s employee list before accepting the invite.  

I receive a lot of LinkedIn invitations, but I don’t generally respond if I don’t actually know the person on the other end, even if they say they work at my company. And we get so many invites like that, who bothers to check that it really did come from LinkedIn? In Sophie’s case, the unsolicited email was okay.

A short time later, Sophie received another unsolicited email. This time it was from someone she didn’t know at an anonymous whistle-blowing organization. This email included a doc file. As a reporter, Sophie sometimes receives documents from individuals as evidence of what they claim.  And the claim seemed believable at the time. So she opened the attached doc from the whistleblower. 

And as soon as she did so, Windows Defender started flashing security alerts. And the warnings kept piling up on her desktop screen. Worse, the malicious software within the doc file opened her webcam and took a picture of her looking all startled, and displayed that on the screen.

But Sophie wasn’t too concerned. You see, weeks before, she’d contracted with a pen testing company to try and trick her into opening a malicious document. And, after a few weeks of researching her behaviors, they succeeded. Sophie went on to tell this story in a subsequent Telegraph article, and I used this story in The Art of Invisibility, a book on digital best practices that I wrote with Kevin Mitnick.

So the question is, are social engineering training programs like this effective? Do one-off tests start to drive the corrective behavior that the IT department at the Telegraph and other organizations hope to see? Or are there better, more scientific methods for creating good security behaviors in employees?

In a moment we’ll talk to someone with experience in the intelligence community who is now using behavioral models, not quarterly phishing tests, to drive effective security change in corporate environments. I hope you’ll stick around.

[Music]

VAMOSI: Welcome to The Hacker Mind, an original podcast from ForAllSecure. It’s about challenging our expectations about the people who hack for a living.

I’m Robert Vamosi and in this episode we're going to learn some things about human behavior and how to nudge people to move toward better and safer security practices.

[Music]

VAMOSI: So I’m a CISSP and I’ve covered the security beat for over twenty years. I like to think that I’m smart enough to know when I’m being scammed. And, in the main, I’ve avoided a lot of the pitfalls of clicking this and that. Someone might say I’m too skeptical of the emails and texts and invitations that I receive. Perhaps I’m overdoing it. So that’s why I was eager to talk with my guest.

ALASHE: Hi, my name is Oz Alashe. I’m CEO and founder of CybSafe. CybSafe is a software company. We build software that helps automate behavior change, influences employee security behaviors, and reduces risk for organizations.

VAMOSI: Oz is a former UK Special Forces Lieutenant Colonel focused on making society more secure by helping organizations address the human aspect of cyber security. He has extensive experience and understanding in the areas of intelligence insight, complex human networks, and human cyber risk and resilience. He’s also passionate about reducing societal threats to stability and security by making the most of opportunities presented through advancements in technology. I met up with Oz at the RSA Conference in San Francisco and wanted to pick his brain about the human factor.

First, I’m curious about his transition from intelligence to commercial business.

ALASHE: Yeah, it's been incredible. I'm a former Army officer originally, that's in the UK, so I finished university, went to the Royal Military Academy Sandhurst, which is where we send our officers to train, then joined my unit, and then volunteered and served with UK Special Forces focused on counterterrorism. That journey for me spanned 17 years of service. In many ways, I feel like I'm still serving now, clearly in a very different context. Building a software company that is full of software engineers, behavioral scientists and data scientists is slightly different, but of course I worked with great software engineers and data scientists on the other side of the government fence in a previous life. So the transition has been exciting and interesting. It's amazing and wonderful to be focused on addressing a challenge that we know is going to make a big difference for a lot of people, and that's really what we're doing at CybSafe: we aim to fundamentally transform the way society addresses human cyber risk. It's nice to be at home a bit more, I guess. It's also nice to be in places that are slightly less unpleasant. But the reality is, I loved my service. I loved every second of it. I really appreciated the opportunity to work with some amazing people in some amazing countries.

VAMOSI: We hear a lot about insider threats, but it's a lot more complicated than just phishing.

ALASHE: Yeah, you're absolutely right. When you talk about the human factor within an organization, in many ways it breaks down into two. On one side, you've got the makers, masters and maintainers of technology: the people who write software code and build architecture and infrastructure. On the other side, you've got the majority of the workforce, people like you and me who use technology all day every day for our jobs, or to live our lives. And that often poses a rather large risk for most organizations. People talk about insider threats, but people aren't a threat; the threat is outside of the organization. In most cases, these are vulnerabilities and risks. There is an exception: a very small number of people in the organization who do things deliberately to bring harm to the organization. But most employees don't do that. Most of the rest, most of the "insider threats" in inverted commas that people are talking about, are actually accidental mistakes, or people trying to get their jobs done but ultimately doing things that bring harm to the organization by mistake. And so that's where we're focused: on the security behaviors of everyday people. And actually, there's a lot more to it, deep science and a fair bit of data that can be used to reduce that risk.

VAMOSI: So when you're talking about accidents, I would imagine the way people do things at home, the way they use their social media, and then they bring some of those bad habits into the workplace. Might that fall into the accidental category?

ALASHE: That’s exactly right. So there are security behaviors that I'm talking about, and they're not always accidental. Sometimes it's actually just ignorance: people don't know they haven't done something. So in many ways it might even be negligence, but the reality is, in most cases it's not intentional harm. And you're right to list the different types of behaviors. There are actually over 120 of them. We've cataloged every single security behavior and linked it to every single risk outcome. You've given us some examples: What do people do with their social media privacy settings? What passwords do people use, and indeed, how frequently do they reuse them across different accounts? Do people change the default password on their Wi-Fi router at home? Is their device set to auto-lock after a period of inactivity? These are all examples of security behaviors that can reduce risk for the individual and the organization.
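To make the idea of a behavior catalog concrete, here's a minimal sketch in Python of linking each security behavior to the risk outcomes it reduces. The behavior names and mappings are hypothetical illustrations, not CybSafe's actual database.

```python
# Hypothetical sketch of a security behavior catalog: each behavior is
# linked to the risk outcomes it helps reduce. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class SecurityBehavior:
    name: str
    description: str
    risk_outcomes: list[str] = field(default_factory=list)

CATALOG = [
    SecurityBehavior("auto_lock", "Device locks after inactivity",
                     ["data leakage", "privacy violation"]),
    SecurityBehavior("unique_passwords", "No password reuse across accounts",
                     ["identity theft"]),
    SecurityBehavior("router_password_changed", "Default Wi-Fi router password changed",
                     ["malware infection"]),
]

def behaviors_for(outcome: str) -> list[str]:
    """Return the cataloged behaviors linked to a given risk outcome."""
    return [b.name for b in CATALOG if outcome in b.risk_outcomes]

print(behaviors_for("data leakage"))  # -> ['auto_lock']
```

The point of the structure is the linkage Oz describes: pick a risk outcome first, then query for the behaviors that matter to it.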

VAMOSI: So is it a matter of better onboarding? For enterprises to say, you may do this at home, but now you're in a big organization?

VAMOSI: There are some companies that now specialize in social engineering the workforce. They’ll set up quarterly phishing campaigns, identify the employees who fell for the fake email, and recommend them for more training -- which the social engineering company probably wants to sell the company as well. Thing is, Oz, who has experience in the science of behavior modification, isn’t buying this.

ALASHE: So unfortunately, the security industry for some time has really leaned into this idea of training: if you just train people, they'll behave better. But the truth is, Rob, that training isn't the issue. What do I mean by that? It's not that training is not important, but there's this kind of mantra, this kind of drumbeat of trick, train and entertain. Trick: I'm going to send you some phishing simulations, and I'm gonna effectively trick you and try and get you to do something wrong, and maybe you'll learn if I do that to you. Train: I'm going to give you some training and onboarding, I'm going to get you to do some elearning. And entertain: I'm going to get you to watch some funny videos, I'm going to make you laugh, all of these things in the hope that they will change behavior. It's just hilarious when you consider that there's a science behind behavior change, and trick, train and entertain isn't the answer to this.

VAMOSI: Wow. That’s a powerful mantra. Trick, Train, and Entertain. And that’s what the social engineering companies want to do.

ALASHE:  There are all sorts of industries that are focused quite heavily on influencing people's security behavior, and they realized some time ago that even though training might be a very small part of it, it's not the answer. Most of us need help.

[MUSIC]

VAMOSI: These companies don’t really engage in behavior modification. It seems like they just want to maintain the status quo. It doesn’t really go deep enough to make a change in real life.

ALASHE: To give you an example in a completely different sphere: for most of us, my watch tells me when to stand up. It literally buzzes and tells me when to stand up. Your device, if you're going from A to B, would probably give you some directions that included visual cues, and maybe even physical cues in terms of haptics. All of these things are examples of nudges and prompts that can be applied to influence your behavior. We need to be doing the same in security.

VAMOSI: So think about the Fitbit example. You get a stimulus when you meet your step goal -- your watch will vibrate and display balloons or confetti. You did it. With security, we need that positive feedback loop as well: the idea that we respond to nudges to increase our security posture. How would a security system know when and how it needs to provide that nudge? Is it looking at keystrokes?

ALASHE: So are we monitoring behavior? Yes, but we don't monitor keystrokes, because in essence a keystroke really only helps you determine whether somebody's typing and what they might be typing. What we're interested in, as I said, are those security behaviors, 120 of them, that are related to very specific risk outcomes, the risk outcomes that most security professionals in most organizations want to avoid. And so those behaviors include, as I said, things like: have people set their devices to auto-lock after a period of time? Do people use decent passwords? Are people using passwords that have been compromised? Are they visiting sites they shouldn't be visiting, trying to download things from risky sites, clicking on links that they shouldn't? All of these are behaviors that many of us exhibit very naturally in our everyday work and life. And therefore we can get those behavior event intelligence signals, that's what we call them, behavior event intelligence signals, through data integrations within the organization.
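Here's a rough sketch, again in Python with invented event names and integration sources, of what turning raw data integrations into behavior event intelligence signals might look like: only events that map to a cataloged behavior are kept, and raw activity like keystrokes never enters the picture.

```python
# Hypothetical mapping from raw integration events to behavior signals.
# Only events tied to a cataloged security behavior are retained.
from datetime import datetime, timezone
from typing import Optional

EVENT_TO_BEHAVIOR = {
    ("mdm", "screen_lock_disabled"):   ("auto_lock", "lapsed"),
    ("sso", "breached_password_used"): ("unique_passwords", "lapsed"),
    ("mdm", "os_update_installed"):    ("patching", "exhibited"),
}

def to_signal(source: str, event: dict) -> Optional[dict]:
    """Convert a raw event to a behavior signal, or None if it's irrelevant."""
    key = (source, event.get("type"))
    if key not in EVENT_TO_BEHAVIOR:
        return None  # not a tracked behavior; raw activity is discarded
    behavior, status = EVENT_TO_BEHAVIOR[key]
    return {"behavior": behavior, "status": status,
            "user": event["user"], "at": datetime.now(timezone.utc).isoformat()}

print(to_signal("sso", {"type": "breached_password_used", "user": "alice"}))
```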

VAMOSI: This might seem like splitting hairs, but what Oz does is not classic monitoring. And yes, behavior event intelligence signals sounds like a marketing term -- it’s not. For example, a behavioral event interview when hiring is designed to get a very detailed behavioral description of how a person goes about doing his or her work. Something similar is happening here.

ALASHE: One of the reasons we are careful with the language of monitoring is, one, most people don't want to be monitored, but two, we'd rather think of it as help. You know, we give off, and indeed provide, all sorts of data sources in our use of technology every day, and that can be used to provide us really personalized guidance. That's less about monitoring and more about using data sources to support people who need it, because most of us can't remember everything all the time.

VAMOSI: So this sounds like more of a human security audit. 

ALASHE: Yeah, that's right. It is more of an audit process in that sense, I guess. The good thing about audit processes in general is they can be configured to whatever you think is important. You know, in most cases, an audit is something where somebody determines what you should do and should have, and they come and check. And in that sense, that's exactly what the security team can do, because it's different for every organization. The risk outcomes you want to avoid will determine which behaviors are important to you, and then those are the ones you should focus on.

[MUSIC] 

VAMOSI: Oz mentioned that his work is based on scientific fact. There are models from which he derives his analytics. And I'm fascinated by behavioral science.

ALASHE: Indeed, there's a fair bit of science behind what it takes to influence and change behavior, and there are lots of behavior change models. One of the ones that we like is called COM-B: it stands for capability, opportunity, motivation, behavior. It's effectively a scientific model that looks at the different things that are needed in order to influence human beings' security behavior. And so we built a catalog, like I said, a security behavior database that literally explains every single security behavior and its risk outcomes, and we help security professionals identify the risk outcome they want to avoid, so identity theft, data leakage, all sorts of privacy violations. They identify the risk outcome they want to avoid and then look at the specific behaviors that relate to that risk outcome. And then we provide them a digital behavior change intervention to address it. Every single employee has access to our app on their mobile device, or indeed the desktop app on a desktop device, and that can provide them the alerts, the guidance, the nudges, the prompts, and indeed the training they have to complete to be compliant as well. All of this is provided through a software solution that is personalized to you as an individual, because the reality is you sometimes need nudges that other people don't need, or you need them at different times, or delivered in different ways; you respond better to some things than others, and our platform learns all about that.
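As a rough illustration of the COM-B idea, the sketch below picks an intervention by targeting whichever of capability, opportunity and motivation is weakest for a given person and behavior. The scores, thresholds and intervention names are assumptions for illustration, not CybSafe's actual model.

```python
# Illustrative COM-B-style intervention selection: target the weakest of
# capability, opportunity and motivation. Scores (0-1) are assumed inputs.
def choose_intervention(capability: float, opportunity: float, motivation: float) -> str:
    weakest = min(
        [("capability", capability),    # low knowledge/skill -> teach
         ("opportunity", opportunity),  # low chance to act   -> prompt, cut friction
         ("motivation", motivation)],   # low drive           -> feedback, gamification
        key=lambda pair: pair[1],
    )[0]
    return {
        "capability": "short just-in-time training",
        "opportunity": "nudge or prompt at the moment of action",
        "motivation": "positive feedback and progress tracking",
    }[weakest]

print(choose_intervention(capability=0.9, opportunity=0.4, motivation=0.8))
# -> 'nudge or prompt at the moment of action'
```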

VAMOSI:  So digital nudges can lead to positive behavioral changes?

ALASHE: So it's not exclusively digital at all. In fact, again, it comes from behavioral science, and there are lots of different models. COM-B is one fantastic model. Another one that we subscribe to and also use for some sorts of interventions would be BJ Fogg's behavior model, B=MAP, looking at motivation, ability and prompts. These things are really quite important because they help organizations think through the intervention and all of the things that need to be in place in order to drive behavior change. For most of us, our behavior is actually an equilibrium between driving forces and restraining forces. You have to look as much at why people aren't doing things as at what you're going to do to drive them, and try to remove those frictions; they're called barriers in academic circles. We publish all of our research. We conduct a lot of open source research with a lot of academic institutions, both in the US and in the UK. And as I said, we build all of this, these interventions, into our platform.
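Fogg's B=MAP model says a behavior occurs when motivation and ability are jointly high enough at the moment a prompt fires. A toy version, with an assumed "action line" threshold, might look like this:

```python
# Toy version of BJ Fogg's B=MAP: a prompt only triggers the behavior when
# motivation x ability clears the "action line" (the threshold is an assumption).
def behavior_occurs(motivation: float, ability: float, prompted: bool,
                    action_line: float = 0.5) -> bool:
    return prompted and (motivation * ability) > action_line

print(behavior_occurs(motivation=0.9, ability=0.8, prompted=True))   # True
print(behavior_occurs(motivation=0.9, ability=0.2, prompted=True))   # False: too hard
print(behavior_occurs(motivation=0.9, ability=0.8, prompted=False))  # False: no prompt
```

The takeaway matches what Oz says about frictions: if ability is the weak term, no amount of prompting works until the task is made easier.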

VAMOSI: So there are actual scientific models on how to go about effecting behavior change. It’s not Tony Robbins in a late-night infomercial selling you a methodology; this is actual research and analysis at work.

ALASHE: So we start to see some really nice correlations, really helpful correlations, between certain types of behaviors. One of the things that we want to be able to help organizations do is to preempt risk before it occurs, and then to predict what type of intervention is going to be most effective in what types of circumstances. So we can see linkages: people who generally have certain password practices, who have taken steps to implement or adjust their privacy settings on social media, and who have also completed some form of training are more likely to respond to an alert received within a certain period of time, up to 67% more likely in some cases. And we see a 67% increase in the number of people who change the default password on a Wi-Fi router if you provide the intervention in the right way at the right time. So these things are helpful. And that begs the question: what's the right way and the right time? How do you actually learn that? We use a series of data points to optimize our models, and we build a variety of behavioral models. But interestingly, in some cases, even if you don't have access to that data, you can literally allow people to choose for themselves: what time would you like to be reminded? We see a 23% increase in people actually taking the activity because they chose what time to be reminded about it. Often in security we give people the right information at the wrong time.

VAMOSI: So Oz identifies a bunch of behaviors, and that leads to predictable outcomes, specific risk outcomes.

ALASHE: So risk outcomes are generally things that organizations are trying to avoid. And again, quite often when we talk about security, we mix up our language, we mix up words like risk and vulnerability and threat. Ultimately, they're not necessarily the same things, even though people often use them interchangeably. In the case of risk outcomes, the things we're talking about are things like data leakage, identity theft, malware infection, privacy violation, even physical damage. These are all examples of risk outcomes that most security teams are keen to avoid. The reason it's important to distinguish between the different types of risk outcomes is that different types of behaviors might be relevant for different types of outcomes.

VAMOSI: So of those, data leakage is kind of an obvious one. But what is one of the more subtle outcomes that you identify?

ALASHE: So one of the more subtle, and often less focused on even though it sounds like it should be, is privacy violation. Many organizations, quite understandably, recognize the need to protect data, and they also recognize the importance of privacy, whether that's the privacy of their employees, the privacy of customers, etc. But in many ways there are things that we do that lead to a violation of privacy accidentally; or, indeed, maybe the action wasn't accidental, but the outcome was. And remember, we're talking about risk outcomes, so we're talking about the likelihood multiplied by the impact: how likely is it that something's going to happen, and what's the impact if it happens? Most organizations, if they really are managing this properly, will be interested in making sure that they focus on the highest risk, rather than just the most common thing.
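The formula Oz lands on is the classic one: risk = likelihood x impact. A quick worked example, with entirely made-up numbers, shows why a rare outcome like a privacy violation can still top the list over the most common event:

```python
# Worked example of risk = likelihood x impact (all figures invented).
outcomes = {
    "phishing click":    (0.80,   5_000),   # common, modest impact
    "data leakage":      (0.30, 100_000),
    "privacy violation": (0.10, 400_000),   # rare, severe impact
}

for name, (likelihood, impact) in sorted(
        outcomes.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True):
    print(f"{name:18s} expected loss = ${likelihood * impact:>9,.0f}")
# privacy violation ($40,000) outranks the far more common phishing click ($4,000).
```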

VAMOSI: So as an international company, how different is, say, the emphasis in the United States versus the UK or any other country you're in?

ALASHE: It's a great question, because we're a global business. Our platform has been used in 22 countries, and it's available in 24 different languages. And so we do see some differences across geographic regions. The majority of our customers are either in the UK or in the US, and actually those two markets are very similar. They're both mature security markets with relatively well established regulatory bodies and architecture. That means, albeit in some industries more than others, that across those nations there's a real understanding of the importance of looking after data and managing risks as they relate to networks, systems and, of course, devices and data. So we see quite similar practices. For the last few years, and we're in the process of doing it now, we've run a piece of research with the National Cyber Security Alliance here in the US, the body that brings security awareness month to the country every year. It's called the "Oh, Behave!" report, and it looks at security behaviors in both the US and the UK. This year we're adding France, Australia and Canada. And in those cases, we do see some slight differences between the drivers behind behaviors for different individuals.

VAMOSI: I wonder if Oz has seen any difference between more technologically advanced countries and those that are new to the digital world. What is that gap like?

ALASHE: It is actually rather large, is the truth. So we have been used in countries in the GCC, the Gulf Cooperation Council, and indeed in various parts of the continent of Africa as well. In those cases, where clearly they've bought the CybSafe platform and are looking at addressing this type of risk, there's a real understanding that the human aspect is important, but that understanding is not necessarily universal across the nation. And indeed, because of the way our platform is configured, organizations can start small and grow into it. Many organizations are simply trying to get to grips with providing their people with training, and that's okay as a start point, but as I said to you in the beginning, training doesn't necessarily change behavior. And if you're not changing behavior, you're not reducing risk. So they've got some way to go.

VAMOSI: I assume that culture in different parts of the world has something to do with that as well. Cisco, for example, used to publish a report every year about security risks around the world and it would talk about how shoulder surfing is unavoidable in this country, but unheard of in another country based on cultural norms. Does that factor into this analysis?

ALASHE: It does, it does. Our platform is being used in some countries where there is, dare I say, a much more built-in culture of deference to authority, where people will listen to authority and really act upon the guidance or direction from authority, in the workplace in particular. That's maybe interesting in that you see high completion rates when it comes to compliance, but of course the challenge is that you also see low challenge rates: when asked to do things they don't necessarily understand, the workforce just kind of does it and gets on with it. And unfortunately, that's not necessarily great. It also means criminals obviously find their lives a little bit easier too, because if you're impersonating somebody in a position of authority, we see that those impersonations are much more likely to be followed up in those cultures, because people have this authority bias, which is the scientific term.

[MUSIC] 

VAMOSI: So as a security audit, behavior monitoring is important to start to correct how your employees are interacting with your network on a day-to-day basis. Let’s talk about malicious intent.

ALASHE: It is not an area that we actually focus on hugely right now, because, as I said, insider threat, as many people refer to it, or insider risk, is a term that gets overworked. The phrase that I really like, and that I'm seeing used more often, is insider trust programs. But you're right, there are some actors within the organization, some people who are members of your team, who unfortunately intend to do harm to the organization. Ultimately, everything that we do is underpinned by a behavioral model and a framework that really just needs to be contextually relevant and focused. And so that's what we've done at CybSafe. We've developed the security behavior database, a behavioral model that basically looks at all of those behaviors that most people exhibit all day every day without realizing they're introducing risk to themselves and their organization. We might adjust that to look at the more malicious side and the more deliberate bad actors at some stage in the future, but it's not something that's on our immediate roadmap.

VAMOSI: I agree with Oz that the term insider threat is ill defined and gets bandied about far too often to cover a lot of things. For example, there are degrees of maliciousness from someone who’s inside the organization. And there are misinterpretations of actions as well.

ALASHE: They might not necessarily mean it as severely as it actually is. In some cases, they're just taking information because they think they have the right to have it, and they don't; it's not theirs, it belongs to the organization. Maybe they're leaving the job, maybe they're disenfranchised or upset with something, and they're taking information that's not theirs. That would be a really classic example, quite common in many organizations, if not most. But there are of course other actors as well who are working on behalf of another actor, and that of course is even more damaging and challenging for most organizations, whether that's people working for nation states or even just competitors. That, as we know, is a really key challenge. So these are all examples of insider threats, as many people describe them. But remember what I said: most are insider risks, and then there are the people who mean to do harm. The security behavior models for those individuals are actually different, and this is one of the reasons why this is not something we're focused on as a platform right now, although we are very focused on the research, because we're interested in those linkages. As I mentioned, I gave you a list of examples of things people do that introduce risk to the organization, in essence by mistake: they haven't changed the default password on their iPhone; they're using devices that aren't running the latest software; they're sharing devices with people, maybe sharing their credentials, or using credentials that have been compromised without meaning to. Those are all mistakes, and that's one type of model. But the ones you're talking about are people who are intentionally doing things to harm an organization, and in those cases it's actually quite limited what you can do about it. You can do things, but they're limited.

VAMOSI: So does the scenario of a phantom employee come up? Someone who created user access and gave themselves privileges, but who isn't, in fact, a real employee?

ALASHE: Not in our particular problem set; it's not a use case that we're focused on at CybSafe. Again, we focus on that more common and actually more numerous use case, which is people who are using genuine, real credentials. They have legitimate access, sometimes maybe too much access, but it's legitimate. And then, unfortunately, they introduce risks to the system through things that they do.

VAMOSI: Let’s flip the tables. Does Oz look at outside influence? And by that I mean someone extorting a person in the company to behave against their wishes.

ALASHE: Yeah, the suggestion of some sort of psyops is probably pitching it too high, industrial espionage. We look at outside influence in the context of the challenges that most of us face, or indeed the context that most of us operate in, on things like social media. We work alongside a number of academic institutions, some of whom are looking at individuals who have been influenced poorly; they really are surrounded by many people who are not taking security practices as seriously as they should. But what we don't do with our software is look at individuals who have been turned. In my previous life, I happen to have had a national security and counterterrorism background, and that was something we would look at quite closely. But actually, in most organizations, our experience is that the bigger losses and threats to the organization come from people who are under these more everyday sorts of influences.

VAMOSI: So is the idea of psyops campaigns to get sensitive data from organizations a bit too romanticized by the media?

ALASHE: It's not romanticized; it exists. But it's actually a much smaller challenge, in terms of the percentage of people involved, and sometimes even in terms of the percentage of financial impact, than for most organizations. And for many organizations, it's just not a challenge at all. If you're a large organization with thousands of people, the reality is that that kind of risk always exists somewhere in the business, particularly if you have linkages to government or hold what would be deemed sensitive IP.

VAMOSI: What about a ballpark figure for people who just don’t know what they’re doing?

ALASHE: Oh, gosh, that's a really good question, because that varies from case to case, and even with the context, in terms of the way that you ask the question. So in the UK, to give you an example, most large organizations, so organizations with more than 1,000 people, have some form of security awareness training program in place. But again, remember what I said at the very beginning: training, and therefore awareness of something, doesn't equal caring about something. Lots of people know about things but don't necessarily take the right steps. And so there's a bit of a danger in looking too much at what people know, because it's not the same as how people behave. I know I shouldn't eat as many biscuits as I'm probably going to eat this week; I'll do it anyway. Most smokers know that they shouldn't smoke, and they do it anyway. I know I should walk rather than take the escalator, but I sometimes take the escalator anyway. So the reality is that knowledge, and therefore the percentage of people who do or don't know, and behavior are not as correlated as we would think.

[MUSIC]

VAMOSI: It’s all well and good to have a scientific model for human behavior, but there’s that one variable: the human. We’re not predictable. We’re not infallible. So I wonder what the metrics might be for the work that Oz is doing.

ALASHE: One of the things that we're really passionate about is helping organizations see whether what they're doing is working. So one of the real unique points of the CybSafe platform, and we actually make this freely available to everybody, is that they can see whether the interventions they're applying, the nudges and the prompts, are actually leading to any form of behavior change, and being able to measure that data helps give them a really clear view of the risk reduction. The reason that's important is that we need to get a feel for this at scale. What does it look like across sectors? What does it look like across organizations of different sizes? What does it look like in different geographic regions? And we know, for example, that the insurance community is really quite keen to understand risk better, but there's very little actuarial data about human risk; instead there are metrics about how many people have completed their training, how many people have done sufficient simulations, and how many people have attended an event of some description or watched something that might be security related. And again, as I said, those things don't change behavior.

VAMOSI: So I asked Oz, if he’s seen improvement in behavior, is that something that he can quantify in a ballpark way?

ALASHE: Yeah, absolutely. So we see between a 27 and 67% increase in behavior change for certain behaviors, with the right interventions for individuals in the right context. It takes time; behavior change doesn't happen overnight. That's something that's absolutely key. And I gave you the example of smart devices, wearable devices, altering the number of steps that you take by simply giving you nudges and prompts and maybe leaderboards or various other things that help influence your behavior. It's the same with security. This takes time, but ultimately we can quantify it. And this is the key thing that most security professionals are waking up to: gone are the days when the human aspect is this kind of nebulous gray area of let's train people and hope it sticks. There are still organizations doing that, but they don't need to. They can be much more deliberate, much more data-driven, and much more quantifiable when it comes to risk.

VAMOSI: This makes me wonder what industries are interested in effecting behavioral change for security purposes. And the answer is not that surprising.

ALASHE: So without doubt, for us, we've had the most traction in industries that are highly regulated and often quite mature when it comes to security. So banking, financial services and insurance: they are key industries, not just for us, but actually, unsurprisingly, when you look at the security landscape, they've made the most progress. But we're also seeing a huge amount of traction from customers who are having really great success in the legal sector, and we see the same in the software industry. And unsurprisingly, in Europe there is legislation like GDPR that ultimately drives organizations to be able to demonstrate that they've taken the appropriate steps and measures. That's increasingly the case here in the US, and in other parts of the world too. So this landscape is changing quite a lot, and industries are adjusting to reflect that.

VAMOSI: Are there verticals that are more likely to see that improvement than others?

ALASHE: There are certainly verticals that have further to go than others as far as their security behaviors. What do I mean by that? Well, like I said, if you think about it, financial services, government, the public sector, specifically security, law enforcement and the intelligence community: these are groups of people who have known for quite a long time that there are bad people trying to trick them into doing things. And so even though there are clearly improvements that can be made in the way they look after networks, systems and data, ultimately those people are much more alive to the threat and much more willing to go the extra mile to protect information and data, and to assume that not everything they receive is true or good. But if you look at the charity sector, maybe the education sector, and we definitely see it in media and sports, it's taken a bit of time, but they're ultimately moving more aggressively into more mature ways to address their security risks. And when it comes to the people component, that still lags behind; the truth is most companies start with investment in the technology and the process and then turn to the people. And so in those particular sectors, there's a fair bit of work to do.

