
Measure and manage human cyber risk with Secure Practice

At Secure Practice, we have always believed that understanding people is key to managing human risk. But security awareness and training methods have for too long been disconnected from people, and from an understanding of what makes us all human.

Consider the metrics which are so often relied upon to assess security awareness among employees in an organization:

  • How many people have completed the mandatory microlearning course(s)?

  • How many people were tricked into clicking the simulated phishing link?

Then imagine how you can achieve 100% microlearning completion and 0% phishing simulation clicks without becoming an inch more secure than before.

In fact, a "perfect score" on poor metrics may be harmful to your organization's security.

People click through courses only to stop the nagging reminders, and stop clicking links in emails altogether because they are afraid of failing a test. Did we all end up here because no better options exist?

Why knowledge and interest matter

From two decades of research, including some of our own, we drew up a model to make security more human. And to make human risk more manageable, we had to make it more measurable.

First, our human risk model is built on two dimensions, namely knowledge (digital risk understanding) and interest (security affect), and it looks like this:

Our main idea is that people are different. While we cannot model all individuals in the world, we can still differentiate between five broader groups:

  • Positive interest / high knowledge: Sometimes referred to as champions, these are people who know how to stay secure, and are willing to spend the effort needed to do so in practice.

  • Positive interest / low knowledge: These people may come across as a bit naïve or uncertain, but at least they are willing to learn! Cherish their interest and show them how, and their risk understanding will grow on every encounter.

  • Neutral interest / neutral knowledge: The gray fuzzy blob in the middle represents people who know and agree that cyber security is important, but who are apathetic or make excuses not to do anything about it themselves.

  • Negative interest / high knowledge: These people are identified by actions to circumvent compliance requirements, even though they know how to stay safe on their own. Their expertise may, however, cause risky situations outside their limited field of view when less competent people follow their lead.

  • Negative interest / low knowledge: They do not know how to stay safe, but do not care about it either. Despite their recklessness they have made it this far, but as long as their affect is negative, it is also difficult to help them improve.

While none of these groups perfectly captures every factor that has an impact on security behavior, we can use the model to understand our next move.
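To make the grouping concrete, here is a minimal sketch in Python (not code from our platform) of how hypothetical interest and knowledge scores, each normalized to the range -1 to 1, could be mapped onto the five broad groups. The score ranges, the neutral band and the function name are assumptions made purely for illustration.

```python
def classify(interest: float, knowledge: float, band: float = 0.25) -> str:
    """Place one person into one of the five broad groups.

    Assumes `interest` and `knowledge` are normalized scores in [-1, 1];
    `band` defines the neutral zone around zero. Illustrative only.
    """
    if abs(interest) <= band and abs(knowledge) <= band:
        return "neutral interest / neutral knowledge"
    if interest > band:
        return ("positive interest / high knowledge" if knowledge > band
                else "positive interest / low knowledge")
    if interest < -band:
        return ("negative interest / high knowledge" if knowledge > band
                else "negative interest / low knowledge")
    # Simplification: neutral interest with non-neutral knowledge is folded
    # into the middle group in this sketch.
    return "neutral interest / neutral knowledge"


print(classify(0.8, 0.7))    # champion territory
print(classify(-0.6, -0.4))  # negative interest / low knowledge
```

In practice the boundaries are of course fuzzier than a fixed cut-off, which is exactly why the middle of the diagram is drawn as a gray blob.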

Consider now that the size of each circle in the diagram represents the size of that group among your colleagues. How large do you think each group would be, relative to one another?

The case for better human risk metrics

This brings us to our next question: would it be possible to measure how many people there are in each of our five groups above?

Measuring risk is difficult enough already, but if so many security awareness programs hinge on nothing more than the poor metrics mentioned above, surely we can do better.

One of our greatest inspirations has been a multi-disciplinary team of researchers in London. Not because they discovered the ultimate formula for measuring human risk, but because of how they give us perspective to see humans in security as simply more human.

Users are not the enemy, and people are not just the weakest link in the security chain. On the contrary, people are simply human, and to support them better as security professionals, we need to understand why people do what they do.

Against this backdrop, we designed a way to collect data in relation to a number of risk factors.

Each risk factor is in turn described through data on people's knowledge and interest.

Take two-factor authentication, for example. If people know what it is, and they adopt it on services wherever possible, they are likely to have both high knowledge and positive interest.

If, however, people don't know what two-factor authentication is, or don't believe they are able to get it configured themselves, they will need a very strong interest to overcome the skills barrier. The more positive the interest, the higher the likelihood that they will in fact manage to do so.

If people instead have a negative interest towards spending their time and energy on security, they will only make very low-cost efforts to stay safe. And that makes it less likely that they will do something often considered quite difficult, such as enabling two-factor authentication.

While the ultimate measure of behavior is actual behavior, we believe it is possible to measure the probability of behavior based on knowledge and interest factors.

And when we are talking about probability, we are also arriving at the concept of risk, and risk metrics.
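To illustrate the idea, here is a minimal sketch of how a per-person, per-factor likelihood of behavior, and a corresponding risk score, might be derived from knowledge and interest data. The field names, score ranges and weighting are invented for this example and are not the formula used in our platform.

```python
from dataclasses import dataclass


@dataclass
class FactorMeasurement:
    factor: str        # e.g. "two-factor authentication"
    knowledge: float   # 0.0 (no understanding) .. 1.0 (high understanding)
    interest: float    # -1.0 (negative affect) .. 1.0 (positive affect)


def behavior_likelihood(m: FactorMeasurement) -> float:
    """Estimate the probability of secure behavior for one risk factor.

    Assumption: knowledge lowers the skills barrier, while positive
    interest supplies the effort needed to overcome the barrier that
    remains. Negative interest contributes nothing.
    """
    skills_barrier = 1.0 - m.knowledge
    effort = max(0.0, m.interest)
    return min(1.0, m.knowledge + effort * skills_barrier * 0.5)


def risk_score(m: FactorMeasurement) -> float:
    """Treat risk simply as the complement of likely secure behavior."""
    return 1.0 - behavior_likelihood(m)


mfa = FactorMeasurement("two-factor authentication", knowledge=0.3, interest=0.8)
print(f"{mfa.factor}: likelihood {behavior_likelihood(mfa):.2f}, "
      f"risk {risk_score(mfa):.2f}")
```

Even a toy model like this reproduces the pattern described above: with low knowledge, only a strongly positive interest lifts the likelihood of secure behavior, while negative interest leaves the risk dominated by the skills barrier.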

What you can measure, you can manage

While metrics are nice, actionable metrics are even nicer. Therefore, we did not want to only give you nice visualizations of risk data.

We also wanted to provide a way to do targeted interventions towards all these different people, and groups of people.

But instead of telling your team the individual identities of people in each group, we have built a much more powerful feature.

By extending our dynamic targeting groups, you may now target learning content, including phishing simulations, e-learning courses, surveys and newsletters, based on people's interest and knowledge.

In addition, we have built a way for you to target content based on specific risk factors, and how people measure against them.

For instance, if your organization's human risk metrics show that SMS scams is a high-risk topic, you can create a newsletter (or smishing simulation) to show what SMS scams look like, and send it only to users who have, say, a risk score above medium on SMS scams as an overall topic. You can also narrow down your audience further, to people with a high (or low) degree of interest in learning, as in the sketch below.
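A minimal sketch of that kind of targeting could look like the following, where the user records, the topic name and the thresholds are hypothetical and only illustrate the filtering logic, not our product's API.

```python
# Hypothetical user records with per-topic risk scores (0..1) and an
# overall interest-to-learn score (-1..1).
users = [
    {"email": "ada@example.com",   "risk": {"SMS scams": 0.7}, "interest": 0.6},
    {"email": "bent@example.com",  "risk": {"SMS scams": 0.3}, "interest": 0.9},
    {"email": "clara@example.com", "risk": {"SMS scams": 0.8}, "interest": -0.2},
]


def target_audience(users, topic, min_risk=0.5, min_interest=None):
    """Select users above a risk threshold for a topic, optionally narrowed
    further to those with at least a given interest to learn."""
    audience = [u for u in users if u["risk"].get(topic, 0.0) >= min_risk]
    if min_interest is not None:
        audience = [u for u in audience if u["interest"] >= min_interest]
    return audience


# Newsletter about SMS scams, sent only to users above medium risk who also
# show a positive interest to learn.
for user in target_audience(users, "SMS scams", min_risk=0.5, min_interest=0.0):
    print("send newsletter to", user["email"])
```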

The road ahead

Our team is super excited to show you in practice how our platform allows your organization to measure and manage human cyber risk. While our focus has been on building and testing, we will now spend a lot of effort in the coming months to demonstrate practical use cases, create guidance material, build learning content tailored for targeted audiences, and of course continue to collect real world feedback.

As with any innovation, there are still some open questions, new ideas to explore, and potential research to do in an operational environment after launch. While we have worked with key partners – including scientific researchers – to get this product iteration ready and working, we are in no way ending our development here.

Our next steps on the technical side include even more automation and intelligent suggestions, integrated into a workflow for managing human risk.

PS! We also recommend having a look at the work we have done to ensure privacy and trust for everyone using our services, in our article "Privacy considerations for human cyber risk measurements".

Please share your thoughts, ideas and experiences along the way. We are always happy to talk security and people!
