Many of us cybersecurity professionals find ourselves wrestling with tasks that a typical person would not consider cybersecurity. One such task is managing our metrics.
At Lunarline, we routinely run into organizations that are trying to derive meaning from the piles of data they have. One organization may have a large set of metrics, none of which seems to improve its operations, while another has collectively thrown up its hands and does not measure at all. Neither extreme is desirable. At Lunarline, we use a process called M3, or Mission, Makings, Metrics, to develop meaningful metrics in a practical, effective manner.
Mission, Makings, Metrics (M3)
M3 traces its lineage to two other metric-development approaches. One is GQM, or Goal, Question, Metric, an approach to software metrics developed by Victor Basili of the University of Maryland, College Park. The second is GQIM, a derivative of GQM developed by the Software Engineering Institute at Carnegie Mellon University, which consists of Goal, Question, Indicator, Metric. All three approaches help professionals create metrics that provide value and help the organization achieve its mission. With M3, we drew on our experience applying these methods and tweaked the process to be simpler and more memorable, while still being complete.
Instead of starting with what to measure, let us start with why to measure.
The military term “mission” is defined by U.S. Army Field Manual 1-02 as, “The task, together with the purpose that clearly indicates the action to be taken and the reason therefor…a duty assigned to an individual or a unit.” Ask yourself, “What does our larger organization do and why? What does our cybersecurity organization do and how do those things enable the larger organization to accomplish its mission?”
Let’s say I support Gray’s Widgets, an ecommerce company, or GW for short. One part of GW’s mission is to provide an ecommerce web presence. A big part of my mission as a cyber defender is to ensure the confidentiality, integrity, and availability of that web presence. So, what should I be thinking about to ensure that my mission, and through it GW’s mission, is successful?
Let’s use another military term, “end state,” defined by Field Manual 1-02 as, “the set of required conditions that defines achievement of the commander’s [or the organization’s] objectives.” What is one end state that would achieve my mission? Well, one is ensuring that the training of personnel who manage our web presence’s assets is complete and effective. This is different from merely confirming that they completed training, because we also want to know whether the training made a difference. So, let’s state our mission this way: “Ensure GW’s web presence is supported by effectively trained individuals.”
Next, we will talk about how to collect the information we need to do this.
Once you know the purpose of the measurement, the next step is to identify where the source information resides and how you will get it. Remember, we want to see if the training is complete and effective, so we will likely need the training records of IT service providers, specifically:
- Personnel who should have received training
- Training completion dates
- Types of training received
Next, we want to see if the training made a difference, so we need some standard of effectiveness; let’s use patch management. In this case, we will need to know:
- Our assets, and who is responsible for maintaining them.
- The risk score from our vulnerability scanning tool.
During this step, we will look at the data, assess its accuracy and completeness, and correct any problems we see. We will also determine how we will collect this data.
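As a concrete illustration, the accuracy-and-completeness check can be as simple as comparing who should have been trained against who actually appears in the records. The names, field layout, and dates in this sketch are invented for illustration, not taken from any particular training system:

```python
from datetime import date

# Hypothetical training records exported from the provider's training system.
# The field names and values are invented for this sketch.
training_records = [
    {"person": "alice", "course": "Secure Web Ops", "completed": date(2024, 5, 1)},
    {"person": "bob",   "course": "Secure Web Ops", "completed": date(2024, 6, 12)},
]

# Everyone who should have received the training.
required_personnel = {"alice", "bob", "carol"}

trained = {record["person"] for record in training_records}
missing = required_personnel - trained  # gaps to correct before measuring

print(sorted(missing))
```

Anyone flagged as missing represents a data gap (or a training gap) to resolve before building a metric on top of the records.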
Next, we need to put these items together. In this case, let’s say we want to know whether having a particular type of training reduces the risk score. We will aggregate the vulnerability scanning risk scores by custodian and take the average score. (Caution: one extreme score can pull the average in a particular direction. If you want to eliminate this possibility, using a median – say, the 13th-highest score out of 25 assets – might be better.) Then we can correlate the number of days since the last training with the average risk score. A scatter plot, like the one below, can be helpful in determining whether there’s a correlation.
There are some keys to ensuring a successful metrics program that does not become overly burdensome, including:
- Keep a library. As metrics are developed, keep them in one place, such as a spreadsheet or a SharePoint list. Only operationalize the metrics you need.
- Keep it simple. Keep documentation short and to the point, so developing and maintaining metrics does not turn into a chore.
- Keep it transparent. Socialize each metric so that everyone shares the same understanding of its meaning and purpose.
- Keep it up to date. Retire metrics when they have outlived their usefulness.
- Keep it operational. Only capture metrics you plan to act on, and decide up front what those actions are for certain outcomes.
Lunarline stands ready to help your organization in building a sustainable and effective cybersecurity program that meets your agency’s needs and available resources. To learn more about us, visit Lunarline.com or contact one of our experts today.