Dr. Nicole Tschierske

6 Principles to Make the Most Out of KPIs and Metrics

KPI stands for Key Performance Indicator, and KPIs are the language of business: they’re the measurement tools used to evaluate performance. When I recently interviewed Mark Graban for the Better Work podcast, we explored how to use KPIs and metrics in a meaningful way. Read on to learn how to use them to measure and drive your company’s success.

(You can also 🎧 listen to the episode or 📷 view the bite-sized version of this article.)

1) Create an environment where people want to be part of improvement initiatives.

I’ve coached teams doing Lean projects and problem-solving in different areas of central business functions (science, finance, and supply chain…). At the beginning of these projects, people tend to be hesitant to engage. They fear I’m going to dictate the solution to them. But as soon as they notice that I’m just teasing out their own thoughts and helping them structure their own expertise, they open up and enjoy the process a lot. Because they understand I truly want to listen to them, and that their expertise and their insight are valued. This creates a climate where people can think creatively, where they go above and beyond and are engaged with the work.

Creating that kind of environment, Mark says, starts with leadership. People do want to improve and do want to do good work. Sometimes leaders need to get out of the way, stop imposing change on the workforce and engage people. Instead of doing it to them, do it with them (it being improvement).

Don’t mandate improvement, even when it comes to metrics.

Sometimes the things that are easiest to measure aren’t the most meaningful to the customers or to the frontline employees. It’s good practice to ask the teams what good process measures would be and which measures they think they can influence through their improvement activities. This is much more collaborative than simply imposing any of this on people.

2) Don’t obsess about metrics and KPIs, but focus on processes and results.

We need to focus on both process and results. It’s the process that leads to results. And more broadly, you can describe systems as a collection of processes.

The most harmful, the most dysfunctional environment is when people are being pressured to hit numbers that the system is incapable of achieving.

That leads to dynamics where people might simply fudge the numbers to comply. We don’t want people being pressured into being deceitful. Instead, leaders and staff need to collectively take responsibility for improving the processes and the system that creates those numbers and results.

Numbers don’t tell you everything.

Mark wrote a book about metrics: “Measures of Success”. The methods he describes there are meant to be used together as part of a Lean management system. Numbers can’t replace going to the shop floor or the workplace (the “Gemba”). Leaders need that connection to frontline employees, to see and to hear from people firsthand.

But having a set of metrics is still very important. Unfortunately, often the way leaders react to the latest data point ends up being very counterproductive and distracts people from the real improvement work that would actually improve performance. Looking at the numbers shouldn’t trigger action, but …

Metrics and KPIs tell you where a deeper investigation is needed.

Has performance changed in a statistically meaningful way? Or is it just fluctuating around an average? Using process behaviour charts can help you discern if what you’re looking at is noise, or if it’s a signal that says, in a statistically meaningful way, “performance has gotten significantly worse, or significantly better”. They’re a clue for where to investigate further.
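As an illustration, here’s a minimal sketch of the arithmetic behind an XmR-style process behaviour chart, using the common limits of the average ± 2.66 × the average moving range. The metric values below are invented for illustration:

```python
def xmr_limits(values):
    """Return (average, lower_limit, upper_limit) for an XmR-style chart."""
    average = sum(values) / len(values)
    # Moving ranges: absolute difference between consecutive data points.
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    return average, average - 2.66 * avg_mr, average + 2.66 * avg_mr

def signals(values):
    """Flag data points outside the computed limits."""
    _, lower, upper = xmr_limits(values)
    return [v for v in values if v < lower or v > upper]

# Monthly on-time-delivery percentages (fabricated):
metric = [94, 95, 93, 96, 94, 95, 93, 94, 96, 85]

avg, lo, hi = xmr_limits(metric)
print(f"average={avg:.1f}, limits=({lo:.1f}, {hi:.1f})")
print("signals:", signals(metric))  # the 85 falls below the lower limit
```

In practice the limits are usually computed from a stable baseline period and then held fixed, so a single extreme value doesn’t inflate its own limits.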

If performance has gotten worse, work with teams to look for causes or root causes and bring performance back up. If performance has gotten better, aim to understand why as well. Because if there’s been some sort of change to the system that’s positive, we want to make sure that becomes part of the new standardised work for that system.

3) Use data wisely to make decisions.

The charm of being data-driven is in acting based on facts rather than opinions. At the same time, we need to pair quantitative (numbers) with qualitative (context) insights. And the starting point is to understand what data you should react to.

We have to learn not to react to noise in the system. 

Within a stable or predictable system, Mark explains, the numbers you measure are always fluctuating around a stable average. But that doesn’t mean they’re fluctuating completely randomly. Sometimes you may have a couple of data points that decrease consecutively:

When flipping a coin, heads may come up three times in a row. That doesn’t mean there’s anything wrong with the coin.
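The coin-flip intuition is easy to check: three heads in a row in three flips of a fair coin happens one time in eight, which a quick simulation (purely illustrative) confirms:

```python
import random

# Analytically: three heads in three flips of a fair coin = 0.5 ** 3 = 12.5%.
p_exact = 0.5 ** 3

# Quick simulation as a sanity check.
random.seed(1)
trials = 100_000
hits = sum(
    all(random.random() < 0.5 for _ in range(3))  # all three flips come up heads
    for _ in range(trials)
)
print(p_exact, hits / trials)  # both close to 0.125
```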

This shows that just because a decision is data-driven doesn’t mean it’s a good decision. We need context, and process behaviour charts help provide it.

It also highlights that a change in the system (e.g. a new manager for the department) that coincides with a change in the data (e.g. performance improves) does not mean that there’s a cause-and-effect relationship. 

Be mindful that decisions based on data points within this realm of statistical noise probably aren’t good decisions.

And it will save you time: Let’s say that nine of your 20 metrics are worse than the month before. They don’t all require an equal level of problem-solving. There might be only one or two with a data point that’s statistically significant. Those are the ones we want to dedicate more time to.

4) Once you’ve identified a statistically meaningful signal, ask “What has changed?”.

When we see a data point in a process behaviour chart outside the lower or upper statistical limit, that would be highly unlikely to occur randomly. It’s a signal that the system has changed, and a case where a single data point is absolutely worth reacting to.

If we have eight or more consecutive data points that are all above average or all below average, that’s also statistically unlikely to occur randomly. That would again be a signal that something has changed in the system, whether for better or for worse.
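The arithmetic behind that rule of thumb: in a stable process, each point lands above or below the average roughly like a coin flip, so a run of eight on the same side has a probability of about 2 × 0.5⁸ ≈ 0.8%. A small helper (a sketch, not part of any particular library) can scan a metric for such runs:

```python
def longest_run_one_side(values):
    """Longest streak of consecutive values strictly above or strictly below the mean."""
    mean = sum(values) / len(values)
    longest = streak = 0
    prev = 0
    for v in values:
        side = (v > mean) - (v < mean)  # +1 above, -1 below, 0 exactly on the mean
        streak = streak + 1 if side != 0 and side == prev else (1 if side != 0 else 0)
        prev = side
        longest = max(longest, streak)
    return longest

# Probability of a specific run of eight on one side of the average:
print(2 * 0.5 ** 8)                             # about 0.008, i.e. under 1%
print(longest_run_one_side([0] * 4 + [1] * 8))  # 8 -> worth investigating
```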

Now we need to go do the investigation.

One way to investigate further is to continue quantitatively, taking a deeper dive into a different level of metrics. Looking at sub-measures can help us understand the causes of these data points.

A more qualitative way to understand the data better is talking to customers, talking to staff, talking to the people who do the work. Understand what their theory or hypothesis for this change is.

Then we have to go and test the hypothesis.

The information you gather is observational and opinion-based. If you take countermeasures based on these hypotheses alone, you might end up changing the wrong thing.

It’s better to test whether the cause-and-effect relationship you suspect between the metric and its driver actually holds. This helps you better evaluate your attempts at improvement.

That’s why in Lean and other structured approaches to problem-solving, we always take tiny steps toward a conclusion:

  • We have a hypothesis of what might be a potential cause of this problem. And then we go on and test it to verify that it is in fact a direct cause of the problem.
  • Similarly, once we’ve identified the causes of the problem we brainstorm solutions and test again if they make a difference.

5) Don’t seek to optimise, but pursue the ideal.

I’m a fan of always aiming to do better work, not just in terms of the output we produce, but also in terms of how we go about it. But we don’t want to ‘fall off the other side of the horse’. How do we balance striving for excellence, optimisation, and the pursuit of ideal states on one side with over-processing on the other?

Mark cautions us to be mindful of the word “optimisation”, because optimisation is about trade-offs. Take for example optimising inventory levels: Increasing inventory, in a lot of settings, will increase customer service. But it also increases cost and risk. Optimisation is a mathematical approach to identifying the optimal point, given a set of trade-offs.

Instead of thinking about an optimal level of inventory, or an optimal safety level, or an optimal level of defects and quality, we can start aiming for ideals, we can start working towards zero harm and zero defects.

We can achieve this by working together, through error proofing and other proactive methods. For important things like improving safety, there would be no over-processing, but there might be other dimensions of our work that aren’t as meaningful to customers.

This brings us to the importance of vertical goal alignment.

Knowing in the bigger picture: Where do we want to go? What will truly make an impact, truly add value to our customers? And what are the actions, the operations and the systems that deliver on that?

(I’ll stop here because this could be a whole episode in and of itself.)

6) Measure KPIs and metrics as much as possible, but don’t overreact to what you see.

Thanks to computers we can measure KPIs by the minute, for example on the shop floor of a factory. Or we choose to measure them monthly, for example when tracking inventory in your supply chain or the budget on your cost centre. Or anything in between. 

How can we determine how often to measure KPIs, how often to look at the data, and when to react?

If the cost of measurement is low, Mark says, and you measure more frequently, then the key is to use either process behaviour charts or statistical process control charts to not overreact to noise in the metric. 

Here’s a manufacturing example: Assume you’re making engine parts where the diameter is very important, so of course you measure the diameter to a very precise level. The absolute worst thing you can do is measure every single part and then constantly adjust based on the result of the previous part. If, for example, the diameter is five microns too big, the machine is adjusted to make the next one five microns smaller. Then you measure the next part and it’s 10 microns too small, so the machine is adjusted 10 microns in the other direction.

The practice of constantly adjusting actually increases variation, instead of decreasing it.
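This over-adjustment effect (known from Deming’s funnel experiment) is easy to simulate. Below, a process with purely random variation is either left alone or “corrected” after every part by the full size of the last deviation; the numbers are made up, but the roughly √2-fold increase in variation is the classic result:

```python
import random
import statistics

# Simulated machine: each part's diameter deviates from target by random noise
# with a standard deviation of 5 microns. All numbers are invented.
random.seed(42)
noise = [random.gauss(0, 5) for _ in range(10_000)]

# Strategy 1: leave the machine alone.
hands_off = noise

# Strategy 2: after each part, "correct" the machine by the full deviation
# just measured (rule 2 of Deming's funnel experiment).
adjusted = []
adjustment = 0.0
for n in noise:
    deviation = adjustment + n
    adjusted.append(deviation)
    adjustment -= deviation  # chase every measurement

print(round(statistics.stdev(hands_off), 1))  # about 5
print(round(statistics.stdev(adjusted), 1))   # about 7, i.e. 5 * sqrt(2)
```

Each adjusted deviation works out to the difference of two consecutive noise terms, which is why the variance doubles instead of shrinking.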

And that’s very counterproductive for the people doing the work. So we want to make sure we’re not overreacting and over-adjusting based on noise.

You also don’t want to measure and act only at the end of each shift, though. The diameter of the cylinder might vary dramatically over eight hours. You want to be able to intervene when necessary, and that’s where frequent measurement can be helpful.

We can measure more frequently as long as we aren’t overreacting to the more frequent measure.

Two character strengths will help you with that. One is curiosity: instead of going in knowing, you go in exploring and investigating. The second is patience, trusting that the processes and systems you’ve built so far will not break from one day to the next. You give yourself time to observe.

Overreacting causes harm to the process, to the product or service and to the culture.

As an employee, if you’re in a constant state of urgency because your leaders treat every deviation from the norm in crisis mode, you’ll be stressed and not thinking at your best.

It’s reactive, and it’s unnecessarily stressful.

There are workplaces where people do the same work the same way every day, and still the performance measure will be higher on some days and lower on others.

Conclusion

There’s a common theme here: Understand the difference between assumptions and knowledge.

  • Do you know performance has gotten worse? Or do you think it has gotten worse?
  • Do you know there’s a trend in the data? Or do you think there’s a trend in the data?
  • Do you know the root cause? Or do you have a hypothesis about the root cause?

How do we test our assumptions? If we have a hypothesis, are we open to disproving it? That’s what scientists would try to do.

It’s not about proving ourselves right, but seeing whether we can learn and close the gap between our assumptions and reality.

Assumptions can be tricky. They sometimes sneak up on us and we don’t even realise we have them. It takes a lot of reflection, taking a step back, and the discipline of constantly questioning what we’re thinking and why we’re thinking it. 
