
When measuring doesn't lead to managing

Organizations can make a positive difference to ineffective performance management systems if they use these guiding principles as part of their redesign.
Jessica Spungin
Success Stories

Changing the Conversation

The supply chain organization that had collated more than 1,500 metrics into a monthly 40-page summary, but had done nothing with it, decided to focus instead on a single metric, "on-time, in-full delivery," and to track it throughout the supply chain on a weekly basis, hoping to shift the emphasis from measuring to managing.

A new weekly management forum brought together the 20+ representatives, each of whom had accountability for a slice of the end-to-end supply chain, and agreed collective accountability for the whole. The forum hired a coach to observe its conversations and behaviors, to help change the nature of the performance conversation, and to build the following skills:

  • Investigating and interrogating the data to find the root cause of a performance delay, rather than just noting red traffic lights
  • Changing the nature of inquiry from "What have you done wrong?" to "How can we help you?"
  • Building a sense of collective accountability by using the forum to identify where handover and accountability gaps were damaging performance

Over a six-month period, these changes produced a substantial improvement in the performance of the supply chain: the score on the crucial measure of on-time, in-full delivery leapt from roughly 50% to more than 98%.

As time went by, they built more metrics back into the PM system by automating the collection and delivery of the data, and used this information at the weekly meeting to focus on questions such as "How can we collectively improve current performance?" rather than "Who messed this up?"

At a glance

This article lays out what tends to go wrong with performance management systems, lists some "early warning signs" to help you determine whether your system is fit for purpose, and concludes with tips for embedding a PM system in the best possible way.

Moreover, we hope it challenges you to think about what process and information you need to manage the "delivery" of strategy. It seeks to remind you that merely measuring something does not mean you have done anything about it. And it asks you to assess whether you have got stuck in an unhealthy paradigm and need to rethink the core of how you manage performance.


In responding to Peter Drucker's implicit challenge that you can't manage what you can't measure, some organizations' practices have led to a detrimental combination of over-measurement and under-management. Part of the problem is knowing what to measure. Kaplan and Norton's balanced scorecard approach, now more than two decades old, reflects the insufficiency of financial outcomes alone for gaining a good picture of an organization's overall performance; the model's proponents suggest that such an inquiry should encompass four elements: financial, customer, internal business processes, and learning and growth.

Recent research on the subject of performance and health supports this,1 and highlights the kind of metrics you might include in order to measure the health of an organization and ensure long-term performance. In developing their performance management systems to adopt more sophisticated health-focused or balanced scorecard approaches, organizations may end up over-engineering things. If the scope of the performance indicators is too great, the result will be a slew of difficult-to-track metrics whose data will be demanding to collect and costly to process, and there will be a danger of losing sight of the managerial objective the metrics are intended to support.

When Performance Management Systems Go Bad

Over the last 20 years, we have observed three common categories of shortcomings in performance management systems:

1. Omission

One of the most common shortcomings is to omit implementation of PM at one or more of the three necessary levels: namely, as it relates to external audiences, to business units, and to individuals. Another version of this shortcoming is to misunderstand the connection between these three levels.

Organizations may implement PM effectively at one or two of these levels while neglecting others. Effective performance management requires engagement at every level and the creation of a holistic system that informs, manages, and motivates the organization.

The most common omission is the lack of a coherent process for managing business performance. In such cases, organizations may be quite effective at communicating their objectives and outcomes externally, and competent at offering personal incentives or feedback for personal development, yet they forget to spend time managing the business and the delivery of strategy.

Common warning signs of omission
  • The conflation of PM with personal incentives and bonus or share allocation calculations
  • Absence of regular and planned interactions between the Stakeholder, Business, and Individual PM processes and outcomes — e.g., no annual plan driving different levels of PM across the organization to the appropriate timeline

2. Excess

All too often, PM systems are over-engineered through the generation of a substantial, albeit logically linked, set of performance indicators that becomes impossible to measure without allocating a small army of people. The distracting effect of this over-engineering is that the weighty task of measurement becomes an end in itself.

In one remarkable case, we encountered a supply chain organization that logged more than 1,500 metrics. The results were collated monthly in a 40-page report, a task that kept 70 people occupied full time collecting and managing data throughout the supply chain. And what did they do with this 40-page report? Nothing. The report was taken to a monthly meeting at which each area’s “traffic light” status (red, yellow, or green) was “noted” and recorded but no decisions were made, not one activity was changed, nor one metric outcome questioned. They were so busy collecting the information that they had no energy or strategy to do anything with it.

Over-engineering is often exacerbated by the desire to include all four of Kaplan and Norton's balanced scorecard elements; it is a very mature organization, however, that can develop and utilize a full set of such metrics while maintaining a focus on the appropriate level, the appropriate action steps, and clarity of accountabilities. And in striving to be "balanced," the organization can tip out of balance by shifting its focus from "doing" to "measuring."

A knock-on effect of developing a scorecard that is too complex arises when the management team then tries to automate the delivery of the KPIs. Attempts at automation may result in performance management becoming subsumed in an overly ambitious (and never completed) IT project, leaving the scorecard partly populated or, indeed, shoring up a case for the continued utilization of the 70-odd people who collect the data manually. Performance management systems that can be simply automated have a much higher degree of success, and involve far less complexity for the organization, than manually driven approaches.

Common warning signs of excess
  • Chronic absence of insight arising from the overuse of “traffic light” summaries in which large quantities of data are aggregated to the point of being meaningless
  • Abandoned attempts to automate data collection
  • Constant changing and redesign of the performance indicators, resulting in no observable trends
  • Frequent complaints about the number of performance indicators required
  • A lack of meaningful analysis and historical data comparators, and excess effort spent on redesign each quarter or year

3. Inaction

The third shortcoming is inaction: the situation in which measurement has become an end in itself and there is no active management or intervention.

Paralysis of this sort occurs in either of two faulty organizational set-ups:

  • Though data is collected and distributed, there is no forum or mechanism, regular or ad hoc, in which it can be discussed or used;
  • Though data is collected and distributed to a forum or mechanism in which it could be discussed and used, the forum is ineffective because no practical discussion takes place and no activities result.

It is easy to speculate about the reasons for this kind of failure. Organizations may fail to understand the value of the data that can emerge from a powerful performance management system, or they may be exhausted by the early phases of the process and have no interest or energy left to pursue the analytical conversations.

A further penalty of excessive data collection may be the accumulation of a truly overwhelming quantity of data. Managers may lack the skills required to investigate findings and steer a revised course: questioning, challenging, supporting, and problem-solving. They may also lack the will to do so: a culture that leans towards "window dressing" or "blame games" is not conducive to actually managing strategy through data.

Whatever the reason, this is the heartbreaking consequence seen over and over again in organizations: the strategy has been well thought through, the way in which managers plan to track delivery and resource utilization is defined, and even accountabilities have been made clear. But then the organization runs out of steam and loses interest in having the right conversation with the right people in the room, at the right frequency, and with the right culture and managerial skill to investigate and support rather than judge and point fingers. Or, even worse, the right people get into the room at the right frequency, but no useful conversation actually takes place. Any of these outcomes is tragic and renders performance management redundant.

Common warning signs of inaction
  • Reports that lack ownership or allocated accountabilities for each of the performance indicators.
  • The absence of any forum or mechanism for discussing performance data and deciding on the actions it implies.
  • Performance discussions (at business or individual levels) that are bland and unchallenging — where a “red traffic light” is noted, rather than investigated.
  • Performance discussions (at business or individual levels) that reflect a culture of fear and avoidance rather than an open discussion about the issues and solutions.
  • Lack of differentiation between personal development and compensation performance conversations.

Shifting to Effective Performance Management

How should organizations shift to more effective performance management?

The task of management is to gather information and data and then decide what to do; it is not simply to monitor situations. Observing performance management in many organizations over the years suggests that even the simplest set of metrics (perhaps only two or three measures) can, if tracked systematically and with differentiation among stakeholder, business, and individual levels, be a much more effective tool for driving decisions, actions, realignments, support, reward, and other business actions than a complex and potentially rich but poorly executed balanced scorecard.

The task of shifting to more effective PM can be quite complex and depends on your starting point. For some organizations, it may mean adding a level of PM that was missing; for others, it is about redesigning something overly complex to become simple; and for others still, it is about changing the actual conversation that takes place around the existing system. In that context, these 10 guiding principles for a system of management, rather than measurement, might be helpful:

  • Be clear on the intent. What are you measuring to manage, and why? Be clear not only about the direction and context of the organization that you want to track, and how they fit with your system of performance management, but also about the processes and behaviors you want to encourage or adjust.

  • Keep it simple. Reflecting the clarity of intent, focus on those very few metrics that matter. Though two or three metrics will not give you the full picture, it can be transformative to combine a relatively small number of measures with the capacity to drill down when necessary, with a culture of positive intent, and with alignment about the way these metrics contribute to desired outcomes. Measuring two to three things very well and using them to drive the actual management of the organization is much more effective than measuring 120 things but doing nothing with them.

  • Demand accountability. Roles, structures, and reporting lines must enable the identification of people’s contributions. Shared accountability works too, as long as people are aware of what role they play and what they can therefore do to help. Accountability is not the same as finger-pointing. It is about making people feel ownership for finding the solution — not just hiding the problem.

  • Don’t confuse measurement with management. Move beyond the mere definition of the key performance indicators by embracing all aspects of PM, such as having the right dialogues at the right frequency with the right people, all with a focus on effective action.

  • Automate where you can. Cumbersome performance management systems can contribute substantial complexity to organizations, and create a sense of bureaucracy or high data demand. Automation of the collation, delivery, and distribution of information will conserve organizational energy for doing something with the data.

  • Don’t forget to differentiate between the levels. Organizations too often conflate “personal performance management” (including bonus calculation and/or feedback and developmental objectives) with “business management.” An organization ought to be able to have regular conversations about how a business or function is achieving its goals independent of how an individual manager is doing. And these conversations should look and feel very different too.

  • Go with the grain of the organization. Identify the organization's strength, style, and context: something that gives you a good anchor point around which to design the system (e.g., a strong culture, a shared direction, a strong customer orientation). For example, if you are in an organization where face-to-face conversations are always supplemented with written conclusions, make sure the performance management has a written element to circulate for everyone to absorb after the sessions. If you are an organization where customer orientation is the source of motivation, focus first on ensuring that the customer-driven metrics work at all relevant levels and use that to drive any revisions in your overall system.

  • Be patient. Carefully pace the implementation of a new or revised PM system; aim to syndicate each stage of development to avoid the systemic rejection of something new.

  • Be prepared to tweak. It is uncommon to get the implementation of a new system right in one go. Furthermore, any new system has the potential for unintended consequences and unpredicted behavioral implications, so be prepared to make some changes after the first cycle.

  • Be prepared to raise skill levels. Organizations that have been stuck in a paradigm of inaction may not find it easy to make a sudden transformation without acquiring new skills: in managing through data, in managing difficult performance discussions, and in working collaboratively to address performance issues. Those organizations that have a blame culture will tend to need help to see performance issues as opportunities to learn and grow. Coaching and top teamwork are important in making the shift from inaction to action.

Organizations can make a positive difference to an ineffective existing performance management system if they use these guiding principles as part of their redesign efforts, as the success stories in this article illustrate.

Cascading the Right Metrics

A mining organization had developed a quarterly balanced scorecard of 40+ metrics, the results of which were presented to the board by the executive management team as a way to track overall business performance and report performance to shareholders. The metrics were tracked by a "traffic light" measure (red, yellow, green); however, they were not supported by any parallel analysis of business-unit delivery or tied to accountability in the business. Metrics around individual objectives were missing, and all performance and compensation discussions were based on an annual dialogue without any supporting metrics or MBO-type development discussions. Half of the metrics in the original balanced scorecard were outside the organization's scope of accountability (e.g., commodity prices, risk of regulatory change).

The management team redesigned the scorecard and its application to achieve the following:

  • A simplified board-level reporting system focused on the 10–20 things that were within management’s control and were directly linked to the delivery of the strategy.
  • A clear separation of strategic indicators and business metrics:
    • Strategic indicators were understood to be important things to note and track regularly, to provide strategic insight/market awareness (e.g., commodity prices, economic indicators, regulatory changes), but were not directly under the organization’s influence.
    • Business metrics were presented numerically (rather than by "traffic light" summaries), based on ratios/metrics versus budget, and each one had a designated owner or paired owners.
  • Each business unit developed a monthly set of "on track" metrics of approximately 10 items to enable discussions with their teams about the delivery of the annual strategy and budget.
  • Single-page performance contracts derived from business unit performance metrics contained behavioral and individual personal development objectives.

The result was a more focused set of metrics to use in discussions with the board/shareholders, with business unit managers and with individuals during quarterly personal performance reviews.

About the author

Jessica Spungin is an adjunct associate professor of strategy and entrepreneurship at London Business School. She also works as an independent consultant across a variety of industries, and has years of experience and expertise in the translation of strategy into action to help deliver strategic change.

References

1. See Beyond Performance, by Scott Keller and Colin Price (John Wiley & Sons, 2011).
