The Evolution of Performance Measurement

G. Lance Jakob, Featured in the April/May 2017 Edition of Uptime Magazine

In the current environment of ever-increasing demands to deliver exceptional results with limited resources, leaders are placing greater emphasis on performance measurement. Performance measurement is defined as the process of analyzing information to determine the progress toward a desired outcome for a given organization.

You have the incredible fortune of experiencing firsthand a transformational crossroads in this field – a front-row seat to the fascinating evolution of performance measurement.

In 2016, Gartner, a U.S. information technology research and advisory firm, recognized this evolutionary shift in performance measurement by separating its archetypal business intelligence (BI) market segment into two distinct categories: traditional BI platforms and modern BI platforms.

According to Gartner, “The BI platform category is undergoing the biggest transformation it has ever seen as spending has come to a screeching halt in traditional BI platform investments.”

Those organizations with the capacity to embrace the newest generation of performance measurement can leverage the principles as a catalyst to accelerate their performance beyond their competition. Sadly, organizations that haven’t recognized the shift will be left behind.

Interestingly, this isn’t the first major shift modern performance measurement has experienced.

ORIGINS OF PERFORMANCE MANAGEMENT – MEASUREMENT 1.0

To explore the origin of modern performance measurement, one must travel back to the early 20th century. This early type of analysis was characterized by accounting-centric measures, mostly because that was the only meaningful data available at the time. These measures were generally considered a supplement to financial and accounting results, and showed high-level financial trends to give users a general idea of the success and trajectory of historical organizational performance.

Unfortunately, the underlying data became available only after the books were closed at the end of the accounting period. This meant the measures provided limited value since they were, by and large, lagging indicators over which organizations had little direct control.

Let’s call this phase of results evaluation Measurement 1.0. The age of Measurement 1.0 continued for several decades, with only minor advancements. Meaningful data from which performance information was derived was generally found in onerous account ledgers.

In the 1920s, DuPont made an incremental advancement when it began measuring return on equity (ROE), broken down into three primary subcomponents that ultimately became known as The DuPont Analysis (or The DuPont Identity).
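For reference, the standard three-factor form of the identity is:

\[
\mathrm{ROE} = \frac{\text{Net Income}}{\text{Sales}} \times \frac{\text{Sales}}{\text{Total Assets}} \times \frac{\text{Total Assets}}{\text{Shareholders' Equity}}
\]

that is, profit margin × asset turnover × equity multiplier.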

In the late 1930s, Saint-Gobain, a French glass, insulation and building materials manufacturer, began supplementing its balance sheets and income statements with narrative-based statistics, which accompanied financial data to provide additional context for those interpreting the results. The goal was to standardize the measurements across the diverse enterprise and then distribute the measurement results to provide new insights into performance that had never been visible before.

In the mid-20th century, General Electric initiated a series of performance measurements that included results outside the existing realm of the general ledger. Many of the results were largely subjective, but it was an important step that communicated to the market that success is not measured solely by short-term financial values.

“Organizational dogma – the unwillingness of an organization to change course, even when presented with empirical evidence to the contrary – is the enemy of continuous improvement.”

Finally, in the 1970s, General Motors began measuring non-financial performance measures tied to production and operations.

One major flaw in Measurement 1.0 was what scholars refer to as the strategy-to-execution gap: while companies spend countless resources developing sophisticated organizational strategies, those strategies are meaningless without clearly aligned processes to execute them.

Measurement 1.0 seems so far behind us, yet a 2011 study from Forbes magazine found that while “82 percent of Fortune 500 CEOs feel their organization did an effective job of strategic planning, only 14 percent of the same CEOs indicated their organization did an effective job of implementing the strategy.”

As organizations began branching away from accounting measures, evidence-based decisions were still rare. Instead, “gut feeling” continued to drive organizational direction. Organizational dogma – the unwillingness of an organization to change course, even when presented with empirical evidence to the contrary – is the enemy of continuous improvement. Geocentrism may seem naïve in retrospect, but prior to the 16th century, nearly the entire human race believed the sun revolved around the earth. This belief existed despite heliocentric models being introduced by Aristarchus of Samos over 1,800 years prior to the Copernican Revolution.

Even in modern times, Jeffrey Pfeffer and Robert Sutton, authors of Hard Facts, Dangerous Half-Truths and Total Nonsense, point out that key business decisions “are frequently based on hope or fear, what others seem to be doing, what senior leaders have done and believe has worked in the past and their dearly held ideologies – in short, on lots of things other than the facts.”

SECOND GENERATION – MEASUREMENT 2.0

The second generation of performance measurement, Measurement 2.0, is characterized by a major influx of data.

Businesses found themselves asking the question: What can we do with all this data? Internal IT departments held the belief that traditional BI tools were the panacea that would solve the data-rich, information-poor dilemma.

Measurement 2.0 was data and technology driven, and implementations were almost exclusively owned by IT departments. Unfortunately, IT departments typically lack the intimate knowledge of the business to deliver what is truly needed to drive informed business decisions.

Requirements analysis was also run by IT departments, which generally took months to complete and years to begin delivering value. This paradigm created a rigid environment unable to pivot with the changing demands of the business; by the time IT delivered its answers, the questions had often changed.

Some enterprising companies addressed the inherent development delays by marketing key performance indicator (KPI) catalogs with the promise of delivering thousands of off-the-shelf measurements to their customers. Sadly, these solutions falsely assumed the performance measurement needs of all organizations were homogeneous.

No two organizations are alike; they all have varied strategies and face unique challenges. Moreover, the data collected in support of the key business processes are never the same. For these reasons, leaders quickly abandoned the one-size-fits-all solutions.

Since Measurement 2.0 focused on data and technology, this generation also experienced the proliferation of bloated BI support organizations just to maintain the cumbersome tools. Further, the BI tools placed the burden of pulling information from BI systems on the business users. Business users lacked the technical understanding to interact with the complex systems, thus placing further reliance on the already overburdened IT support groups. This environment ultimately prevented businesses from fully embracing BI tools as had been expected.

A lack of consistent, credible, trustworthy data also played a large role in poor BI adoption rates. Specifically, business groups placed blame on IT for not delivering usable information, while IT argued the business wasn’t collecting consistent, usable data.

Finally, leadership recognized that the data delivered through BI initiatives frequently fostered the wrong behaviors. Measurement 2.0 users were often more focused on “chasing their numbers” than on addressing improvements in the underlying processes. As with Measurement 1.0, this is a direct side effect of the strategy-to-execution gap.

Sports Authority was a telling example of a Measurement 2.0 organization. With more than 450 stores throughout the country and a sophisticated business intelligence investment, Sports Authority collected nearly 114 million customer records and 25 million e-mail addresses. Unfortunately, it was unable to leverage that information to execute its strategy and avoid the complete liquidation of its stores. The data collected contained enormous potential, so much so that Dick’s Sporting Goods recently purchased the data, along with other intellectual property, for $15 million.

EVOLUTION – MEASUREMENT 3.0

Measurement 3.0 is the current generation of performance measurement. It’s characterized by a transformative focus on objective-driven performance management.

Objective-driven performance management is the foundation for operational excellence. It’s a process-centric approach that aligns the execution of key processes to strategic goals by measuring and improving what matters most to an organization.

In her latest publication, Prove It!, Stacey Barr points out that: “Performance measures are supposed to be the evidence that convinces us we’ve achieved, or at least are making progress in the right direction, toward our goals. But most of what is measured in organizations doesn’t do this. We measure the easy stuff, where data is readily available. We measure the traditional stuff, what we’ve always measured. We measure the obvious stuff, the resources we use, the effort we put in, and the widgets we produce. We measure the popular stuff, the measures that everyone in our industry seems to be measuring.”

Measurement 3.0 keeps ownership and the configuration of performance measurement in the hands of the business, those with the most intimate understanding of the strategies and processes being measured. Such a decentralized, self-service paradigm is integral to meeting the ever-changing demands of the business by removing the restrictions and bottlenecks historically imposed by internal IT departments. Agility and flexibility are innate tenets in Measurement 3.0.

Objective-driven performance management also incorporates unambiguous performance goals so everyone maintains a consistent definition of good and unacceptable performance. Predefined response plans are typically associated with these goals. The philosophy is straightforward: if you know the variables that feed a measurement and how each variable can affect its direction, you should also have a reasonable idea of the steps to take to get back on track when results fall outside acceptable tolerances. Response plans clarify accountability and establish a direct framework for action when performance targets are missed.
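To make this concrete, here is a minimal sketch of a performance goal paired with a predefined response plan; the names and the planned-maintenance example are hypothetical, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    name: str
    target: float
    lower_tolerance: float  # results below this band are unacceptable
    upper_tolerance: float  # results above this band also warrant review
    response_plan: str      # predefined steps to take when the band is breached

def evaluate(goal: PerformanceGoal, actual: float) -> str:
    """Classify a result against its goal; surface the response plan on a breach."""
    if goal.lower_tolerance <= actual <= goal.upper_tolerance:
        return f"{goal.name}: {actual} is within tolerance (target {goal.target})."
    return (f"{goal.name}: {actual} is outside tolerance "
            f"[{goal.lower_tolerance}, {goal.upper_tolerance}]. "
            f"Response plan: {goal.response_plan}")

# Hypothetical example: weekly planned-maintenance (PM) compliance
pm_compliance = PerformanceGoal(
    name="PM compliance (%)",
    target=95.0,
    lower_tolerance=90.0,
    upper_tolerance=100.0,
    response_plan="Review overdue work orders with the area supervisor within 48 hours.",
)
print(evaluate(pm_compliance, 86.5))
```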

Measurement 3.0 also includes the capability to visualize the corollary effects of business initiatives. This equips business leaders with a rational basis for selecting the improvements that provide the greatest value to the organization. It also validates whether the outcomes of a particular initiative match the expectations defined before implementation.

Southern Power Company, the wholesale energy subsidiary of perennial Fortune 200 luminary Southern Company, is an industry leader in Measurement 3.0. Southern Power understands that while it operates with an admirably high degree of efficiency, there is always opportunity for improvement. One example is the active approach the company takes to empirically measure the effect of strategic initiatives on corollary performance results.

Timing risk is the potential negative consequence of making decisions later than would be ideal, often as a result of the aggregation and publication delays inherent in traditional business intelligence techniques. Measurement 3.0 reduces timing risk by delivering decision-ready information to the right people at the right time. Rather than relying on users to navigate a series of static dashboards, it provides exception-based notifications when performance falls outside acceptable levels. Information is pushed directly to users, allowing them to focus on the most important aspects of their jobs rather than on navigating BI reports.
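A minimal sketch of this exception-based pattern, with hypothetical measures and a print statement standing in for a real notification channel (e-mail, chat or paging):

```python
def notify(owner: str, message: str) -> None:
    """Stand-in for a real notification channel (e-mail, chat, paging)."""
    print(f"ALERT to {owner}: {message}")

# measure -> (latest result, lower bound, upper bound, owner); hypothetical data
latest_results = {
    "PM compliance (%)": (86.5, 90.0, 100.0, "maintenance.supervisor"),
    "Schedule attainment (%)": (93.0, 90.0, 100.0, "planning.lead"),
}

for measure, (actual, low, high, owner) in latest_results.items():
    if not (low <= actual <= high):  # only exceptions generate traffic
        notify(owner, f"{measure} = {actual}, outside acceptable range {low}-{high}")
```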

Another major shift of Measurement 3.0 is the injection of trust in the results through the concept of data confidence. Data confidence, which measures the adherence to a defined process, should not be confused with data quality. Data quality defines the volume of records that can be evaluated for a given measure (i.e., the number of records that contain the minimum data elements necessary to derive a measure). Conversely, data confidence measures the volume of records that can demonstrate the process was consistently and accurately followed.
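The distinction fits in a few lines of code; the work-order records and field names below are hypothetical, and data confidence is computed over the usable records (one reasonable reading of the definitions above):

```python
# Hypothetical work-order records
records = [
    {"asset_id": "P-101", "failure_code": "BRG",  "process_followed": True},
    {"asset_id": "P-102", "failure_code": None,   "process_followed": True},
    {"asset_id": "P-103", "failure_code": "SEAL", "process_followed": False},
    {"asset_id": None,    "failure_code": "BRG",  "process_followed": True},
]
required_fields = ("asset_id", "failure_code")

# Data quality: share of records containing the minimum elements needed to derive the measure
usable = [r for r in records if all(r[f] is not None for f in required_fields)]
data_quality = len(usable) / len(records)

# Data confidence: share of those records demonstrating the process was actually followed
confident = [r for r in usable if r["process_followed"]]
data_confidence = len(confident) / len(usable)

print(f"Data quality: {data_quality:.0%}; data confidence: {data_confidence:.0%}")
```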

While data quality risk is controlled through data governance techniques, data confidence risk is managed through process controls. Data confidence reinforces the idea that if processes are not consistently followed, users will view any resultant performance measurements with a high degree of skepticism and are less likely to fully commit to decisions based on the results. In his seminal publication, Transforming Performance Measurement, Dean Spitzer argues, “Trust is an essential ingredient of effective measurement. … If people don’t trust the measurement system, they are likely to view it as an enemy.”

Finally, Measurement 3.0 is characterized by the eschewal of the one-size-fits-all mentality. Leaders have realized that what works for one organization will not necessarily work for them. Measures must be tightly aligned to the organizational objectives and to the structured processes that support those objectives.

Throughout each discrete process, key data elements are captured. All of these elements contribute to the individuality of the measurement, and any attempt to wedge the data into a rigid model dilutes the applicability of the results.

The inherent flaw in the reliance of a one-size-fits-all structure can be demonstrated through the story behind the “myth of average.” Gilbert Daniels was a first lieutenant in the United States Air Force and a recent Harvard graduate. Daniels was stationed at Wright-Patterson Air Force Base in Dayton, Ohio, in the 1950s when he began studying cockpit designs.

During this period, an alarmingly high number of accidents led to an investigation that determined the root cause was poorly fitting cockpits. Military aircraft were designed to fit an “average pilot,” as determined from measurements taken on hundreds of pilots back in 1926. Leaders initially assumed the average size of pilots had changed. Daniels had a different hypothesis.

“Measurements designed for typical organizations will never enable the attainment of extraordinary results.”

Daniels took measurements across 60 dimensions for more than 3,300 pilots. He then chose the 10 dimensions he thought were most crucial to proper cockpit fit and defined “average” for each as falling within 15 percentile points of the 50th percentile, i.e., the middle 30 percent of pilots on that dimension. Finally, Daniels determined how many of the 3,300 pilots fell into the “average” range on all 10 crucial size dimensions.
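A back-of-the-envelope calculation hints at what Daniels would find. By this definition, roughly 30 percent of pilots qualify as average on any single dimension, so even assuming, purely for illustration, that the 10 dimensions are independent, the expected number of fully average pilots is vanishingly small:

```python
p_average_on_one = 0.30   # middle 30 percent of pilots on a single dimension
pilots = 3300

# Expected number of pilots who are "average" on all 10 dimensions at once,
# assuming (purely for illustration) the dimensions are independent
expected = pilots * p_average_on_one ** 10
print(f"{expected:.3f}")  # ~0.019, i.e., expect essentially no one
```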

Surprisingly to almost everyone (other than Daniels), not a single pilot in the study fell into the average range on all 10 dimensions. ZERO! Daniels was able to demonstrate that there is no such thing as “average.”

As a result of his findings, the United States Air Force began requiring manufacturers to provide cockpits adjustable across a variety of dimensions. More importantly, Daniels proved that a one-size-fits-all cockpit design introduced a myriad of unnecessary risks.

Flexibility of design allowed pilots to perform at their full potential. Likewise, Measurement 3.0 champions expect flexibility of design in performance management systems to enable businesses to perform at their full potential. Measurements designed for typical organizations will never enable the attainment of extraordinary results.

CONCLUSION

Measurement 3.0 relies on the deliberate alignment of objectives with a structured methodology of performance measurement. In business, nothing of consequence gets accomplished accidentally.

While Measurement 2.0 was focused on data and technology, Measurement 3.0 is focused on value and engagement. The objective is not as much about predicting the future as it is about illuminating the current trends and identifying opportunities to change course or capitalize on exceptional performance.

In viewing the performance measurement framework in your organization, ask yourself these questions:

  • Do you see people in your organization more focused on “chasing their numbers” than on improving the underlying business processes?
  • Do you measure performance in some areas of your organization simply because you have the necessary data?
  • Are you making decisions in your organization based on information some may view as untrustworthy?

If you answered YES to any of these questions, it’s likely your performance management philosophy is stuck in the dogmatic concepts of the last century. It’s time to evolve. It’s time for Measurement 3.0.


References

  1. Hare, Jim; Woodward, Alys; and Sood, Bhavish. “Update: Gartner to Expand Its BI Platform Segmentation.” Gartner, February 18, 2016. https://www.gartner.com/doc/3215522/update-gartner-expand-bi-platform
  2. Liesz, Thomas J., and Maranville, Steven J. “Ratio Analysis Featuring the DuPont Method.” Small Business Institute Journal, Vol. 1, No. 1 (2008): 17-34.
  3. Brudan, Aurel. “Learning from Practice – A Brief History of Performance Measurement.” Performance Magazine, August 7, 2010. http://www.performancemagazine.org/learning-from-practice-a-brief-history-of-performance-measurement/
  4. Pezet, Anne. “The History of the French Tableau de Bord (1885-1975): Evidence from the Archives.” Accounting, Business & Financial History, Vol. 19, No. 2 (2009): 103-125. https://halshs.archives-ouvertes.fr/halshs-00498670/document
  5. Pfeffer, Jeffrey, and Sutton, Robert I. Hard Facts, Dangerous Half-Truths and Total Nonsense: Profiting from Evidence-Based Management. Boston: Harvard Business Review Press, 2006.
  6. Addady, Michal. “Dick’s Just Paid $15 Million for Sports Authority’s Name.” Fortune, June 30, 2016. http://fortune.com/2016/06/30/dicks-sports-authority/
  7. Barr, Stacey. Prove It! How to Create a High-Performance Culture and Measurable Success. Milton, Queensland: Wiley, 2017.
  8. Spitzer, Dean R. Transforming Performance Measurement: Rethinking the Way We Measure and Drive Organizational Success. New York: AMACOM, 2007.
  9. Rose, Todd. The End of Average: How We Succeed in a World That Values Sameness. San Francisco: HarperOne, 2016.
  10. Daniels, Gilbert S., and Meyers Jr., H. C. Anthropometry of Male Basic Trainees. Wright Air Development Center, Air Research and Development Command, United States Air Force, July 1953. http://noblestatman.com/uploads/6/6/7/3/66731677/cockpit.gilbert.report.pdf

G. Lance Jakob

G. Lance Jakob, PMP, CRL, in his 25-year career, has focused primarily on process improvement strategy and execution for asset-intensive industries, such as power generation, oil and gas, the U.S. Department of Defense and telecommunications. The breadth and depth of his experience has been shaped while working for such extraordinary organizations as IBM, Accenture, Mirant and Cohesive Solutions.

