Archive for category Measurement

smartKPIs.com Performance Architect update 44/2011

Advice on KPI documentation and configuration

Configuring KPIs after their selection means documenting the complete set of relevant details for each KPI and activating the KPIs so that data can be reported and analyzed.

  1. Link KPIs upstream with business objectives and downstream with organizational initiatives. KPIs should be connected to organizational objectives, as they make objectives SMART. Initiatives should be established to support the achievement of objectives by improving KPI results.
  2. Assign a data custodian responsible for gathering measurement data for the KPI. Data gathering for each KPI requires clarity and ownership. Making a specific person responsible for collecting KPI data is a management approach that ensures accountability and that data is available for analysis on time.
  3. Assign a KPI owner responsible for the achievement of the desired results. Each KPI should have a manager allocated as its owner, to ensure responsibility regarding its analysis, results and improvement options.
  4. Avoid tunnel KPI definitions – repeating the KPI name in the definition doesn’t add value. Good practice in working with KPIs requires thorough documentation of what they represent. Proper KPI definitions should go beyond repeating the KPI name, by providing a plain English explanation of what the KPI is about.
  5. Categorize KPIs by their reporting status - active = data is tracked, inactive = data not available. Activating KPIs is the process of moving a KPI from inactive status, when the data is not available, to active, when data is reported and a clear process is in place for doing so on a regular basis.
  6. Clearly identify the unit type, most of the time % (percentage), # (number) or $ (dollar value). As measurable entities, KPIs have an associated unit type. To simplify communication, the symbol should be used instead of the word expressing it.
  7. Data accuracy for each KPI should be evaluated as low, medium or high and treated as such. Not all KPIs have the same data reliability. Survey-based KPIs are always going to be less reliable than revenue KPIs, due to objectivity issues. Other aspects to be considered are data automation and auditing.
  8. Determine the frequency of data generation and the frequency of reporting for each KPI. Data for some KPIs, such as ‘# Website visits‘, can be easily gathered on a daily basis. For other KPIs, such as ‘% Employee engagement‘, data gathering requires considerable cost and effort, impacting a large number of staff. The frequency of reporting is influenced by factors such as cost, effort and technical complexity.
  9. Develop a customized KPI documentation form that contains the relevant details describing the KPI. Documenting KPIs can be easily done in a template that structures the main description fields considered relevant for the organization. smartKPIs.com contains such a model that can be customized at organizational level (a sketch of such a form follows after this list).
  10. Document if the trend is good when increasing, decreasing or when data is within a range. For some KPIs the results are good when they are decreasing from one period to another - for example ‘# Customer complaints’. For others, such as ‘$ Revenues‘, the results are good when increasing, while in the case of ‘% Budget variance‘, the results are good when within a specific range.
  11. Document where the reporting data for each KPI is sourced from and who produces it. Understanding a KPI relies on having a clear understanding of the data behind it and its source.
  12. Don’t worry too much about a KPI being leading or lagging. Differentiating between the two is debatable and confusing. What is considered a leading KPI for some is a lagging KPI for others. As agreement around this differentiation is oftentimes difficult to achieve, it is secondary in importance and impact.
  13. Ensure each KPI is clearly explained in a definition and has a purpose for usage. The separation between definition and purpose is essential. The purpose expresses the reason for using the KPI and is one of the key components of the documentation form.
  14. KISS - keep it short and simple: Use the # and % symbols to replace “number” and “percentage” in KPI names. Standardizing KPI names and shortening them supports communication and enables clear data visualization of KPIs in dashboards and scorecards.
  15. Simplify KPI names by eliminating the word “of”. As a “common denominator” it can be cut from the name. KPIs are analytical in nature and their names should be as concise as possible. The definition, calculation and purpose fields provide context and can be more wordy.
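
To make point 9 concrete, here is a minimal sketch of how such a KPI documentation record could be structured in code. The fields mirror the advice above, but the structure and the sample entry are illustrative assumptions, not the actual smartKPIs.com template.

```python
from dataclasses import dataclass, field
from enum import Enum

class Trend(Enum):                  # point 10: which direction is "good"
    INCREASING = "good when increasing"
    DECREASING = "good when decreasing"
    WITHIN_RANGE = "good when within a range"

@dataclass
class KPIRecord:
    name: str                       # short, symbol-based name (points 6, 14, 15)
    definition: str                 # plain English, no tunnel definition (point 4)
    purpose: str                    # reason for usage (point 13)
    objective: str                  # upstream link to a business objective (point 1)
    initiatives: list[str] = field(default_factory=list)  # downstream links (point 1)
    data_custodian: str = ""        # who gathers the data (point 2)
    kpi_owner: str = ""             # who is accountable for results (point 3)
    unit: str = "%"                 # %, # or $ (point 6)
    data_accuracy: str = "medium"   # low / medium / high (point 7)
    reporting_frequency: str = "monthly"   # point 8
    data_source: str = ""           # where the data comes from (point 11)
    trend: Trend = Trend.INCREASING # point 10
    active: bool = False            # point 5: is data reported regularly?

# Illustrative entry, not a smartKPIs.com record
complaints = KPIRecord(
    name="# Customer complaints",
    definition="Number of formal complaints received from customers in the period.",
    purpose="Tracks service quality as perceived by customers.",
    objective="Improve customer satisfaction",
    unit="#",
    trend=Trend.DECREASING,
)
```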

Aurel Brudan
Performance Architect,
www.smartKPIs.com

smartKPIs.com Performance Architect update 43/2011

Advice on KPI selection

Selecting KPIs is a process which seems simple, yet is inherently complex, due to the interdependencies involved. Here are 15 things to consider before embarking on this journey.

  1. Review existing internal reports and support documents at the beginning of the KPI selection exercise. These may include previous business / strategy plans, annual reports, performance reports and other documentation that relates to performance management, measurement and benchmarking.
  2. Use external lists of examples and other secondary documentation to inform and support KPI selection. It is always a good idea to begin a journey having the end in mind. Reviewing KPI examples used in the industry or functional area, by competitors or other organizations, provides context around what is used in practice by others and improves understanding of the desired output.
  3. Engage internal stakeholders in the process of KPI selection through interactive workshops. KPI selection is not a desk exercise. It is an opportunity to communicate and learn, hence an open discussion in a workshop format is a better approach for enabling not only KPI selection, but also understanding and ownership.
  4. Calibrate KPI selection around business objectives and value drivers. KPIs are not used in isolation. They are just one component of the value creation chain and of the performance management system. A simple way to position them is as links between business objectives and related organizational initiatives.
  5. Select KPIs based on the realities of organizational activity and environment. Each organization is different, operating in different environments, with different guiding principles. Hence the KPIs used need to reflect the specifics of each organization first and industry/functional area characteristics second.
  6. Maintain a centralized catalogue of KPIs for the entire organization. Structuring KPI documentation in a central repository facilitates their understanding and usage in a similar way across the organization, growing the know-how and facilitating KPI selection and usage on an ongoing basis.
  7. Understand the difference between input, process, output and outcome KPIs. This value creation sequence is essential in facilitating the understanding of KPIs in the context of the value added by the process/activity they relate to. It is a mapping technique that facilitates KPI selection (a small illustration follows after this list).
  8. Don’t hesitate to change KPIs in scorecards and dashboards. KPIs should reflect activity and activity should adapt to a changing environment. The use of KPIs should be fluid and flexible, reflecting the change in business priorities as a result of the change in the operating environment.
  9. Review KPI relevance regularly. If new KPIs are required, they can be established at any time - an essential aspect of double-loop learning. Using KPIs is not only about achieving set targets and objectives, but also about ensuring the objectives and targets were the right ones to set in the first place and the KPIs used to track their achievement were the appropriate ones.
  10. KPI selection and target setting should be done in accordance with organizational maturity and direction. There is no one size fits all approach when it comes to using KPIs. As strategies vary from one organization to another, the use of KPIs also varies.
  11. Project milestones are not KPIs. Understanding the difference between what is and what is not a KPI is a prerequisite of successful KPI selection.
  12. Targets are not KPIs. Understanding the anatomy of a KPI is essential in KPI selection and usage.
  13. Some things are not worth measuring. For example, measuring love might not be such a good idea. Not everything that can be measured should be measured with KPIs.
  14. Some things are too difficult to measure. For example, cuteness. The “measuring everything that moves” mentality should be avoided.
  15. Eliminate or replace inactive KPIs with simpler, yet measurable ones. Using some KPIs may have seemed a good idea at the time of their selection, however if measuring them proves to be too costly or time consuming, they should be replaced. An active KPI is better than an inactive KPI.
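
As an illustration of point 7, the value creation sequence can serve as a simple triage when reviewing candidate KPIs. The four stages come from the list above; the candidate KPIs and the coverage helper below are hypothetical examples, not a prescribed method.

```python
from enum import Enum

class KPIStage(Enum):
    INPUT = "input"        # resources consumed
    PROCESS = "process"    # how the work is performed
    OUTPUT = "output"      # what is produced
    OUTCOME = "outcome"    # the value ultimately generated

# Hypothetical candidate KPIs mapped to the value creation sequence
candidates = {
    "$ Investment in learning per employee": KPIStage.INPUT,
    "% On-time delivery": KPIStage.PROCESS,
    "# Website visits": KPIStage.OUTPUT,
    "% Profitable customers": KPIStage.OUTCOME,
}

def uncovered_stages(kpis: dict) -> set:
    """Return the stages of the value creation sequence not yet covered."""
    return set(KPIStage) - set(kpis.values())

print(uncovered_stages(candidates))  # set() -> all four stages are covered
```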

Aurel Brudan
Performance Architect,
www.smartKPIs.com

smartKPIs.com Performance Architect update 36/2010

A taxonomy of sources used for KPI selection

Working with Key Performance Indicators (KPIs) requires selecting a group of relevant KPIs first. There are many options for this: start with a blank page, review other sources, or get someone else (such as a consultant) to do this for you, among others.

Some of the general rules to follow when embarking on such a journey are:

1. Do your research. Selecting KPIs is a learning experience, a journey in itself. There are many insights to gain by taking it step by step instead of just getting to the destination. Research is an important component of this journey.

2. Acknowledge the uniqueness of your environmental settings. While some KPIs are widely used across organisations (e.g. % Satisfied customers, $ Sales revenue and % Profit rate), others are unique to each organisation, as they reflect its strategy and specific operating conditions. Each organisation should select KPIs based on their relevance and not on their popularity.

3. Clarify what you want to achieve. If you want to improve things and learn from KPIs, you should not avoid selecting challenging KPIs that are difficult to measure or difficult to improve. The easy choice is selecting KPIs that make you look good. While this may serve some purpose in the short term, over the medium to long term it will impact the relevance and credibility of KPIs in the organisation.

With these general rules in mind, the question is: “Where do we do our research to inform the KPI selection process?”. The main sources of information can be grouped into three categories:

Primary sources

  • Front-line employee input – they are at the core of the value generation chain and know what matters for operational success.
  • Input from managers – due to their perspective across the value generation process, their role in shaping strategy and their relationship with various stakeholders.
  • Board input – in many instances they mandate the use of specific KPIs, and their inclusion in strategic / operational plans is non-negotiable.
  • Input from suppliers – their insight into the supply chain is valuable, as they can bring an external perspective to what needs to be measured and improved.
  • Customer input – their opinion matters.

Secondary sources

  • Strategic development plan (3-5 years)
  • Annual business/strategic plan
  • Annual reports
  • Internal operational reports
  • Competitor review reports

External sources

Individually or in combination, these sources can generate a list of prospective candidates for KPI selection, anchored to organisational objectives. Ultimately, the decision on which KPIs will be used should be based on discussions within the organisation to determine the most relevant ones. Consultants can be useful in this process as facilitators, but not necessarily as “fountains of truth”. Their role should be that of guides on this journey, providing tools, information and advice, but not developing the final list of selected KPIs in an ivory tower. Enjoy the journey!

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan
Performance Architect,
www.smartKPIs.com

smartKPIs.com Performance Architect update 35/2010

10 Key Performance Indicators for 2010

smartKPIs.com contains over 5,000 KPI examples from 14 functional areas and 24 industries. A question raised by many is: ‘If you were to pick a handful, which ones would stand out?’

I have selected below 10 KPI examples of what we consider to be smartKPIs: they are widely used and relevant, the superstars of KPIs. This is not to say every company should use them; it is simply a list of 10 KPI examples anyone should take note of:

% Net profit rate – A profitable business is a sustainable business. It is however important to have realistic expectations. Returns of over 30% may be speculative, while in some economies returns of under 5% are lower than interest rates.

$ Revenue – Growing revenue is an expression of having the right product/service mix, supported by the right team, delivered at the right time. Converting opportunities into sales is the essence of a sustainable business.

% Profitable customers – getting the balance right is the basis for financial success. Although it is oftentimes difficult to track, it adds a great deal of insight and informs decision making. Activity-based costing is key to getting this indicator right.

# Net Promoter Score – having customers that are not only satisfied, but are actively endorsing a company/product/service. Recently it has become a favourite indicator of customer satisfaction, due to its simplicity and relevance.
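
For readers unfamiliar with the calculation: the Net Promoter Score is the percentage of promoters (ratings of 9-10 on a 0-10 recommendation question) minus the percentage of detractors (ratings of 0-6). A minimal sketch with made-up survey data:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Made-up survey responses, for illustration only
print(net_promoter_score([10, 9, 8, 7, 6, 10, 9, 3]))  # 4 promoters, 2 detractors -> 25.0
```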

% On-time delivery – an operations-focused KPI with wide-reaching implications. It can be used in a variety of industries and functional areas, as time is an important resource to anyone. It often acts as a bottleneck, as it is influenced by many indicators and impacts a great deal of other indicators.

% Projects on time, on budget and according to specifications – getting the triangle right is difficult and priorities may vary from one project to another. It is however a useful base to start from. It can be customised as per the preference of project boards and project managers to cover only specific aspects of the triangle.

% Processes optimised – one key managerial responsibility is creating the right environment for staff members to operate in. This includes using a management system that is well thought out and refined. Mapping and improving work processes is key to a performance-oriented architecture.

# Employee engagement – Some say money can’t buy it. It is that extra level of commitment induced by motivating purposes, inspiring leaders and working environments that facilitate happiness in professional life.

# Proposed improvement ideas per employee – inspired by H.W. Heinrich’s work in the 1930s, or “the Pyramid Theory” as some call it. The main results are visible at the top, but you need to monitor the base to ensure the right outcomes are achieved.

$ Investment in learning per employee – Not the ideal indicator of training impact, but a widely used substitute. It monitors both training spend and the broad allocation of funds, to avoid serial trainees.

An issue with KPI examples is that names don’t tell the complete story. To find out more about each of these examples and thoroughly understand them, a separate blog post would be required for each, complemented by a complete KPI documentation form. In the meantime, www.smartKPIs.com is available for further exploration of relevant and well documented KPI examples.

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan
Performance Architect,
www.smartKPIs.com

smartKPIs.com Performance Architect update 31/2010

Learning from practice - A brief history of performance measurement

Measurement is in the realm of mathematics. It is about keeping track, about establishing dimensions. Some of the earliest measurement activities in human history trace back to 35,000 B.C. (Lebombo bone) and 9,000-6,500 B.C. (Ishango bone). Researchers consider them the first measurement tools in human history, used for measuring intervals of time.

The Salamis metrological relief, dating back to the 4th century B.C., is considered an important measurement tool for architecture, as it illustrates the correlation between the different measuring systems used in Ancient Greece: Doric, Ionic and Common. This unification facilitated the construction of one of the symbols of civilization: the Parthenon, incorporating beauty, science and art.

In business, measurement is linked to the use of money and can be traced back to Mesopotamia, where writing was first invented (3100 BC), banking was first developed (3000-2000 BC), and laws were first used to regulate banking operations (1792-1750 BC, The Code of Hammurabi).

Standards around measurement in a business environment are owed to the Venetians, who evaluated the performance of their sailing expeditions by calculating the difference between the investment made by the ship owner and the money obtained by selling the goods brought back from the journey. Venetian merchants’ need for a more elaborate approach to evaluating outcomes led to the double-entry bookkeeping system, described in Luca Pacioli’s ‘Summa de arithmetica, geometria, proportioni et proportionalita’ (‘Everything on arithmetic, geometry, proportions and proportionality’), published in Venice in 1494. While Pacioli is considered today the “father of accounting”, the emergence of the discipline represents one of the earliest illustrations of learning from practice.

From this point on, the evolution of measurement in business was driven by three institutions: the church, the military and the public service, at both organizational and individual level. In the mid-1500s, Ignatius of Loyola instituted a procedure to formally rate members of the Jesuit Society. In 1648, the Dublin Evening Post in Ireland evaluated legislators by using a rating scale based upon personal qualities. Most Western armies were conducting appraisals as early as the 19th century.

One of the earliest books on performance measurement that used the term “measure” in the context of evaluating performance is Efficient Democracy, by William Harvey Allen. It was written in 1907, before the age of management consultants, business schools and strategy gurus. Allen was a practitioner, secretary of the Committee on Physical Welfare of School Children and General Agent of the New York Association for Improving the Condition of the Poor. He wrote on education, healthcare and philanthropy.

In 1920-1925, DuPont started using Return on Investment as a performance measure, one in a long series of business and technology innovations that emerged from the company.

In 1951, General Electric introduced the use of key corporate performance measures, through an initiative commissioned by the then CEO, Ralph Cordiner. The selected measures were grouped into categories such as market share, productivity, employee attitudes and public responsibility.

In the 1970s, General Motors used a system of performance measures that included non-financial indicators, considered a precursor of the Balanced Scorecard as a measurement tool, introduced in 1992.

In the 1990s, the use of performance measures gained popularity across a variety of sectors, most importantly in government. Not all implementations of performance management systems were smooth sailing, and sometimes they generated more harm than good. However, both good and bad experiences contributed to making more informed decisions about the use of measures by learning from practice.

Where does all this history lead us? Practice has led to the emergence of management concepts, and not the other way around. The use of performance measures has evolved organically over time, with consultants being facilitators and enablers of better results, but not drivers.

Regarding the popularity of performance measurement terminology, as of August 2010, www.google.com searches returned the following results:

  • “kpi” = 9,670,000 results
  • “kpis” = 3,480,000 results
  • “key performance indicator” = 215,000 results
  • “key performance indicators” = 1,190,000 results
  • “performance measure” = 1,150,000 results
  • “performance measures” = 2,180,000 results

Ultimately, as Protagoras of Abdera said in Ancient Greece: “Man is the measure of all things.”

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan
Performance Architect
www.smartKPIs.com

smartKPIs.com Performance Architect update 28/2010

From performance management for control to performance management for learning

The following is an excerpt from a conference paper presented at the 2009 Performance Measurement Association Conference in Dunedin, New Zealand. An edited version of the paper was published in the Measuring Business Excellence journal in 2010 (Vol. 14, No. 1), under the title “Rediscovering performance management: Systems, learning and integration”.

Traditionally, organisational performance management has been concerned with control, by setting and monitoring the achievement of targets at strategic, operational and individual levels. Measurement has its benefits, as it provides valuable information, and measuring in itself stimulates higher performance. The Hawthorne effect and the Westinghouse effect or “Observer’s paradox” (Cukor-Avila, 2000) demonstrate the delicate nature of the measuring process and the impact that measurement itself has on the results.

At strategic level, senior management, supported by management accountants and finance professionals, focus their efforts on translating organisational objectives into quantifiable targets. These objectives and targets are delegated to functional areas for implementation. Compliance with set targets is checked on a regular basis. These strategic objectives are then aligned with operational objectives and individual performance objectives. However, empirical evidence shows that the focus on measurement and control in the context of performance management started to diminish in the 1990s, driven by the increase in popularity of the Balanced Scorecard (BSC), Knowledge Management and Systems Thinking. The BSC itself was first presented in 1992 as a measurement tool, promoted by the management accounting school and having roots in the quality movement. However, it evolved quickly to become a complete management system supporting strategy implementation as a core competency. As a performance management concept, the BSC enables not only measurement and control, but also communication and learning.

This shift is supported by proponents of the knowledge management/intellectual capital school of thought, who argue that “the main problem with all measurement systems is that it is not possible to measure social phenomena with anything close to scientific accuracy” (Sveiby and Armstrong, 2004). They invoke Heisenberg’s uncertainty principle to illustrate the inherent imprecision in measurement that exists even in “exact” sciences such as physics. The principle states that uncertainties, or imprecision, always turn up if one tries to measure the position and the momentum of a particle at the same time (Cassidy, 1993, 1998). Niels Bohr famously stated that “Accuracy and clarity of statement are mutually exclusive” (for further details see Pais, 1994).

Measurement for rewards leaves room for interpretation in the process of setting targets and measuring results, and quite often leads to abuse. Using targets for control and linking the achievement of these targets to individual performance carries the risk of staff members manipulating the system to their benefit and at the expense of other teams and even the entire organisation.

The alternative proposed to measurement for control is measurement for learning, as illustrated by the table below:

Characteristic | Measurement for control | Measurement for learning
Measurement drivers | Management | Employees
Measures development | Top-down commands | Process-oriented, bottom-up approach
Measurement role | Measuring and managing work in functional activities | Measuring and managing the flow of work through the system
Measurement focus | Productivity output, targets, standards: related to budget | Capability, variation: related to purpose
Results communication | Restricted | Open
Target driven by | Budget/political aspirations | Understanding achievement versus purpose
Follow-up to results | Rewards, punishment and action to improve results | Dialogue and improvement
Learning cycle | Single loop | Double loop
Link to rewards | Individual rewards and recognition system | Group rewards, based on improvement
Table: Measurement for control compared to measurement for learning, Brudan, 2010

A mechanistic view on performance management, focused on measures and targets in isolation, pay-for-performance, control and rhetoric, frequently leads to suboptimal results. Opposed to this is a Systems Thinking based view on managing performance, which, coupled with the emphasis on learning, highlights the need for an integrated approach to performance management. Effective performance management requires more than measuring and reporting in isolation, more than control and rewards. It requires an organic performance architecture that values performance management for learning and is informed by a more humanistic performance philosophy.

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan

Performance Architect,
www.smartKPIs.com

References

Brudan, A.N. (2010) “Rediscovering performance management: systems, learning and integration”, Measuring Business Excellence, Vol. 14, No. 1, pp. 109-123.

Cassidy, D. (1993) “Uncertainty: The Life and Science of Werner Heisenberg”, W. H. Freeman, New York, pp. 226-246.

Cassidy, D. C. (1998) “Answer to the Question: When Did the Indeterminacy Principle Become the Uncertainty Principle?”, American Journal of Physics, No. 66, pp. 278-279.

Cukor-Avila, P. (2000) “Revisiting the Observer’s Paradox”, American Speech, Vol. 75, No. 3, pp. 253-254.

Pais, A. (1994), “Niels Bohr’s Times: In Physics, Philosophy, and Polity”, Oxford University Press, Oxford, England, pp. 304-309.

Sveiby, K. E. and Armstrong, C. (2004) “Learn to Measure to Learn!”, opening keynote address at the Intellectual Capital Congress, Helsinki, 2 September 2004.

smartKPIs.com Performance Architect update 24/2010

Benchmarking, Rank Xerox and Canon

Benchmarking as a management concept is reported to have its roots in land surveying, where the altitude of objects is estimated based on a pre-established point of reference on an arbitrary landmark (McNary, 1994). Frederick Taylor is reported to be the first to use benchmarking, along with other principles, in a business enterprise to improve performance. Elements of benchmarking can be recognized in Taylor’s scientific management approach applied during his time at Bethlehem Steel Company (McNary, 1994), popularized in “The Principles of Scientific Management”.

Benchmarking as we know it today was first applied by the Xerox Corporation in the late 70s and early 80s. Faced with increased competition from Japanese imports, Xerox set out to improve its order fulfillment process and other processes deemed unproductive. One of the first accounts of the “competitive benchmarking” approach at Xerox was given in 1992 by Rob Walker, the Director of Business Management Systems and Quality at Rank Xerox (U.K.) Ltd. at the time. In his article “Rank Xerox – Management Revolution”, he describes in detail the challenges, the changes made and the impact of the “competitive benchmarking” approach at the company. Under the “competitive benchmarking” initiative, Xerox compared itself to its Japanese competitors as well as large organizations operating outside of the industry: “American Express for billing and collections, American Hospital Supply for automated inventory control, LL Bean for distribution, warehousing and order-taking” (Walker, 1992).

The ascent of benchmarking in the 80s resulted in numerous books and articles being published, and was reflected in the business environment by an increase in the use of benchmarking around the world. Comparing to others is natural to humans, so benchmarking was rather easy to understand in theory. Applying it in practice and generating value from it is a different story.

In sports, and tennis in particular, performance metrics are monitored by players and coaches to track progress and how the game plan was executed. In terms of benchmarking KPIs between players, this needs to be explored with care. The playing style is different from one player to another. One player might have a very powerful, but generally inaccurate, serve. Another might have a high percentage of net approaches, but ineffective ones. On top of this, in tennis, concentration and determination are in many instances more important than game statistics. Similarly, in business, many companies march to a different tune. While benchmarking sounds good in theory, there are many practical issues relating to data accuracy and the relevance of results. There are many questions organisations need to clarify before embarking on such a road:

a. Who might the beneficiaries of such an exercise be?

b. What is the added value?

c. Who has done this well?

The graph below raises another question:

[Chart: Xerox vs. Canon stock price performance over the last 10 years. Source: Google Finance, 2010]

Did it ultimately work for Rank Xerox?

or even

What did Canon do differently to generate such a gap in stock price performance over the last 10 years?

Comparing performance across entities is even easier today. The availability of information technology and rich datasets facilitates benchmarking across multiple dimensions. However, embarking on benchmarking initiatives because “it seems to be a popular tool” or because it was recommended by a consultant can be risky. The same applies if it is pursued “just because we can” or with unreliable data. Done properly, it might still be a good idea overall, but then another question needs to be asked:

Are there any other better ideas?

Yet again, such questions put initiatives management in a new light.

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan

Performance Architect,
www.smartKPIs.com


References

McNary, Lisa D. 1994, “Thinking about excellence and benchmarking”, The Journal for Quality and Participation, July-August 1994, Vol. 17, No. 4, p. 90.

Walker, Rob 1992, “Rank Xerox – Management Revolution”, Long Range Planning, Vol. 25, No. 1, pp. 9-21.

smartKPIs.com Performance Architect update 22/2010

Performance Management case study: Plan - Do - Check - Act (PDCA) in a non-profit organization

Improving children’s quality of life in developing countries is today a priority of thousands of not-for-profit organizations. It is a difficult journey, influenced by many macro- and microeconomic, political, social, cultural and religious factors. Many such efforts are structured in programs and projects. Monitoring not only their implementation but also their impact is required both for tracking whether they make a difference and for attracting new funding and other resources for future programs.

Overall, many non-profit programs employ robust performance management systems to support the achievement of their purpose. Designing and using such systems is not as straightforward as it may seem.

Organisation

A non-profit organization.

Setting

The organization operates in both urban and rural regions, implementing programs and projects targeting specific health and early childhood development issues.

Mandate

Improve the health and education of children in at risk communities in developing countries.

Instruments

A performance management system is in place, linking objectives, performance indicators and initiatives.

Performance indicators

To monitor the achievement of this objective, a set of performance measures can be established, targeting some of the specific issues to be addressed. For example:

% Incidence of anemia

# Average scores on language and communication skills for toddlers

# Average scores for vocabulary tests

Scenario

The organization follows the standard Deming cycle applied in a performance management context: Plan-Do-Check-Act (PDCA). Each year it formulates a plan of activities, specifying objectives, performance indicators and projects to be implemented. It monitors results every six months, when, following an analysis of these results, review meetings take place. They generally result in a recalibration of initiatives, and sometimes new ones are established. Several programs and projects are running at any time, aimed at raising community awareness of health and educational issues. Additional projects target specific issues such as improving the economic situation of the families in the community, better equipping the kindergarten / primary school and training the educators.

Some success was reflected by the reduction in the incidence of anemia and the improvement in scores.

However, after a while, the performance reports started to reflect a stabilization of results and no further improvements were achieved.
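
A minimal sketch of the semiannual “Check” step in this PDCA loop, assuming hypothetical targets and results (the indicator names follow the list above; the numbers are made up for illustration):

```python
# Hypothetical semiannual PDCA "Check": compare results against targets.
targets = {
    "% Incidence of anemia": (12.0, "decreasing"),            # target, good direction
    "# Average language and communication score": (75.0, "increasing"),
    "# Average vocabulary test score": (70.0, "increasing"),
}
results = {
    "% Incidence of anemia": 14.5,
    "# Average language and communication score": 76.0,
    "# Average vocabulary test score": 70.0,
}

for kpi, (target, direction) in targets.items():
    actual = results[kpi]
    on_track = actual <= target if direction == "decreasing" else actual >= target
    status = "on track" if on_track else "recalibrate initiatives (Act)"
    print(f"{kpi}: actual={actual}, target={target} -> {status}")
```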

Questions

  • What changes to the existing portfolio of projects and programs should the organization make to improve results?
  • How should the organization alter the Performance Management System in use to facilitate better results?
  • What approach to stakeholder management should the organization take to facilitate sustainable changes in the community?

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan

Performance Architect,
www.smartKPIs.com


Discuss the case in the smartKPIs.com Forum

This Performance Management Case Study is now available in the smartKPIs.com Forum, where members of the smartKPIs.com community are invited to contribute to the discussion (after logging in by using their registration details). New members are invited to join (for free) the smartKPIs.com community.

smartKPIs.com Performance Architect update 18/2010

Performance Management case study: Balancing on-time service and pay-for-performance in urban public transport

Delivering urban public transportation services today is a challenge due to the slow process of upgrading infrastructure and the general trend of population increase in large cities. Finding a balance between service delivery and punctuality requires careful planning and active monitoring of results. The case study illustrated below highlights some of the challenges and trade-offs that have to be explored by each operator.

Company

Urban public transportation operator.

Setting

While in many cities the local public transport is operated by the government, a trend that gained momentum over the last 15 years is the outsourcing of the operation of the infrastructure. This way, the government becomes a customer of a separate entity responsible for the service delivery to the wider public. The arrangement has benefits for both sides. The government shifts some of the pressure from citizens regarding the quality of the public transport services towards the operator, limiting a sensitive issue at election time. The operator manages a monopoly or, in some instances, is part of an oligopoly of service providers.

Mandate

Operate a safe and reliable public transportation system, delivering quality services for the public.

Instruments

To stimulate the improvement of services, Service Level Agreements clarify responsibilities of both parties and outline performance standards that the service operator needs to meet. A common element in such agreements is a pay-for-performance arrangement that rewards or penalizes the operator based on the achievement of set targets.

Performance indicators

Some of the commonly used KPIs in such agreements are:

% Planned services delivered (monitoring if services are operated)

% Punctuality (monitoring if the set schedule for each stop is followed)

These two KPIs are key to evaluating the customer experience. The first outlines if the routes planned to be serviced each day were delivered at all, while the second monitors how well these routes were serviced in terms of punctuality.
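
A minimal sketch of how a pay-for-performance clause built on these two KPIs might work, assuming hypothetical targets and a hypothetical flat rate per percentage point (real SLAs vary considerably in their formulas):

```python
def sla_payment_adjustment(delivered_pct, punctuality_pct,
                           delivered_target=99.0, punctuality_target=90.0,
                           rate_per_point=10_000.0):
    """Hypothetical clause: the operator earns (or loses) a flat amount per
    percentage point above (or below) each target, on top of the base fee."""
    adjustment = (delivered_pct - delivered_target) * rate_per_point
    adjustment += (punctuality_pct - punctuality_target) * rate_per_point
    return adjustment

# Operator beat the punctuality target but missed on services delivered:
# (97.0 - 99.0) * 10,000 + (92.5 - 90.0) * 10,000 = 5,000.0 (net bonus)
print(sla_payment_adjustment(delivered_pct=97.0, punctuality_pct=92.5))
```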

Scenario

One afternoon, a passenger on the way home from work gets on a bus, validates the ticket and takes a seat, waiting for the bus to arrive at the desired stop, the second last of the line.

At the fourth stop before the end of the line, the bus driver announces to all passengers that he has been requested to finish the route early, at that stop. Everyone is invited to get off the bus and, once the bus is empty, it skips the last three stops of the route, continuing with the return service.

Our passenger has paid the travel fare, expecting in return to arrive at the destination as planned. The early termination of the route by the public transport operator resulted in a diminished utility of the amount spent and in an incomplete journey.

Questions

  • What is the relationship between the KPIs used to track service performance and the decision to change the route of the bus?
  • What is the estimated impact of such actions on customer satisfaction?
  • Why aren’t customer satisfaction KPIs used as widely as service delivery KPIs in transport operator SLAs?

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan

Performance Architect,
www.smartKPIs.com


Discuss the case in the smartKPIs.com Forum

This Performance Management Case Study is now available in the smartKPIs.com Forum, where members of the smartKPIs.com community are invited to contribute to the discussion (after logging in by using their registration details). New members are invited to join (for free) the smartKPIs.com community.

smartKPIs.com Performance Architect update 17/2010

Performance Management case study: Ford Pinto – business ethics and performance measurement


Company

Ford Motor Company

Setting

In the late 1960s, Ford was facing increasing competition from domestic carmakers and Japanese imports.

Mandate

In June 1967, Ford started planning a new model that would outdo the competition. Lee Iacocca, Vice-President and head of production at the time, championed the project that was meant to deliver what was nicknamed “Lee’s car”. Iacocca formulated a set of performance indicators with specific targets to define the parameters of the new product: “The Pinto was not to weigh an ounce over 2,000 pounds and not cost a cent over $2,000” (Dowie, 1977).

Approval process

  • December 1968 – Project “Phoenix” obtained the approval of Ford’s Product Planning Committee for the Pinto’s basic design concept (Schwartz, 1991)
  • January 1969 – Ford’s Board of Directors, chaired by Henry Ford II, gave its approval for Ford’s first domestic sub-compact: the Ford Pinto (Schwartz, 1991; Consumer Guide Auto, 2010a)

Performance indicators

The product objectives were listed in Pinto’s “Green book”, “a manual in green covers containing a step-by-step production plan for the model, detailing the metallurgy, weight, strength and quality of every part in the car” (Dowie, 1977).

1. True subcompact

• Size

• Weight

2. Low cost of ownership

• Initial price

• Fuel consumption

• Reliability

• Serviceability

3. Clear product superiority

• Appearance

• Comfort

• Features

• Ride and Handling

• Performance

Targets

The main targets were (Dowie, 1977):

# Weight of the car – Target: under 2000 lb (907kg)

$ Cost – Target: under $2,000.

# Time from conception to production – Target: 25 months (at almost half the average of 43 months, this was estimated at the time to be the shortest production planning period in modern automotive history).

Results

The achievement of the main targets was as follows (Consumer Guide Auto, 2010):

# Weight – Actual: 1,949 lb (884kg)

$ Cost – Actual: $1,919

# Time from conception to production – Actual: less than 20 months, as the model was launched on 11 September 1970 and the first delivery took place on 13 September 1970.

Major design problem

At rear-end collisions of over 30 miles/hour (48 km/hour), the rear end of the car would buckle and the fuel tank would break and burst into flames. Ford did 11 rear-end crash tests, averaging a 31-mph impact speed, before Pintos went on sale. Some records reveal that rear-end collision tests on the Pinto took place in December 1970, months after it was already in production (Consumer Guide Auto, 2010). Regardless of the date, out of the 11 tests at 31 miles/hour, only three cars passed the test with unbroken fuel tanks.

Explored solutions

Option 1

Replace the fuel tank with the one used in Ford Capri. It would have been located over the rear axle and differential housing, with much better protection from rear-end impacts.

Decision: Option disregarded due to the impact on trunk space.

During the analysis process a Ford engineer stated: “But you miss the point entirely. You see, safety isn’t the issue, trunk space is. You have no idea how stiff the competition is over trunk space. Do you realize that if we put a Capri-type tank in the Pinto you could only get one set of golf clubs in the trunk?” (Dowie, 1977).

Options 2-4

Three alternative solutions were analyzed pre- and post-production (Consumer Guide Auto, 2010):

• A plastic insulator fitted on the differential that would keep the bolts from ever making contact with the fuel tank. Cost of this item was less than $1.

• The use of a rubber bladder/liner produced by the Goodyear Tire and Rubber Company, at a unit cost of $5.08 per car.

• An extra steel plate attached to the rear of the car just behind the bumper, at a unit cost of up to $11 per car to install.

Decision: Options disregarded due to the impact on costs.

A cost-benefit analysis was conducted to determine the costs associated with implementing such solutions versus the benefits generated by avoiding possible lawsuits resulting from accidents where the gas tank position played a role in injuries or fatalities.
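
The arithmetic behind that analysis is straightforward. The figures below are those widely attributed to Ford’s internal memo as reported by Dowie (1977); note that Schwartz (1991) disputes how directly the memo applied to the Pinto’s rear-end issue:

```python
# Figures as reported in "Pinto Madness" (Dowie, 1977)
units = 12_500_000           # cars and light trucks affected
fix_cost_per_unit = 11.0     # $ per vehicle for the tank modification
total_fix_cost = units * fix_cost_per_unit               # $137,500,000

deaths, cost_per_death = 180, 200_000      # projected burn deaths
injuries, cost_per_injury = 180, 67_000    # projected serious burn injuries
vehicles, cost_per_vehicle = 2_100, 700    # projected burned vehicles
total_benefit = (deaths * cost_per_death + injuries * cost_per_injury
                 + vehicles * cost_per_vehicle)          # $49,530,000

print(f"Cost of the fix:       ${total_fix_cost:,.0f}")
print(f"Lawsuit costs avoided: ${total_benefit:,.0f}")
# With costs nearly three times the projected benefits, the memo's
# conclusion was that the fix was not "cost-effective".
```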

The Pinto went on sale without the gas tank issue being addressed, while nevertheless meeting all the targets outlined in the Green book.

Financial and market outcomes for Ford

* The first domestically produced Ford passenger car with a four-cylinder engine since 1934.
* The segment market share of imports was reduced from 15.2% in 1971 to 14.8% in 1972.
* The model made an important impact on Ford’s profits during the 1974 OPEC oil embargo. As the most fuel-efficient model Ford produced at the time, more Pintos were built (544,209) than all full-sized models sold together (461,000) (Consumer Guide Auto, 2010).
* 2,924,773 Pintos were built between 1971 and 1980 (Consumer Guide Auto, 2010).

Reputation and ethical outcomes

* 500 burn fatalities of people who would not have been seriously injured if the car had not burst into flames (Dowie, 1977); National Highway Traffic Safety Administration records place this figure at 27 fatalities (Schwartz, 1991).
* In September 1978, Ford issued a recall for 1.5 million 1971-76 Pinto sedans and Runabouts, making it the largest recall in the industry up to that time.
* Millions of dollars in lawsuits were filed and won against the automaker, including the largest personal injury judgment ever.
* In the 1979 landmark case State of Indiana v. Ford Motor Co., Ford notoriously became the first American corporation ever indicted or prosecuted on criminal homicide charges. Ford was found not guilty in March 1980 (Schwartz, 1991).

Other outcomes

1970 - Lee Iacocca, the “father” of the Pinto, became President of Ford Motor Company

1977 - “Pinto Madness”, an article revealing the story behind the Pinto to the public, was published by Mother Jones magazine. It went on to win a Pulitzer Prize.

1978 – Lee Iacocca was fired by Ford Motor Company. His relationship with Henry Ford II, chairman of the board and chief executive officer (CEO), had become tense as a result of the Pinto scandal.

1979 - Chrysler Corporation recruited Lee Iacocca as their President and CEO, where he served until his retirement in 1992.

1991 - “The myth of the Ford Pinto case”, an article revealing some of the inaccuracies in the Pinto scandal, was published in the Rutgers Law Review. It concludes with: “If the Ford Pinto case did not exist, law professors would need to invent it: for the case raises essential issues about both the form and the substance of modern products liability doctrine” (Schwartz, 1991).

From a Performance Management point of view, the set targets were met and the desired financial outcomes were realized. However, this case raises questions regarding ethics, risk management and the purpose of using objectives and targets.

Stay smart! Enjoy smartKPIs.com!

Aurel Brudan

Performance Architect,
www.smartKPIs.com


References

Dowie, M. 1977, “Pinto Madness”, Mother Jones, September/October issue.

Schwartz, G. 1991, “The myth of the Ford Pinto case”, Rutgers Law Review, Vol. 43, p. 1013.

Consumer Guide Auto, 2010, “1971-1980 Ford Pinto”. Available at: http://auto.howstuffworks.com/1971-1980-ford-pinto.htm, accessed on 3 May 2010.