# 4: The Challenge of Measurement

### Aims

This chapter aims to summarise the different ways in which quality can be measured and to identify the tools and techniques available to measure quality in practice. It also aims to highlight some of the challenges of measurement.

### Outcome

After reading this chapter, the reader should better understand the dilemmas associated with measuring quality and be familiar with a number of tools that can help with quality measurement.

### Introduction

What gets measured gets done. This dictum dates from the earliest civilisations and has been at the heart of human endeavour and discovery; Egyptian wall paintings from around 1450 BC show evidence of measurement and inspection.

In practice, measurement tools enable us to measure many variables, from defects in our clinical procedures (impressions and casts, for example) to costs in relation to revenue. Key performance measures need to be easily understood and cost-effective to measure; they should not distort what is being measured, and they should be aligned with the objectives of the management and the processes they measure. We need to measure so that we know that changes made in clinical practice actually lead to improvements. A good measure is precise, objective and consistent: two individuals carrying out the same measurement should obtain the same value.

Quality should be measured based upon the exchange that is occurring. Depending on the objectives, it may be measured differently for the exchange between the commissioner and its provider, between the provider and the patient, and between the provider and his or her peers. This is highly relevant where there is a shift away from nationally managed services to locally managed services.

### Terminology

• A measure is an operation for deriving a numerical representation: “If we put someone on a scale, we can measure how many kilos they weigh.”

• A measurement is a discrete application of the measure: “If I stand on the scale, we can see that I weigh about 90 kilos.”

• A metric is an interpretation of a measure; it assigns meaning to the number provided by a measure. It is reflected in the statement: “I weigh 90 kilos.”

Performance measures should be developed so that they are both reliable and valid.

### Reliability

A reliable measure will give the same result every time that it is applied to the same aspect of healthcare. A thermometer is a reliable measure – it will show the same temperature each time it is used to measure the temperature in a particular location in a living room (provided the temperature has not changed). The latest generation of apex locators is reliable for the determination of working length during endodontic procedures – they will give the same reading if the reading is repeated. In contrast, electric pulp testers, for example, may not be quite as reliable.

### Validity

A valid measure will be reliable and will also measure the intended aspect. In the case of the thermometer, it should be positioned to measure the ambient temperature of the room; if placed by an open window, it will give a reliable reading but not a valid measurement.

### Example

A practice may record the number of written complaints it has received – for example, seven over the course of a year. The measure is reliable (no matter how many times you count the letters, there will be seven), but it is not valid because it does not relate the number of complaints received to the total number of patients seen, and/or the number of completed courses of treatment. A ratio is required for the measure to be valid. If the definition of a “complaint” is unclear, then the reliability of the measure is also compromised. This is particularly important for inter-practice comparisons, because some queries may be perceived as complaints and vice versa.
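The ratio principle behind this example can be sketched in a few lines of Python. The complaint count of seven comes from the example above; the number of completed courses of treatment is a hypothetical figure chosen purely for illustration:

```python
# Illustrative only: convert a raw complaint count into a rate so that
# practices of different sizes can be compared on the same footing.

def complaint_rate(complaints: int, courses_of_treatment: int, per: int = 1000) -> float:
    """Complaints per `per` completed courses of treatment."""
    if courses_of_treatment <= 0:
        raise ValueError("courses_of_treatment must be positive")
    return complaints / courses_of_treatment * per

# Seven written complaints, as in the example above; the treatment
# count of 3500 is assumed for illustration.
rate = complaint_rate(complaints=7, courses_of_treatment=3500)
print(f"{rate:.1f} complaints per 1000 completed courses")  # 2.0 complaints per 1000 completed courses
```

The raw count alone (seven) is reliable but not valid; the rate makes the figure comparable across practices of different sizes, provided each practice uses the same operational definition of a “complaint”.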

### Quality Indicators

The pursuit of quality in everyday practice is a long and never-ending journey; quality indicators are the signposts along that journey. In Chapter 1, quality indicators were defined as units of information which reflect, directly or indirectly, the performance of the practice in maintaining or increasing the wellbeing of its patients. By definition, indicators are imprecise. They are tools that are useful to clinicians and managers but, like any tool, the benefits from their use come from the way they are applied and also the purpose for which they are used.

Writing in The Medical Journal of Australia, Neil Boyce observed that: “Most current indicators of healthcare performance should be viewed as tools that prompt additional inquiry, rather than allowing definitive judgements on quality and safety of care. They may be defined as norms, criteria, standards, and other direct qualitative and quantitative measures used in determining the quality of care and can therefore be used to judge performance.”

Think of an indicator as a torch that shines light on an area of practice that merits further scrutiny. It is the more detailed inquiry prompted by the indicators that will lead to the development of quality measures.

Quality indicators can:

• Allow comparisons to be made between practices and against the gold standard.

• Help to identify unacceptable performance.

• Stimulate informed discussion and debate about the quality of care.

• Facilitate an objective evaluation of a quality improvement initiative.

A good example is the so-called practice visit in the UK. These visits or inspections are undertaken by a number of professional organisations for various purposes, ranging from educational bodies that assess a practice’s suitability for participating in a postgraduate education programme for recent graduates, to NHS Trusts, which often undertake visits when new practices are established in their area. The forms and processes used for these purposes are indicator-driven. When such visits and assessments take place, it is important that the results are interpreted in the broader context of the delivery of care rather than treated as definitive judgements.

### Quality Measures

Writing in the January 2000 issue of Experience in Practice, Professor John Øvretveit identifies the lack of measurement as often the weakest component of a quality programme: “Employees often do not have the skills to gather and use quality data, or do not see the need to do so. Professionals do not have the time.”

We should note that:

• No measure is perfect, but together with companion measures, it may still be quite useful.

• Consistency is important. A fundamental concept with measurement is that if we change our method of measuring – our operational definition – we change the data collected.

• If you cannot measure it, you cannot manage it.

Øvretveit states that: “Measurement should speed improvement, not slow it down. Often, organisations get bogged down in measurement, and delay making changes until they have collected all of the data they believe they require. Measurement, per se, is not the goal; improvement is the goal.” If we micromanage, we are likely to lose sight of the true purpose.

Quality can be measured in many different ways. The most appropriate measure to use in a given situation is determined by what we are attempting to understand. A measure that is appropriate for one purpose, like measuring the quality of care received by a group of patients under a particular system of remuneration, may not be appropriate for, say, measuring the effectiveness of a clinical guideline.

In general practice, we are likely to use measurements for quality improvement rather than research. This means that the measures need not be as robust or detailed as those required for research. The essential differences are summarised in Table 4-1.

Table 4-1: Measurement for research versus measurement for quality improvement

| | Measurement for research | Measurement for quality improvement |
|---|---|---|
| Purpose | To discover new knowledge | To introduce new knowledge into everyday clinical practice |
| Tests | Large randomised double-blind clinical trials | Many smaller, sequential, and observable tests |
| Biases | Methodology is designed to control for as many biases as possible | Stabilise the biases from test to test |
| Data | Gather as much data as possible, “just in case” | Gather “just enough” data to learn and complete another cycle |
| Duration | Can take long periods of time to obtain results and for the results to have an impact on clinical practice | “Small tests of significant changes” accelerate the rate of improvement |

Quality measures can be either quantitative or qualitative. They can be used to look at processes, outcomes and the patient experience.

### Quantitative Measures

A quantitative measure provides a quantitative indication of the extent, amount, dimensions, capacity, or size of some attribute of a product or a process.

There are different types of measures summarised in Table 4-2 below:

Table 4-2: Types of quantitative measures

| Type | Description |
|---|---|
| Binomial (i.e. binary) | Like a switch, it either is or isn’t. Discrete values such as “0” or “1”, “Y[ES]” or “N[O]”, and “T[RUE]” or “F[ALSE]” are examples of binary values. When stated in terms of a range of values, the value lies within (meets) or outside (exceeds) a threshold |
| Additive | Simple counts, or rates, of the entity of interest. Best suited for integer-value data |
| Ratio | The proportion of one value relative to another value |
| Averages | The mean of a population or sample of data |
| Statistical | Descriptive and inferential statistics |
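As a rough sketch, the first four measure types in Table 4-2 can be illustrated on a single set of audit data. The tray counts below are invented purely for illustration and do not come from any real audit:

```python
from statistics import mean

# Hypothetical daily counts of instrument trays inspected, and trays
# found with debris after sterilisation (illustrative numbers only).
trays_inspected = [20, 22, 18, 25, 20]
trays_with_debris = [1, 0, 2, 1, 0]

# Binomial: each day either meets the threshold (no debris) or it doesn't.
debris_free_days = [d == 0 for d in trays_with_debris]

# Additive: a simple count of the entity of interest.
total_debris = sum(trays_with_debris)              # 4

# Ratio: one value expressed relative to another.
debris_ratio = total_debris / sum(trays_inspected)  # 4 / 105

# Average: the mean of the sample.
mean_inspected = mean(trays_inspected)             # 21.0
```

The same underlying data can yield all four measures; which one is appropriate depends on what we are attempting to understand.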

### Binomial Measures

Binomial measures are most useful for compliance or criteria checks, where the answer is often yes/no. They are useful for checking the structural elements in a practice and compliance with legislation, and may also be used in some types of audit. The audit example of post-sterilisation debris on instruments given in Chapter 6 used a binomial measure for recording the data.
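A binomial compliance check of this kind can be sketched as a simple yes/no checklist. The criteria below are invented for illustration and are not drawn from any particular inspection form:

```python
# Hypothetical compliance checklist: each criterion is a binomial
# (yes/no) measure, so the practice either meets it or it does not.
checklist = {
    "instruments free of visible debris": True,
    "autoclave log completed today": True,
    "sharps bin below fill line": False,
}

passed = sum(checklist.values())   # True counts as 1, False as 0
total = len(checklist)
print(f"{passed}/{total} criteria met")  # 2/3 criteria met

# The overall binomial result for the practice: fully compliant or not.
fully_compliant = all(checklist.values())  # False
```

Note that summing the individual yes/no results turns a set of binomial measures into an additive one, while the `all()` check preserves the strict pass/fail character of a compliance inspection.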