Guest Post: ‘Is it time to ‘NUSAP’ your data?’


This post was written by Bob Kennedy PhD, Lecturer, Institute of Technology, Sligo.

One of the eight quality management principles underpinning the ISO 9000 series, and reinforced in the excellence models, is the need to base management decisions on facts. These facts, for most of us, are the data we collect through either observation or measurement.

Data collected through observation can be called attribute, discrete or count data. 'There are six eggs in the carton' is an example. Others would be: the place is very crowded, the patient is flushed, or 55% of the people present are women.

Data collected by measurement is called variable or continuous data. The stent has a coating of 6 microns: here we did not count the six microns but determined it through some form of measurement. Similarly we might say the patient has a temperature of 39 degrees C, or that the average height of the people present is 1.72 m.

There are six eggs in the carton – attribute data.

There is a coating of six microns on the stent – variable data.

Variable data is always a number, but attribute data is sometimes just an indication or scale of things, e.g. the patient is flushed.

For now I want us to focus on variable data which has been collected through measurement. This data requires special attention before we bestow the lofty title of 'facts' upon it. Kimothi [2002] advises us to test the validity of variable data by using the NUSAP approach. NUSAP is an acronym for: Number, Units, Spread, Assessment, Pedigree. It really is a check on the quality of the data we wish to use to help us make decisions. The first three are the most important and will be addressed here.

Returning to our stent coating: as you look at the recorded data you see 6 microns [6µm]. Immediately you know the number and the unit, so we are two fifths of the way to satisfying the NUSAP criteria. But now you are left wondering how certain you are about the 6µm result. Pondering this uncertainty leads you naturally to think about the range or spread that the 6µm value really represents. In effect you are wondering whether, if this exact measurement were repeated, it would give exactly the same result. Without even knowing it, you are grappling with the concept of measurement uncertainty.
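
To make the idea concrete, here is a toy sketch in Python (the class and field names are my own, not from Kimothi) of what it means to record a result as a number, a unit and a spread together, rather than as a bare value:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MeasurementResult:
    """A result recorded NUSAP-style: number, unit and spread kept together."""
    value: float                    # Number: the recorded value
    unit: str                       # Units: e.g. "um" for microns
    spread: Optional[float] = None  # Spread: measurement uncertainty, if known

    def describe(self) -> str:
        if self.spread is None:
            return f"{self.value} {self.unit} (spread unknown - not yet a 'fact')"
        return f"{self.value} {self.unit} +/- {self.spread} {self.unit}"

# The recorded stent coating: number and unit only, two fifths of NUSAP.
coating = MeasurementResult(value=6.0, unit="um")
print(coating.describe())
```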

From experience, and thanks to the work of statisticians, we know that all variable data have a spread of uncertainty associated with them. Repeat measurements carried out under identical conditions will show a level of variation, a spread, associated with them. This variation or spread is normal and common, and unless you change the measurement process there is nothing you can do about it. If you have a set of repeat readings, their scatter gives a first feel for that spread, as sketched below.
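
A minimal sketch of that idea, using invented readings (the statistics module is part of the Python standard library):

```python
import statistics

# Hypothetical repeat measurements of the same coating, in microns,
# taken under identical conditions (these readings are invented).
readings = [6.0, 5.0, 7.0, 6.5, 5.5, 6.0, 7.5, 5.0]

mean = statistics.mean(readings)
spread = statistics.stdev(readings)  # sample standard deviation

print(f"mean = {mean:.2f} um")
print(f"standard deviation (spread) = {spread:.2f} um")

# A Type A evaluation builds on exactly this kind of statistic; an expanded
# uncertainty is commonly quoted as roughly 2 standard deviations (k = 2).
print(f"approx. expanded uncertainty (k = 2) = +/-{2 * spread:.2f} um")
```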

This reality confronts us with two questions:
Do you know what the spread or uncertainty is in your measurement processes?
Is it acceptable?

Determining the spread, variation or uncertainty of a measurement process is a scientific matter. It can be very complex, using some heavy statistics [Type A evaluation], or it can be equally scientific but based on experience and other available information [Type B evaluation]. Here I will just show you a simple approach.

A micrometer has a stated accuracy of ±1µm. Suppose we used this micrometer to measure the coating thickness on the stent. Now when you look at the data, 6µm, you know that, based on the accuracy of the instrument alone, there is an uncertainty of ±1µm associated with every result of measurement. But this isn't the whole story. You also know that measurement is a process involving many interacting elements. These include the instrument, the method of measurement, the person, the product, the environment and the calibration process. You know the instrument alone is contributing ±1µm, so what do you think is being contributed by the others? You might wish to compile an uncertainty value for your own measurement processes by assigning a level of expected variation to each element. In doing so you will be constructing a crude version of what is known as an uncertainty budget. For a more 'scientific' one you will need to apply Type A and/or Type B evaluation as mentioned earlier.
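
As a sketch of such a crude budget (the contribution values below are invented for illustration, not taken from the article), one common convention is to combine the individual contributions in quadrature, i.e. as a root sum of squares:

```python
import math

# Hypothetical standard-uncertainty contributions, in microns, assigned to each
# element of the measurement process. These numbers are illustrative only.
contributions = {
    "instrument":  1.0,   # the micrometer's stated accuracy
    "method":      2.0,
    "person":      3.5,
    "product":     2.0,
    "environment": 1.0,
    "calibration": 1.0,
}

# Combine in quadrature (root sum of squares), a common convention when the
# contributions are treated as independent.
combined = math.sqrt(sum(u ** 2 for u in contributions.values()))

for element, u in contributions.items():
    print(f"{element:12s} +/- {u} um")
print(f"{'combined':12s} +/- {combined:.1f} um")  # roughly +/-4.8 um here
```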

I’m going to shortcut this and tell you that a person using a micrometer is unlikely to achieve an uncertainty better than ±5µm. Wow! This means that when I look at the data, 6µm, there is a level of uncertainty of ±5µm associated with it. In other words the recorded 6µm could be any value from 1µm to 11µm. Without changing the measurement process there is nothing you can do about it. But is this normal, common variation or uncertainty of measurement acceptable?

An unwritten rule exists to help us answer this question. It is called the Test Uncertainty Ratio [TUR] and it is as follows: the ratio of product tolerance to measurement uncertainty should be at least 4:1.

A product characteristic with a tolerance of ±4µm on a nominal 6mm therefore requires a measurement process with a measurement uncertainty of no more than ±1µm. The micrometer described earlier is not fit for this purpose: while the micrometer itself has an accuracy of ±1µm, the measurement process of which it is a part has an uncertainty of ±5µm.
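
A minimal sketch of that check (the function and variable names are my own, purely illustrative):

```python
def tur(tolerance: float, uncertainty: float) -> float:
    """Test Uncertainty Ratio: product tolerance divided by measurement uncertainty."""
    return tolerance / uncertainty

def fit_for_purpose(tolerance: float, uncertainty: float, minimum_ratio: float = 4.0) -> bool:
    """Apply the rule of thumb that the TUR should be at least 4:1."""
    return tur(tolerance, uncertainty) >= minimum_ratio

# Tolerance of +/-4 um measured with the micrometer-based process (+/-5 um):
print(tur(4.0, 5.0), fit_for_purpose(4.0, 5.0))  # 0.8 False - not fit for purpose
# The same tolerance measured with a process good to +/-1 um:
print(tur(4.0, 1.0), fit_for_purpose(4.0, 1.0))  # 4.0 True
```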

As always there is more to this than meets the eye, but I hope I’ve stirred your curiosity about giving your data the NUSAP treatment. You don’t need all the statistics to get a feel for the level of uncertainty, the spread, associated with your measurement processes. When you arrive at that figure, apply the 4:1 TUR rule to determine whether your measurement processes are really fit for purpose.

Reference:
Kimothi, S.K. The Uncertainty of Measurements. American Society for Quality, 2002.
