A key benefit of the TEAMS solution is that it helps Field Service Organizations improve the Quality of Service (QoS) while lowering the Cost of Service (CoS).
Without TEAMS, organizations rely on the individual intelligence of their Field Service Engineers (FSEs) to solve complex troubleshooting problems. But FSE skill levels vary, and that variation leads to inconsistent performance that no amount of training can fully overcome.
The consistent QoS delivered by TEAMS is therefore a key motivation for adopting the solution. But how do you measure and demonstrate consistent performance when so many variables affect the field service process? More to the point, while some variability is to be expected, how much variability is cause for concern?
Fortunately, quality measurement is a mature science, and we need look no further than “Control Charts for Attributes”, originally conceptualized in 1924 by Walter Shewhart and later extended by W. Edwards Deming.
The underlying theory is very simple.
Suppose you want a consistent success rate of S%, where success means the Guided Troubleshooting Solution enabled the FSE to identify the root cause of the failure and apply the appropriate corrective action. How would you periodically verify that you are still achieving an S% success rate?
If you periodically sample N cases, the standard deviation, s, of the estimate of S from those N measurements is inversely proportional to sqrt(N); for a binomial proportion, s = sqrt(S(1 − S)/N), with S expressed as a fraction. This makes intuitive sense: the more data you have, the more precise your estimate of S will be.
Assuming the cases you sampled are independent of each other, the observed success rate from N trials will fall between (S − 3s) and (S + 3s) about 99.7% of the time, and the number of successes will fall between N(S − 3s) and N(S + 3s). This is a basic property of the normal distribution, which we use here as an approximation to the binomial distribution of the success count.
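As a concrete illustration, here is a minimal Python sketch of these bounds under the normal approximation above (the function name and the clamping to [0, 1] are my own choices):

```python
import math

def rate_bounds(s_target: float, n: int) -> tuple[float, float]:
    """3-sigma (about 99.7%) bounds on the observed success rate from n
    independent cases, using the normal approximation to the binomial."""
    # The standard deviation of the estimated rate shrinks with sqrt(n).
    s = math.sqrt(s_target * (1.0 - s_target) / n)
    return max(0.0, s_target - 3.0 * s), min(1.0, s_target + 3.0 * s)

print(rate_bounds(0.80, 100))  # -> approximately (0.68, 0.92)
```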
So, what does this all mean? Let’s plug in some values to make sense of it.
Let’s assume your goal is a success rate of S = 80%. With a large enough data set, say thousands of troubleshooting test cases, you will see approximately 80% successful troubleshooting outcomes.
But how many successes should you expect if you sampled just 10, 30, 100, 300, or 1000 test cases? The following table gives you the answer:
| Number of Cases | Minimum Number of Successes | Expected Number of Successes | Maximum Number of Successes |
|---|---|---|---|
| 10 | 4 | 8 | 10 |
| 30 | 17 | 24 | 30 |
| 100 | 68 | 80 | 92 |
| 300 | 219 | 240 | 261 |
| 1000 | 762 | 800 | 838 |
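These bounds are simply N·S ± 3·sqrt(N·S·(1 − S)), rounded outward and capped at 0 and N. A short self-contained sketch that reproduces the table (the output formatting is illustrative):

```python
import math

S = 0.80  # target success rate
for n in (10, 30, 100, 300, 1000):
    expected = n * S
    sigma = math.sqrt(n * S * (1 - S))  # standard deviation of the success count
    lower = max(0, math.floor(expected - 3 * sigma))
    upper = min(n, math.ceil(expected + 3 * sigma))
    print(f"{n:5d} | {lower:4d} | {round(expected):4d} | {upper:4d}")
```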
You could monitor the accuracy of your Guided Troubleshooting Solution by periodically sampling N test cases and counting how many of them are successful. How should you interpret the results?
- An 80% success rate does not mean that every 5th case will go unsolved. In a small data set, expect large variation; that is no cause for alarm. Even 5 successes in 10 trials does not prove your long-run success rate has fallen below 80%. Gather more data; maybe you just had a bad day!
- Whenever you get a success count outside these bounds, do take a closer look. There is still a small chance that nothing is wrong, but maybe something else is going on. Has your target system changed, so that the models need to be updated? Is this a new operating condition? Are the FSEs getting confused by the instructions? Whatever it is, it is worth a second look.
- Do track the moving average. If your average is falling and you are consistently getting near-worst-case results, chances are your success rate is slipping. On the other hand, if you are consistently beating the average, maybe you are overachieving!
- These tests are known to be insensitive to small variations in the underlying process, which makes them well suited to measuring the performance of a global Field Service Organization. However, you can add a rule of thumb or two to improve their sensitivity (a sketch of how these checks might be automated follows this list). For example:
  - If two out of three consecutive samples fall more than two-thirds of the way from the expected value toward the best- or worst-case limit, or four out of five consecutive samples fall more than one-third of the way out, all on the same side of the expected value, you may want to take a closer look at the underlying data (these are variants of the classic Western Electric zone rules).
  - If you get 8 or more consecutive samples that are all above or all below the expected value, it may be time to reevaluate S.
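To make these checks concrete, here is a sketch of how the out-of-bounds test and the eight-in-a-row run rule might be automated; the function names, thresholds, and sample data are illustrative, not part of the TEAMS product:

```python
import math

def check_sample(successes: int, n: int, s_target: float) -> str:
    """Classify one periodic sample of n cases against 3-sigma control limits."""
    sigma = math.sqrt(n * s_target * (1 - s_target))
    if successes < n * s_target - 3 * sigma:
        return "below control limits: investigate"
    if successes > n * s_target + 3 * sigma:
        return "above control limits: investigate"
    return "within control limits"

def run_rule(rates: list[float], s_target: float, window: int = 8) -> bool:
    """Flag a sustained shift: the last `window` observed success rates all
    fall on the same side of the target (a Western Electric-style run rule)."""
    if len(rates) < window:
        return False
    recent = rates[-window:]
    return all(r > s_target for r in recent) or all(r < s_target for r in recent)

history = [0.78, 0.76, 0.79, 0.77, 0.75, 0.78, 0.76, 0.77]
print(check_sample(24, 30, 0.80))  # -> within control limits
print(run_rule(history, 0.80))     # -> True: eight samples in a row below target
```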
Hope this helps you measure success in your Field Service Organization. Let me know how it works out.