‘Measuring the effectiveness of Learning & Development’ by Mary Jane Flanagan, Training Journal, December 2010

Ben Crowley

Feedback indicates that blog readers still appreciate reviews of articles about measuring Learning & Development effectiveness, even though the conclusions are normally highly unconvincing. Perhaps readers enjoy searching for the holy grail; perhaps it is reassuring to know that others have had similar problems in their searches and experiments.

As with similar articles, the opening paragraphs emphasise the need to measure ROI while giving no indication of understanding what it really means. Perhaps my financial background makes me too pedantic here, but there is a real equation that gets forgotten in the loose talk of many training consultants: ROI is incremental profit as a percentage of investment, and the problems arise from isolating and quantifying that incremental profit in a world that is complex and ever changing.
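To make the equation concrete, here is a minimal sketch of the arithmetic. The figures are purely hypothetical, chosen only to show the calculation; they are not from the article.

```python
def roi_percent(incremental_profit: float, investment: float) -> float:
    """ROI = incremental profit as a percentage of the investment."""
    return incremental_profit / investment * 100

# Hypothetical example: a £20,000 training programme to which
# £5,000 of extra profit can (somehow) be credibly attributed.
print(roi_percent(5_000, 20_000))  # 25.0, i.e. an ROI of 25%
```

The arithmetic is trivial; as the paragraph above says, the hard part is the numerator – isolating and quantifying the incremental profit in the first place.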

The author does not even try to address this issue but moves immediately into five non-financial metrics that are interesting and useful to measure but which do not necessarily relate directly to ROI. These are:

1. Motivation of people
2. Productivity and development of people
3. Net promoter scores
4. Labour turnover
5. ‘Mystery’ visit results

The choice of metrics perhaps indicates that the author’s bias and experience are towards employees who are directly customer facing, rather than the mix of roles that might typically face us in a range of training activities. It is hard to see how 3 and 5 above could be usefully applied to a cross-functional group of managers whose business skills and knowledge need to be improved.

There is also a lack of practicality in some of the suggestions for applying the metrics. Employee motivation surveys – before and after – are suggested without mentioning the likely cost and the problems of isolating the impact of the course from the many other motivation factors in an organisation. Similar problems apply to number 2 above; the author’s idea of a ‘Values Wheel’ is not without merit, but again the suggestion that you can take a ‘before and after’ score out of 10 for each criterion is simplistic in the extreme.

The ‘net promoter’ score assumes that those who rate a product 9 or 10 out of 10 are likely to recommend it to others, and the author again suggests that ‘before and after’ measurement can be applied to measure ROI. Even if those being trained are directly customer facing, this approach assumes that no other factors drive customer satisfaction; and for others being trained, you need to define the customer – who will often be internal – as well as measure their feedback. My own thought is that measuring the proportion of 9s and 10s is more useful as a measure of feedback on the learning experience – on the basis that these are the people who will really apply their learning – but this is really only a variation on conventional ‘happy sheet’ measurement.
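For reference, the conventional Net Promoter Score is usually defined as the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6), passives (7–8) counting only towards the denominator. A minimal sketch, with hypothetical ratings invented for illustration:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Conventional NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return (promoters - detractors) / len(ratings) * 100

# Hypothetical post-course ratings out of 10:
# 4 promoters, 2 passives, 2 detractors out of 8 responses.
print(net_promoter_score([10, 9, 8, 7, 6, 9, 10, 3]))  # 25.0
```

Note that this measures a single survey; the article’s suggestion of comparing a before score with an after score still runs into the attribution problem discussed above.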

Measuring labour turnover is subject to the same problem of other influences and also makes the assumption that low turnover is always good; some turnover can be a useful driver of change and it all depends on the types of people who are leaving. The idea of mystery visits is also limited to those who are external customer facing; hardly something you can apply to those involved in providing internal services or more indirect product benefits.

So overall this article is not impressive. It presents as solutions a number of approaches that can only be used in specific contexts, and it ignores the key problem that faces anyone who wants to evaluate Learning & Development: how do you isolate the benefits of training from all the many other factors in a business, and how do you quantify the impact? Unless you accept these questions as the main challenges, you will get nowhere, which is precisely where this article takes us.

