When you start measuring learning impact, you quickly find you need different metrics for different types of courses. A module on Workplace Safety should be measured against workplace incidents, while a course on Sales Tactics is better measured against attributed revenue.
It turns out there's one metric you should be collecting for all learning content, irrespective of subject matter, course length and audience. It also happens to be the easiest metric to collect. If you're dipping your toes into measuring learning impact, this is the single highest-value thing you can start doing right away.
Collecting feedback ratings has become ubiquitous in the modern age. Apps ask for store ratings, businesses generate Net Promoter Scores from feedback, even airports want you to rate your experience. We're becoming desensitised to the onslaught of feedback requests, but here's the kicker: they still work.
Start collecting Reaction Surveys.
In the corporate world, most learning (especially compliance-based learning) is something employees are forced to undertake. This puts learners on the back foot – they're busy, they have jobs to do, their only motivation to learn is to avoid negative attention. As a learning designer, your goal is to overcome this mountain of negativity and provide content that learners find worthwhile.
Learner Reaction is your gauge for measuring that success.
Even for self-motivated learners, reaction data provides value. It may not directly convey retention, confidence or skill acquisition, but it's what we'd consider a proxy metric. Proxy metrics are extremely powerful. Often you can't access the data that directly relates to learning outcomes, but you'll find a metric that has a second-order effect on the outcome you wish you could measure.
How does learner reaction convey impact? If you provide training on conducting Performance Reviews and a learner rates it negatively, chances are they haven't understood the concept and don't feel capable of facilitating a Performance Review. If someone leaves a positive rating, it's a safe bet they've understood the learning and can now put it into practice. Neither of these scenarios indicates the individual can actually run a Performance Review (you'd need to assess that directly) but, lacking more specific data points, this is the next best thing. This is what we mean by a second-order effect.
How to measure learner reactions.
The easiest way to measure reactions is also the most obvious: you just ask the learners to rate the content. However, there are considerations in how you phrase this question. The questions below aren't equivalent and can prompt different responses:
How much did you enjoy this course?
How would you rate this course?
To what extent would you recommend this course to a colleague?
Enjoyment doesn't equate to understanding, and an excellent course may not be something a learner would recommend if they didn't think its subject matter was suitable for their colleagues. Consider each of the following points when writing your feedback question:
Decide on the overall value metric for learning inside your organisation. Is it engagement? Is it understanding? Is it the ability to perform your job better?
The reaction question should be consistent for all learning content. Asking different questions for different modules makes it difficult to compare scores across the board. You might use an alternate question for self-directed vs assigned learning, but the less variation, the better.
The rating should be all-encompassing. When a learner assigns a rating, you want it to represent the module as a whole, not just one aspect of it.
If in doubt, we recommend the following as a safe question that works across a wide range of use cases:
How valuable did you find this content?
You'll also want to consider which scale to use when collecting responses. Be aware of the trade-offs between a dichotomous scale (yes/no, thumbs up/down), a 5-point Likert scale (star ratings), a 10-point scale, or anything in between. Reaction surveys should be optional, and providing a scale that requires too much thinking or doesn't align with a learner's true feelings may reduce participation rates.
Some experts recommend omitting the neutral option to remove the "easy out" for respondents. However, more recent research indicates that forcing a choice doesn't improve results and can significantly decrease response validity. ClearXP's own study on learner reaction surveys demonstrated that fewer than 5% of respondents selected the middle option when rating courses. It seems that some people really do feel ambivalent about content, and you should let them express that.
If you want to get more sophisticated, you can ask follow-up questions that drill into specific aspects of the course:
Did you find the content an appropriate length?
How engaging did you find the material?
Was the information conveyed relevant and concise?
Be warned: whilst quantitative data from these questions is fantastic for analysis, more questions can result in fewer respondents completing the survey. Our recommended approach is to include a single, free-text field where users can optionally provide further information to justify their rating. Free-text questions allow learners to provide feedback on all of the above, along with anything else you might not have pre-empted with a fixed rating question.
Qualitative data from free-text inputs is much harder to analyse but we'll cover some techniques for speeding that up shortly.
Deriving value from reaction scores.
The conventional way to gauge impact is to take the average rating and compare it across all courses. This creates a relative ranking of how valuable your learners find each course. This ranking is useful, but if your content is reasonably consistent, you will end up with a tight cluster of ratings with very little variation. Take the following dataset as an example.
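Here's a minimal sketch in Python using hypothetical 5-point ratings for two modules. The numbers are illustrative only, chosen so the averages sit close together while the distributions differ.

```python
from collections import Counter

# Hypothetical 5-point ratings for two modules (illustrative only)
module_1 = [4] * 10          # ten consistent 4-star ratings
module_2 = [5] * 8 + [1, 1]  # mostly 5-star, with two 1-star ratings

for name, ratings in [("Module 1", module_1), ("Module 2", module_2)]:
    average = sum(ratings) / len(ratings)
    distribution = dict(sorted(Counter(ratings).items()))
    print(f"{name}: average {average:.2f}, distribution {distribution}")

# Module 1: average 4.00, distribution {4: 10}
# Module 2: average 4.20, distribution {1: 2, 5: 8}
```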
Looking at the average alone, these two modules appear to be performing at a similar level. As a learning designer, you would probably be satisfied with both these courses. However, the average only tells a small portion of the story. Breaking the results down by the number of respondents at each score, we see that Module 2 is actually performing worse, despite its higher average rating.
The two low scores may belong to disgruntled employees or could point to some other issue. By segmenting these results (a rough sketch follows the list below), we may be able to drill into a specific problem worth fixing. For example, we could look at:
Geographical area: do the two users belong to a different location from the others?
Department / position: is there an organisational problem making the content ineffective?
Device: is a technical issue causing the content to not work correctly on a specific device?
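If you can export responses alongside learner attributes, this kind of segmentation only takes a few lines. A rough sketch, assuming a hypothetical CSV export with location, department, device and rating columns (your file and column names will differ):

```python
import pandas as pd

# Hypothetical export of survey responses joined with learner attributes;
# the file name and column names are placeholders for whatever your tools produce.
responses = pd.read_csv("module_2_reactions.csv")

# Average rating and response count per segment; outliers stand out quickly
for segment in ["location", "department", "device"]:
    summary = responses.groupby(segment)["rating"].agg(["mean", "count"])
    print(summary.sort_values("mean"), "\n")
```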
Reading feedback comments left by these users could also help point to issues with the course.
The ISO standard on L&D metrics recommends calculating the percentage of favourable ratings by taking the highest two scores in the rating scale and aggregating them together. This is also effective in highlighting the potential issues mentioned above.
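On a 5-point scale that's a top-two-box calculation: the share of ratings that are a 4 or a 5. A minimal sketch, reusing the hypothetical ratings from the earlier example:

```python
# Share of ratings that land in the highest two scores of a 5-point scale
def favourable_pct(ratings, top_scores=(4, 5)):
    return 100 * sum(r in top_scores for r in ratings) / len(ratings)

# Hypothetical ratings from the earlier example
print(favourable_pct([4] * 10))          # 100.0 (every rating is favourable)
print(favourable_pct([5] * 8 + [1, 1]))  # 80.0 (the two low scores now show up)
```

On that illustrative data, Module 1 comes out at 100% favourable and Module 2 at 80%, surfacing the problem the averages hid.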
Finally, beware of negative skew. People are more inclined to leave a rating when they're dissatisfied than when they're content, so negative experiences tend to be over-represented. This doesn't invalidate reaction surveys; the results still provide meaning relative to the scores of other rated courses. However, always take context and potential biases into account when interpreting results.
Tools for delivering reaction surveys.
Here comes the fun part where we tell you how to actually administer these surveys! Most learning platforms should support sending reaction surveys for you, but if for some reason they're not suitable, you have a few options for managing this yourself internally:
Google Forms - free with any Google account
Survey Monkey - free for forms with fewer than 10 questions
Even plain old emails, if you can stomach the manual collation.
It's as simple as creating your reaction survey in one of these tools and then either inserting the link onto the final slide of your content, or emailing it to participants directly based on a completion report.
Don't forget to capture which course is being rated so you can attach the rating data to a specific learning module. Google Forms has a neat trick where you can generate a pre-filled link that populates a field on behalf of your learners, so you can create module-specific links to embed in each course.
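As a minimal sketch, assuming you've grabbed a pre-filled link from Google Forms: the form ID and entry field ID below are placeholders and yours will differ, but the pattern for stamping the module name onto each link is the same.

```python
from urllib.parse import quote_plus

# Placeholder base URL copied from a Google Forms pre-filled link;
# FORM_ID and entry.1234567890 are hypothetical and specific to your form.
BASE_URL = "https://docs.google.com/forms/d/e/FORM_ID/viewform?usp=pp_url&entry.1234567890="

modules = ["Workplace Safety", "Sales Tactics", "Performance Reviews"]

# One module-specific link per course, ready to embed on the final slide
for module in modules:
    print(module, "->", BASE_URL + quote_plus(module))
```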
We’ve even created a free Google Forms template that you can clone to get started right away.
When should you administer a reaction survey? Conventional wisdom says as soon after a learner completes a module as possible. You want the experience to be fresh in their mind to record an accurate reflection. Real-time delivery is feasible if you've automated the process; otherwise, try sending an email every Friday to anyone who completed the module that week. Any longer than that and your accuracy will decline.
Don't discount learners who never complete the course. Especially for self-paced learning, non-completions are a key indicator that content isn't performing. Reaction surveys help identify weaknesses and areas for improvement in courses that don't provide value. You'll want to wait longer before issuing a reaction survey to non-completers and, of course, rephrase the question as necessary.
Reaction surveys are just the beginning but they're an easy way to get started measuring impact. Oh and if you want to see a reaction survey in action, why don't you tell us what you thought of this article by responding to our survey?