By now you have a rich calendar of learning events. Training is happening. How do you know if it’s working?
Think about measuring the following:
- Quality: how good your learning program is.
- Volume: how much learning happens at your organization.
- Impact: what all of this adds up to.
Quality
How good is your learning program? One way to think about quality is to use Kirkpatrick’s model of training evaluation. Here’s my take on measuring training along Kirkpatrick’s four levels:
1. Reaction
What did participants think about the training? Ask them right after the class. We used Net Promoter Score (NPS). It’s simple and provides enough signal, both qualitative and quantitative. We set the bar at an NPS above 30. At Twitter University, we were running at around 60 NPS, an A/A+. Over a third of attendees would provide feedback at the end of each class.
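For reference, the score itself is simple arithmetic: the percentage of promoters (9–10 ratings) minus the percentage of detractors (0–6 ratings). A minimal sketch; the sample responses below are made up:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# End-of-class survey responses on the 0-10 "would you recommend?" scale
responses = [10, 9, 9, 8, 10, 7, 9, 6, 10, 9]
score = nps(responses)  # 60 for this sample
print(score, "meets the bar" if score > 30 else "below the bar")
```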
2. Learning
This is the measure of how well people understood and retained the content. You could run a quiz or a test at the end of the class to assess what people learned. We didn’t run these tests; micro classes (1–2 hours) weren’t conducive to testing.
3. Behavior
Have we changed attendees’ behavior? This is important. At Twitter, we ran some fancy experiments that looked at attendees’ work. For example, did an engineer who learned Scala from us commit any production code in Scala?
We had all the data (too much of it!) and so were able to analyze behavior. But the data was too noisy. Asking participants to self-rate their application of what they learned was simpler. We’d ask them 30–90 days after the training: “On a scale of 0–10, how applicable was what you’ve learned to your job?” and “Why/why not?” We targeted an average of over 7.
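The roll-up was equally simple: average the 0–10 responses per class and compare against the 7 bar. A sketch, with invented class names and responses:

```python
# 30-90 day follow-up: "On a scale of 0-10, how applicable was what
# you've learned to your job?" plus a free-text "Why/why not?"
followups = [
    {"class": "Intro to Scala", "score": 8, "why": "Shipped my first Scala service"},
    {"class": "Intro to Scala", "score": 9, "why": "Use it daily"},
    {"class": "Public Speaking", "score": 5, "why": "Haven't presented since"},
]

# Group scores by class, then check each average against the target of 7
by_class = {}
for f in followups:
    by_class.setdefault(f["class"], []).append(f["score"])

for name, scores in by_class.items():
    avg = sum(scores) / len(scores)
    flag = "ok" if avg > 7 else "review"
    print(f"{name}: {avg:.1f} ({flag})")
```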
4. Results
What is the impact of what participants learned on their job? Giving someone the skills to do their best work ever is the goal. But this is hard to measure.
Again, we looked at the data first. We looked at the correlation between learning and promotions. We compared manager and peer feedback over time for those who took or taught classes. We even tried to link the quality of code to the level of training. There were very few insights.
In the end, the simplest yardstick was also the best: just ask. We looked for anecdotes where the skills people learned contributed to their wins at work. We started uncovering vivid success stories. They were often inspiring.
Quality is important. But distinguish between feel-good class feedback and real business results.
Volume
How much training is going on? What is the right metric?
We used butts-in-seats as a proxy for value. Our assumption was that everyone is busy, so time is a form of currency. If engineers choose to sit in a class, it must be valuable to them.
While you may not have a baseline, it helps to measure whether volume is growing as your organization scales up. A declining volume means fewer people enjoy your classes or your offerings are getting stale. An upswing in volume could mean you’ve uncovered a new “bestseller”.
Watch the volume dial as you experiment with your course catalog. It’s a good indicator of whether you’re on the right track.
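One way to watch that dial is to roll attendance up by quarter and track quarter-over-quarter change. A sketch, with invented attendance records:

```python
from collections import Counter

# One row per butt in a seat: (quarter, class). Data is made up.
attendance = [
    ("2014-Q1", "Intro to Scala"), ("2014-Q1", "Code Review 101"),
    ("2014-Q2", "Intro to Scala"), ("2014-Q2", "Intro to Scala"),
    ("2014-Q2", "Managing Managers"),
]

per_quarter = Counter(q for q, _ in attendance)
quarters = sorted(per_quarter)
for prev, curr in zip(quarters, quarters[1:]):
    growth = (per_quarter[curr] - per_quarter[prev]) / per_quarter[prev]
    print(f"{curr}: {per_quarter[curr]} seats ({growth:+.0%} vs {prev})")
```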
Impact
Quality × Volume = Impact.
Roughly. We tried to come up with the precise math. That didn’t work.
A pulse survey of the organization provided more visibility. Quarterly, we’d ask engineers what worked and what didn’t. Over time, learning rose to the very top of that list. People felt they had the right skills at the right time to do their job well.