“What gets measured gets done.”
That’s how the old saying goes.
Maybe this is why ‘training’ remains at the heart of the L&D offering, with bums on seats and days of training still the primary measures reported back to our stakeholders, along with e-learning completions. When you add in the aggregated Happy Sheet scores, all looks rosy. Thousands of hours spent and largely happy ‘customers’. Can’t be a bad thing, can it?
As an interesting aside, while researching the origin of the phrase at the top of this piece, I discovered that it was re-interpreted somewhere along the line from the original utterance:
“If you can measure it, you can manage it.”
Which is ironic, seeing as L&D don’t measure ‘learning’ either. Instead, we measure ‘being present’ and ‘exposure to content’, and label our efforts to quantify this exposure as ‘learning metrics’.
But how do you know somebody has learned something outside of organisational life? For example, how do you know if somebody has learned to play the guitar? Answer: they can play the guitar. How do you know if somebody has learned to speak Chinese? Answer: they can speak Chinese. So why do we measure (and report on) learning at work largely in terms of attendance, completion, and satisfaction? In some instances, short-term memory recall is assessed with multiple-choice questions or an observation exercise. But has anything been learned? Will you observe it in the wild, as you would a guitar player or a Chinese speaker?
The problem with how measurement and data are applied to L&D is that they’re in service of ‘learning delivery’ (which doesn’t make any sense) and not of the achievement of its intended results.
I’ve noticed that data has largely been used in L&D in three ways (there may be more you’ve seen or used):
1. Reporting On ‘Learning Activity’ And How The Experience Was Received
As highlighted above, the obvious use case for metrics in L&D is the inputs (who, what, and for how long?), assessment, and satisfaction. This doesn’t go any way towards demonstrating the effectiveness of L&D, only justifying its existence and showing how satisfied its customers were. This is low-level in L&D terms, and I was pretty sure not too many people were deluded into thinking otherwise, until I read this:
“While big data is not being used much in learning today, we have been using ‘little data’ quite effectively for some time. We routinely measure number of participants, courses, hours, costs, completion dates, participant reaction and amount learned.”
This doesn’t so much suggest a blind spot as a blindfold!
The author continues to state that:
“The lack of application [of big data] to L&D currently reflects our relative maturity.”
I’ll leave that one there…
2. Informing Training / e-Learning Design
This recognises that there is a training event, people experience it, and iterations are recommended based on their experience. Got it!
But that’s more about observational and anecdotal feedback than analytics, isn’t it?
Not according to the author of this interesting piece:
“Learning analytics helps educators to understand diverse learning styles and preferences.”
Eh? Learning styles?
Again, I think it’s important not to confuse what we’ve always done with the opportunities that analytics provides us today.
3. Impact On Performance
Now we’re getting somewhere. After all, L&D is an extension of any organisation, helping people and teams to achieve their results. This is explained here:
“Most often, the backbone of every year-end review comes back to usage rates, as most systems tout the ability to track usage rates. Yes, usage rates are very important, but what value do usage rates bring to your company? Stack the cards in your favor and change the way that the Learning and Development department is perceived in your organization.”
So we don’t just track and report on our inputs (who showed up), but we then use creative licence to attribute increased learning-system usage to business performance?
“Usage rates are up 7% from last year, we specifically noticed a 2% uptick in the Sales Department, we were able to identify that due to the 2% increase, sales increased X dollars in 2017.”
I know L&D value creative solutions but… Wow! Let’s hope your stakeholders aren’t smart.
Where People Analytics Is Going
In contrast to the inward-looking applications of L&D metrics, People Analytics is taking a different and, for me, more useful path, looking outward at factors that are critically important to the organisation.
Despite the claims that L&D is mature in its use of data, CHROs and business leaders are less convinced that they’re getting the results they expect:
“No longer is analytics about finding interesting information and flagging it for managers: It is now becoming a business function focused on using data to understand every part of a business operation, and embedding analytics into real-time apps and the way we work.”
To be clear, L&D, in its ‘maturity’, is finding interesting information, flagging it for managers and using reaction-level feedback to iterate. This is neither where the field of People Analytics is nor where it is going. What People Analytics allows us, as HR and L&D, to do is recognise where our value is really required and then upgrade our “decision making based on anecdotal experience, hierarchy and risk avoidance.” This will help us to think bigger and have not just more impact but organisation-critical involvement in the success of our clients by understanding more about “human behavior, relationships and traits to make business decisions… based on data analysis, prediction, and experimental research.”
I’ve often wondered where L&D will be disrupted from, seeing as so many practitioners themselves perpetuate the status quo by valuing training activities over business performance and productivity gains. And it seems we will be disrupted from outside the profession, as business leaders seek and gain access to People Analytics. To be clear, it’s senior leadership that is driving this change.
At the same time, L&D are using data to justify outdated approaches and neglecting wider applications to individual, team and organisational performance. But, as you can see, the net is closing in as expectations change:
“Predictive analytics tools have arrived, making it possible to analyse data regarding recruitment, performance, employee mobility, and other factors. Executives now have access to a seemingly endless combination of metrics to help them understand, at a far deeper level, what drives results.”
Just to be clear, it’s worth making a distinction between the data of the past, which focused on HR topics such as retention, engagement, learning and recruitment, and the new, broader focus of People Analytics.
So Where Does This Leave L&D Today?
“As organizations increasingly look to data to help them in their transformation efforts, it’s important to remember that this doesn’t just mean having more data or better charts. It’s about mastering the organizational muscle of using data to make better decisions; to hypothesize, experiment, measure and adapt. It’s not easy. But through careful collection and analysis of the right data, a major transformation can be a little less daunting – and hopefully a little more successful.” Harvard Business Review, 2018
The obvious thing to do now might be to call round the vendors and place your faith in their technology. But before you do, let’s consider how that’s worked out for us in L&D over the last two decades, as we’ve continued to chase engagement in our LMS and e-learning whilst knowing very little about digital ourselves. No, let’s not outsource what will become a core part of our practice. It’s time to up-skill ourselves within L&D on the fundamentals of Data Analytics.
As I’ve mentioned before, acting on too little information and quickly translating performance and productivity issues into learning needs frequently results in the creation of ‘learning solutions’ that address the real problem like a cuddle fixes a broken boiler. It’s at the very earliest stages that we should challenge ourselves to get the appropriate data to show there is a real problem that needs our attention and quite precisely where the problem lies.
Begin with a hypothesis (e.g. ‘we need a new induction’) and then find the data to test it; otherwise it’s a hunch, and whatever you do can never be shown to be right (beyond the satisfaction ‘measured’ on happy sheets). As this Harvard Business Review article demonstrates, if you have a hypothesis, you can test it by finding the data to validate or challenge it. Let’s take induction as an example.
As the HBR article states:
“Start with something…”
Perhaps you think you need to work on induction?
“Whatever it is, form it up as a question and write it down…”
What’s currently not working in relation to new starters?
“Next, think through the data that can help answer your question, and develop a plan for creating them. Write down all the relevant definitions and your protocol for collecting the data.”
Relevant definitions may require you to distinguish between types of new starters. If you work in a Contact Centre, you may recognise your core Operators as a key target group, distinct from all others. In a Head Office environment, there may be less obvious commonalities between new starters. If you can, be very clear about who you mean.
Another definition to be clear about is when the induction period begins and ends. Does it run from Day One? Until they pass their probation?
Next, think about the data that may help you answer your question. What are your organisation’s key determinants of new starter success? For example:
- What percentage of successful candidates don’t show up on Day One? This seems like a silly question but do successful candidates fall through the cracks prior to their employment officially beginning?
- What percentage of your new starters successfully complete their probation?
- Do your new starters stay with you? What percentage of new hires don’t stay with the company for more than 6 months?
- How engaged are new starters, with both their immediate team environment and the company at large? Let’s get more specific: does engagement increase or decrease the longer they are with you?
- How quickly are they able to consistently attain the KPIs of their role?
These are just a few quantifiable suggestions for what might (or might not) be working in the new starter experience that affects all parties.
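To make the metrics above concrete, here is a minimal sketch of how they could be calculated from basic HR records. The field names (`showed_up`, `passed_probation`, `start`, `left`) and the sample records are entirely hypothetical; in practice this data would come from your HRIS or applicant-tracking system.

```python
from datetime import date

# Hypothetical new-starter records (illustrative only).
starters = [
    {"name": "A", "showed_up": True,  "passed_probation": True,  "start": date(2017, 1, 9),  "left": None},
    {"name": "B", "showed_up": True,  "passed_probation": False, "start": date(2017, 2, 6),  "left": date(2017, 5, 1)},
    {"name": "C", "showed_up": False, "passed_probation": False, "start": date(2017, 3, 6),  "left": None},
    {"name": "D", "showed_up": True,  "passed_probation": True,  "start": date(2017, 4, 3),  "left": date(2017, 8, 1)},
]

def pct(part, whole):
    """Percentage to one decimal place, guarded against an empty cohort."""
    return round(100 * part / whole, 1) if whole else 0.0

total = len(starters)
# Successful candidates who fell through the cracks before Day One.
no_shows = sum(1 for s in starters if not s["showed_up"])
# Those who actually joined.
joined = [s for s in starters if s["showed_up"]]
passed = sum(1 for s in joined if s["passed_probation"])
# Leavers within roughly six months of their start date.
left_within_6_months = sum(
    1 for s in joined
    if s["left"] is not None and (s["left"] - s["start"]).days <= 182
)

print(f"No-show rate: {pct(no_shows, total)}%")
print(f"Probation pass rate: {pct(passed, len(joined))}%")
print(f"Left within 6 months: {pct(left_within_6_months, len(joined))}%")
```

The point is less the code than the discipline: each question from the list maps to one explicit, repeatable calculation, so the same numbers can be re-run later to check progress.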
Then collect the data.
Plenty of the data you need will be close at hand, and once you know who has access to it, it’ll be easier to get again to check your progress. But spend the time to put measures in place on your key metrics so you can trust your data.
If you know what, and where, the real problems to be solved are, then you begin with data and a starting point. Everything you then do should be focused on moving the needle in the direction of improvement by running small experiments and seeing if you’ve made a difference: the required difference that affects results.
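One simple way to ‘run small experiments and see if you’ve made a difference’ is a before/after comparison of a single metric. The sketch below uses made-up probation outcomes for two hypothetical cohorts, one through the old induction and one through a revised version:

```python
# Hypothetical probation outcomes (1 = passed, 0 = failed)
# for cohorts before and after a change to the induction programme.
before = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # old induction
after  = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # revised induction

def pass_rate(cohort):
    """Fraction of the cohort that passed probation."""
    return sum(cohort) / len(cohort)

lift = pass_rate(after) - pass_rate(before)
print(f"Before: {pass_rate(before):.0%}, after: {pass_rate(after):.0%}, lift: {lift:+.0%}")
```

With cohorts this small, an apparent lift could easily be noise, so before claiming the needle has moved you would want a proper significance test (for example, a two-proportion test) and, ideally, a comparison group that didn’t receive the change.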
What I like about the approach advocated in this HBR article is that it’s accessible and closely relates to the type of work we do in L&D. We don’t need to be Data Scientists, but it will certainly help us to start thinking like one.
If we’re not being asked already, we will be asked for the data, by our leaders and our stakeholders. So let’s develop ourselves and disrupt our own practice before we’re disrupted from outside. This is a huge opportunity for us – and it’s up to us to grab it.
Now an authority in contemporary L&D practices, David works with businesses to develop and implement social, agile and digital learning strategies that make learning work, with Looop.