
Using Learner Segmentation in Online Hospitality Training Programs

This post was originally written by Peter Matamala, who works with our friends at Matchstick Inc. We're cross-posting it with his permission. We hope you find this content useful. Let us know what you think in the comments below!

The core objective of any quality training program is to create relevant, meaningful content that drives performance results and positive change for both the learner and the organization. To achieve this, designers must take into account company culture and the employee profile. This is a critical consideration in today’s hospitality workplace, which is a vibrant collage of cultures, languages, experiences, beliefs and age groups. Unfortunately, the diversity we so value also creates complexity and can prevent training programs from achieving their desired outcomes. Our job as instructional designers is to accommodate all learner needs and acknowledge this diversity in the hospitality training programs we create. In this article we will explore how instructional designers can leverage organizational data to design more relevant and targeted training.

Obtain Your Organization’s Data

The first step in tackling the challenge of diverse learning needs is to understand learner requirements. During the assessment phase of a project, it is essential that instructional designers peel back the layers of the training audience to better understand its needs.

Today’s organizations are data rich, collecting employee and technology data from many sources and making it available to various departments. The instructional designer who successfully evaluates and assembles this data can create a vivid picture of the organization’s learner landscape. Consider the data available in a typical IT department: end-user computing and security teams will almost certainly have data on employee devices, browsers and wireless access at learner locations. This information allows the designer to leverage existing technology, determine optimal learning modalities and implement effective design techniques. Human Resources will likely be able to provide learner location, language and job level, among many other employee attributes, providing key information that can influence translation costs, accessibility of training content and desired training topics.

The designer needs only to assemble this data to paint a picture of the learner population.  Let’s now take a look at how that would be accomplished.

 

Create a Matrix

After obtaining learner-related data, begin assembling it into a matrix. Choose a primary attribute and compare it against the other attributes and capabilities; for example, take job role and compare it against computing device, location and language. Use spreadsheets and pivot-table analysis to filter and summarize the data.

The output should look something like the table below.  Notice we have placed the role on the y-axis and learner attribute on the x-axis.  

[Table: counts of learners by job role (rows) against learner attributes such as device, location, language and education level (columns)]

Where the x- and y-axes intersect, we show the count of learners meeting both criteria.
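For designers who prefer to script this step rather than build it by hand in a spreadsheet, here is a minimal sketch of the same pivot analysis in Python with pandas. The file name and column names (job_role, device, language) are hypothetical placeholders, not fields from the original data.

```python
# A minimal, hypothetical sketch of the matrix-building step in pandas.
# The file name and column names (job_role, device, language) are illustrative;
# substitute whatever your HR and IT exports actually contain.
import pandas as pd

# One row per learner, assembled from the HR and IT data described above.
learners = pd.read_csv("learner_data.csv")

# Primary attribute (job role) on the rows, one comparison attribute on the columns;
# each cell holds the count of learners meeting both criteria.
role_by_device = pd.crosstab(learners["job_role"], learners["device"])
role_by_language = pd.crosstab(learners["job_role"], learners["language"])

print(role_by_device)
print(role_by_language)
```

Each table is one slice of the matrix described above: roles down the side, a learner attribute across the top, and counts where they intersect.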

Analyze the Matrix

Once you have created your matrix, take a look at the data and volume counts and begin to develop some core and outlier requirements for this online program.  In our matrix example a few things stand out:

1. Heavy use of mobile devices vs. PCs.

2. The first language of this training population is predominantly the local language.

3. Compared with the total population, roughly half of the learners hold an advanced degree.

In response to finding #1, the designer would need to design for mobile. For finding #2, on-screen text and voiceovers would need to be in the local language, which could affect translation scope and cost. For finding #3, the matrix shows that large portions of both managers and non-managers hold advanced degrees.

A savvy instructional designer can take the analysis further to determine mobile device types, the range of languages spoken and the degree levels of the manager and non-manager groups. This more detailed review helps refine the training further and, based on the findings above, allows the designer to confidently design a mobile program in the local language. Perhaps the training should be offered in manager and non-manager versions. In instances where large percentages of the population speak other languages, a dedicated language matrix can help prioritize which languages are most widely spoken and therefore in scope for translation; a quick sketch of that prioritization appears below.
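To make the language matrix idea concrete, here is a small, hedged sketch that ranks languages by share of the learner population and flags the smallest set needed to cover roughly 90% of learners. The column name and the 90% threshold are assumptions for illustration only.

```python
# A hypothetical sketch of a language prioritization matrix: rank languages by the
# share of learners who speak them, then keep adding languages to the translation
# scope until cumulative coverage first reaches a chosen threshold (90% here).
# The "language" column and the 90% threshold are illustrative assumptions.
import pandas as pd

learners = pd.read_csv("learner_data.csv")

language_share = learners["language"].value_counts(normalize=True)  # sorted descending
cumulative_coverage = language_share.cumsum()

# Include each language whose prior cumulative coverage is still below the threshold.
in_scope = language_share[cumulative_coverage.shift(fill_value=0) < 0.90].index.tolist()

print(language_share)
print("Languages in scope for translation:", in_scope)
```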

Delivering the Online Training

For the eLearning program outlined in our example above, the instructional designer can use the matrix to create segments of learners. Segmentation is a great tool for determining how best to deliver and deploy the training. The segment analysis of this data shows a mix of mobile device and PC access across the learner population, a variety of job roles and, from the HR data, the number of learners in each region or location.

With this information in hand, the instructional designer can work with the LMS team, training managers and regional staff to deploy the online training in a targeted, deliberate fashion. For example, designers might deploy to segments such as:

  • Managers only
  • Non-Managers only
  • Back office staff
  • Mobile device users
  • PC users
  • Location
  • Language

Designers could use this data to deploy to managers first, so they take the training before their staff and encourage top-down support. Perhaps the operations teams want to avoid training deployments to back-office staff during quarter-end or year-end activities. Knowing which learners have only PCs or only mobile devices can help the deployment team determine whether a learner should receive the Flash or HTML5 version of the training.
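As a rough illustration of how these segments might be assembled from the same learner data before handing them to the LMS team, consider the sketch below. The column names and segment labels are assumptions carried over from the earlier examples, not values from the original post.

```python
# A hypothetical sketch of pulling deployment segments from the learner data.
# Column names and labels (job_role, device, location, "Manager", "Mobile", "PC")
# are the illustrative ones used earlier.
import pandas as pd

learners = pd.read_csv("learner_data.csv")

segments = {
    "managers": learners[learners["job_role"] == "Manager"],
    "non_managers": learners[learners["job_role"] != "Manager"],
    "mobile_users": learners[learners["device"] == "Mobile"],
    "pc_users": learners[learners["device"] == "PC"],
}

# Example rollout order: managers first, so they finish before their staff begin.
for name, segment in segments.items():
    print(f"{name}: {len(segment)} learners across {segment['location'].nunique()} locations")
```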

Segmentation of the learner population can be a powerful tool in the reporting and analytics for any hospitality training program. We will follow up in a later post and explore that in greater detail.

Conclusion

This method of identifying smaller, more homogeneous groups within a larger and potentially highly diverse learner population is highly valuable to designers hoping to tailor the learning to the target audience or to control the rate and reach of a hospitality training deployment. While it does represent additional effort, the time spent at the beginning of a training project helps designers deliver training with an exceptional understanding of their learner environment, thereby positively impacting the take rate and the application of the knowledge learned.

To learn more about Matchstick and their e-learning solutions, check out their site.

 

Are you maximizing the use of your data?

Perhaps one of these situations is familiar to you.

Situation 1

Client: “We just completed this training program and I’m glad you’ve collected all this data using different multiple choice and open-ended questions but it’s overwhelming. I just want overall numbers that I can show my bosses.”

Situation 2

Client: “We need to do a training evaluation but I want something as simple as possible. Just use a smile sheet. We just need to know if the trainees liked the training enough to continue this program.”

Situation 3

Client: “I’m really concerned about the training program and I’d love to look at the qualitative and quantitative data we collected, but I don’t know how to make sense of it. I’m not sure what to do with it or if it’s even usable.”

These are some situations I have encountered when working with clients (e.g., HR directors, business owners, line managers). I hope they capture some of the post-intervention challenges (in these examples, following a training program) that many consultants face when dealing with evaluation.

One of the most frequent challenges of post-training data analysis is that you might collect an enormous amount of data and then not be sure what to do with it. The data starts to look like vacuum cleaner attachments: you look at it and all you see are extra attachments for a device you are not sure how to use.

Can we use all of these attachments? Of course!

Do we always use them? No!

We’re not certain what the attachments are for. All we know is that we have them. 

In many situations, I find that HR managers, generalists, training managers, directors and others have the same reaction when faced with the survey data they collected from trainees: they feel overwhelmed. This is especially true if the clients did not participate in developing the evaluation process. Clients are a necessary component of any evaluation because they know what they want out of their training program.

One of the most common sources of overlooked data is qualitative comments from trainees. In many cases this data is not overlooked because clients feel it lacks value; in general, my clients have found the qualitative comments invaluable. Rather, there is typically hesitation about how to analyze the comments trainees make and how to use them to develop next steps.
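One low-effort starting point, offered here purely as an illustrative sketch (not the method used in the study cited next), is to bucket comments under a few simple keyword themes and count how often each theme appears. The file name, column name and theme keywords below are all hypothetical.

```python
# A rough, hypothetical first pass over open-ended comments: tag each comment with
# simple keyword-based themes and count theme frequency. Real qualitative analysis
# would go well beyond keyword matching, but this can reveal where to look first.
import pandas as pd

comments = pd.read_csv("training_evaluation.csv")  # assumed to contain a "comment" column

themes = {
    "trainer": ["trainer", "instructor", "facilitator"],
    "pace": ["too fast", "too slow", "rushed"],
    "relevance": ["relevant", "useful", "my job"],
}

theme_counts = {}
for theme, keywords in themes.items():
    pattern = "|".join(keywords)
    theme_counts[theme] = int(
        comments["comment"].str.contains(pattern, case=False, na=False).sum()
    )

print(theme_counts)
```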

A recent study by Harman, Ellington, Surface, and Thompson (2015) illustrates the importance of comments in evaluating training program effectiveness. The researchers conducted three field studies across a series of concurrent training classes, assessing the commenting behavior of each class. I strongly recommend that any learning, HR, OD, or IO practitioner read the entire study, but I want to highlight some of my major takeaways.

1) Classroom experiences affected the likelihood of commenting. As individuals who have experienced training in a variety of contexts, we intuitively know this. However, it’s great to know that the comments you receive in your training evaluations will reflect real differences in classroom experience. Pay attention to the comments because they will tell you what happened in these training programs.

2) As class-level learning decreased, commenting increased. In other words, there was a negative correlation between learning and commenting. This is a powerful finding as it relates to the first takeaway: if trainees do not feel they are learning in the classroom, they will say something about it. That kind of data is important to pay attention to.

3) Trainee reactions are multidimensional. We often want to reduce the data to a single number, e.g., “What percent of the trainees liked the program? 60%.” However, there is a lot more going on in any training program than how trainees felt about it overall. Trainee reactions can point us to important changes to make in subsequent training classes. If we look deeper at the data, we can learn about components of the training program, the trainer, the class environment, and the relevance of the material.

4) If there is no expectation of change, you will receive no comments. In other words, if your employees feel that your organization won’t change anything based on what they have to say, they won’t say anything at all. If you are conducting a training evaluation (or any evaluation), your organization needs to be committed to making the necessary changes, and that commitment should be communicated to your employees. You depend on data from your employees, and by communicating your commitment to change you will elicit comments from those who have experienced your training program.

 

The message is clear from these research findings. Your employees want great training programs and want your programs to improve. It’s up to us to leverage the comments made in addition to the quantitative ratings to get the most benefit from the metrics we employ.

If you do feel like you’re missing out on some data or you have training evaluation data that you feel like you don’t know what to do with, what are your options? Here are some suggestions:

1) Reach out to colleagues and discuss your options. Some of my best ideas have come from discussions at ATD meetings or at Metro Applied Psychology meetings. As IO practitioners, we live for this discussion.

2) Talk to your vendor. If you are using a vendor, ask them about options they offer for data analysis. Much like the vacuum cleaner, they probably have options you have not fully investigated.

3) Reach out to a consultant. If you are truly lost in the evaluation process, reach out to a consultant who specializes in training evaluation and can guide you through using this data.

 

Can you think of other situations where you may be missing out on this hidden data? Is there data that you collect that you feel you do not get the most out of? Feel free to list these in the comments below. I would love to hear your thoughts. Make sure to read the full article. The reference is below!

 

References

Harman, R. P., Ellington, J. K., Surface, E. A., & Thompson, L. F. (2015). Exploring qualitative training reactions: Individual and contextual influences on trainee commenting. Journal of Applied Psychology, 100(3), 894.

 This post has been cross-posted from psychologyofwork.wordpress.com