If there is one thing I took from Learning Solutions and Ecosystem 2014, it’s that L&D pros still seem very focused on outgoing signals. Don’t get me wrong: there were some very good sessions on making the content we deploy more effective. I thoroughly enjoyed Will Thalheimer’s session on subscription-based learning, which addresses the impact of forgetting by applying the spacing effect. Ray Jiminez also had a session on chunking information into smaller bits for greater impact. We are making vast improvements in packaging our outgoing messages to be more effective. However, I still feel we’re missing something big. THE thing, actually.

Today, we have more tools than ever to understand our users. I am not talking about our traditional view of metrics (read: test scores, where it is a bit late to really help the user, isn’t it?). We have analytics tools at our disposal to listen, really listen, to what our users are DOING. Performance over content was a common theme at the conference, but I mostly saw sessions on making content more performance-focused, and very little on understanding how users are actually performing so we can better support them. [blockquote type=”blockquote_quotes” align=”left”]It’s time our profession stopped holding the megaphone to our mouths and start holding it to our ear first.[/blockquote]With the tools available, why not listen first to understand needs, then design content to respond to those needs? I am not talking about a formal needs assessment, but in-process metrics that help us understand our users at a deeper level and respond in a timely manner to best support them. Think of games, which constantly monitor a player’s actions and adapt accordingly to provide feedback and guide actions. That. I think we now have a great opportunity to transform L&D from being barely valued and irrelevant in many organizations into a strong, indispensable support partner.
But that will require us to upgrade our skills from one-way content deployment to an actual dialogue with our users.
Next week is the eLearning Guild’s first ECOSYSTEM conference, and I am very happy to be speaking. A good story and a nice presentation are fine, but the sessions that always gave me the most value were those providing concrete take-aways I could bring back and apply to my work. With that in mind, I’ve committed to ensuring my session attendees (and anyone else with interest) are offered a tool to use.

My tool is an Excel spreadsheet to help evaluate systems. It offers a structured format to define and prioritize requirements. You then send those requirements to vendors and ask them to demonstrate how their systems address your needs. Not claims: make them show you (I suggest short Jing or Screenr videos). On each vendor tab, you input your rating of the vendor’s ability to address each requirement. You also evaluate usability from multiple perspectives (end users, administrators, managers, etc.). When you have completed your evaluation, the spreadsheet highlights the top five performing systems for each category and overall. Another tab with charts maps functionality and usability ratings into a matrix so you can focus on the vendors in the “magic quadrant”.

Here is an example of a completed evaluation: sampledata. The data in the spreadsheet is from actual evaluations. Vendor names have been changed, and all samples were removed along with comments (with the exception of one vendor whose ratings are not included in the spreadsheet). Other than protecting identities, though, all data is valid. You can apply filters to the charts to get different views, and change a vendor’s ratings to see how that would change their results. You can even change the priority weighting for criteria, which can significantly influence the overall outcomes. This lets you see how the priorities a client sets drive which systems are a fit.
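To make the spreadsheet’s ranking logic concrete, here is a plain-Python sketch of how I understand the scoring to work: each criterion’s priority weights the vendor’s rating, and the weighted sum drives the overall ranking. The vendor names, criteria, and rating scale below are hypothetical placeholders, not data from the actual spreadsheet.

```python
# Hypothetical sketch of the spreadsheet's priority-weighted scoring.
# Priorities run 1 (low) to 3 (high); a zero priority drops the criterion.
priorities = {"SCORM 2004 import": 3, "Custom reports": 2, "Bulk user upload": 1}

# Each vendor tab becomes a dict of criterion -> rating (assumed 0-3 scale).
vendor_ratings = {
    "Vendor A": {"SCORM 2004 import": 3, "Custom reports": 1, "Bulk user upload": 2},
    "Vendor B": {"SCORM 2004 import": 2, "Custom reports": 3, "Bulk user upload": 3},
}

def weighted_score(ratings, priorities):
    """Sum of priority * rating; zero-priority criteria are excluded."""
    return sum(priorities[c] * r for c, r in ratings.items() if priorities.get(c, 0))

scores = {name: weighted_score(r, priorities) for name, r in vendor_ratings.items()}
# The spreadsheet highlights the top five overall; same idea here.
top = sorted(scores, key=scores.get, reverse=True)[:5]
```

Changing a priority (or zeroing it out) re-ranks every vendor at once, which is exactly why the weighting has such a strong influence on the outcome.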
And that’s the point: if you get your criteria and prioritization right, they should lead you to “best fit” systems (I find most systems evaluations far too light on defining criteria, let alone prioritizing them). Here is the blank Template of the EvaluationTool. Don’t like my criteria? Change them! The sheet is intended to capture YOUR needs. Although I offer over 70 unique criteria, there are obvious biases: some criteria simply have not been a consideration in the evaluations I have performed (not much on social, nothing on gaming or badges, little on talent management or content management). Whatever doesn’t work for you, change it on Tab 1 – Evaluation Sheet. All the other tabs read from Tab 1, so changing criteria on the Evaluation Sheet updates them across all other worksheets. However, if you remove a row, you will have to do the same on all worksheets; alternatively, just zero out the priority for rows with criteria your organization won’t value. The Excel document is completely open and unlocked, so adapt it in any way that fits your organization’s needs. If you make an adaptation you think I should include in the template, please don’t hesitate to let me know.

The process to use the spreadsheet to perform an evaluation:
- Define your criteria in Tab 1 – Evaluation Sheet (put them in terms of what you need the system to DO for your business), and think from multiple perspectives/functions.
- Remember: if you don’t like what is there, change it. Make this yours.
- Assign a priority of 1 (low) to 3 (high) for each criterion (or zero it out if you don’t want it included in the evaluation).
- Once your list is complete, copy the CRITERIA ONLY (don’t share the priority weighting; it influences vendors’ responses) and send it to vendors, requesting a quick demonstration (five minutes or less) of how their system can address each need.
- Tools like Jing or Screenr that limit recordings to 5 mins are ideal. Show me. Fast.
- Provide resources as necessary. If you use Articulate and publish to SCORM 2004, provide sample files. I highly recommend generic examples (e.g. “ACME, Inc.”) since this is labor-intensive for vendors. Using generic files, and specifying that your organization’s name is not to be used in the demonstration, allows the vendor to post the demonstration publicly for marketing or support purposes.
- If a vendor balks at providing quick recordings, consider how serious they are about winning your business. Many will ask to schedule hour-long demos with the full evaluation team. Most stakeholders only need to evaluate a small portion of the criteria, so this is not a wise use of time.
- When vendors reply, select an empty vendor tab in the evaluation spreadsheet, put the vendor’s name in cell A1 (this reads back into the main Evaluation Sheet – Tab 1 and Tab 2 – Charts), and copy their responses into the second column. ONLY paste into their response column; don’t overwrite criteria column A, which reads in from Tab 1 – Evaluation Sheet.
- Before going too far with ratings, do a QA pass to ensure you’ve put the correct responses in the correct rows, or you may be rating against the wrong criteria.
- Rate how well their system fulfills each requirement. Grade critically.
- For any criterion where the vendor’s reply is confusing or absent, follow up as necessary to give them the opportunity to properly represent their system’s capability.
- In Tab 1 – Evaluation Sheet, provide a Usability Rating in the vendor’s column for each area (End User, Manager, Reporting, Admin…).
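Once ratings are in, the charts tab plots functionality against usability so the “magic quadrant” vendors stand out. As a rough Python sketch of that idea, you could split on the median of each axis and shortlist vendors above both; the median split and all numbers here are my own illustrative assumptions, not the spreadsheet’s actual chart formulas.

```python
# Hypothetical quadrant placement: above-median functionality AND usability.
from statistics import median

vendors = {
    "Vendor A": {"functionality": 13, "usability": 2.5},
    "Vendor B": {"functionality": 15, "usability": 2.0},
    "Vendor C": {"functionality": 9,  "usability": 1.5},
}

f_mid = median(v["functionality"] for v in vendors.values())
u_mid = median(v["usability"] for v in vendors.values())

# Vendors at or above both medians land in the upper-right "focus" quadrant.
shortlist = [name for name, v in vendors.items()
             if v["functionality"] >= f_mid and v["usability"] >= u_mid]
```

The value of plotting both axes is that a feature-rich system with poor usability (or vice versa) falls out of the focus quadrant, which a single overall score would hide.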