August 7, 2017

The world doesn't need a hero. All it needs is an instructional designer.

"Can SME be an instructional designer?", "Do you need an education, or can you just learn Storyline?" or "Who needs those instructional designers anyway?" Somehow, I see a lot of these questions recently, so here are my unsolicited two cents...

To answer these, I would look at the definition of instructional design. With some variations, it will say something like: "Instructional design is a systematic and systemic application of scientific knowledge to improve learning processes and outcomes". If you've ever taken a course on instructional design, you will most likely remember either this definition, or spending a lot of time discussing what is "systematic" and what is "systemic".

So, there are two questions to ask when choosing an instructional designer (or deciding to become one). Does the candidate (or you) have scientific knowledge? Can they (or you) apply it in a way that

  • makes sense
  • takes into account many interconnected factors and 
  • leads to measurable results (i.e. more than the existence of a training program)? 

The obvious, but for some reason elusive, point is: you need both a solid theoretical grounding and a knack for its application. It sounds bizarre, but in my experience it is hard to be practical without a good knowledge of theory. Without it, it is hard to take a critical view of reality. For example, would those "10 ways to make your e-learning better" actually make it better? And why? Education alone won't get you anywhere, but practice alone will get you someplace random.

As for the usefulness of instructional designers... It is tempting to create solutions that replace either the knowledge or the skill of applying it. For example, I've often seen such interesting internal documents as "pedagogical style guides". Their purpose is to document a company's standards for training development. While this can be a nice idea from the point of view of process documentation, more often such standards do not make sense, particularly when they deal with something that cannot actually be standardised. For example: "Company X uses exclusively the constructivist approach" or "We use only andragogical principles".

Apart from sounding funny to an informed reader, these standards are impractical. Pedagogical theories are not "right" or "wrong"; they are concerned with different aspects of learning. One cannot "choose" one theory and abandon all the others. Similarly impractical are suggestions to "include a knowledge check every 10 slides" or to "include no more than 5 learning objectives". What is the point? What is the purpose?

In other words, professional instructional designers are not outdated, but extremely important. And no amount of guidelines or instructions to have a "20-question quiz for each module" will replace them.

July 2, 2017

Building Games in Storyline: River Crossing Puzzle

E-learning Heroes Challenge #173 is all about brain teasers: not only do users get to solve some puzzles, but you, the developer, have something to think about. For this challenge I decided to recreate one of the river crossing puzzles, where you need to move three things across the river: the wolf, the sheep, and the cabbage. There is space only for one cargo item on the boat. Naturally, when left alone, the wolf will eat the sheep and the sheep will eat the cabbage. You can see my version of it here.

In this tutorial I will focus on creating the interaction and will omit the decorations, such as the moving water and clouds, the animated boat, or the design of the feedback slides. However, if you'd like to hear more about these topics, let me know. The tutorial assumes that you know the Storyline interface and know how to create basic triggers, variables and states.

The Assets

If you want to use the same assets as I did, you can download the exact images here. Please note that these images are designed by Freepik. You can download the originals as follows:

The Project

Before we begin, let's take a look at our final goal. Not counting the title slide, this project uses four slides in total. Two are "end slides", which the user will see if either the sheep or the cabbage gets eaten. The other two are the river shores.

The slides used in this challenge

There's a saying in Russian that "laziness is the driver of progress" and I tend to wholeheartedly agree. As you may have noticed, I did not (and usually do not) build slides for each possible combination of animals on each shore. I know some Storyline users tend to do that, saying that it is easier than dealing with cumbersome variables. While this is true in some cases, looking back at the projects I built, in most situations I don't find this easier or more helpful than working with variables. Instead of keeping track of all this information myself, I prefer to give instructions to Storyline and let it do all the hard work. See? Laziness in action. I hope this tutorial will help you embrace your inner sloth and find ways to do less work in Storyline.

Create the Slides

Go ahead and create three slides:

  • Shore 1
  • Sheep Eaten
  • Cabbage Eaten

Note that we are not creating the "Shore 2" slide yet - this is intentional.

Leave the "Shore 1" empty for now and put some text on the "Eaten" slides, so that you know which one is which when you test-run the project. You can, of course, design them a bit more, if you wish.

Create Variables

We have three items (the wolf, the cabbage, and the sheep), two shores and two possible tragic outcomes. In other words, we need to know where our cargo is (on shore 1, on shore 2, or in transit on the boat) and whether it's still faring well.

Let's start by creating several True/False variables, as shown below:

List of variables you will need

Since you begin with all the items on Shore 1, set the variables for Shore 1 to "True" and the variables for Shore 2 to "False". Since the cabbage and the sheep are not dead yet, set the variables tracking their vital signs to "False".
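If it helps to see the bookkeeping outside of Storyline, here is a plain-Python mirror of these True/False variables and their starting values. The variable names are my own illustration; use whatever naming convention you prefer in Storyline:

```python
# Illustrative mirror of the Storyline True/False variables.
# Everything starts on Shore 1; nothing has been eaten yet.
state = {
    "WolfOnShore1": True,    "WolfOnShore2": False,
    "SheepOnShore1": True,   "SheepOnShore2": False,
    "CabbageOnShore1": True, "CabbageOnShore2": False,
    "SheepEaten": False,     # vital signs: still alive
    "CabbageEaten": False,
}
```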

Build Shore1: Objects and States

For the sake of brevity, I will refer to the wolf, the sheep and the cabbage as "animals" (sorry, cabbage!). Speaking of which, in addition to the obvious objects, such as a boat and the animals, you will also need to create the following:
  • a shape for the cargo hold (this is what makes the animal "appear" on the ship)
  • a "Submit" button (this submits the user's choice and triggers the navigation to the other slides)
Important objects on Shore 1

You can create the "Submit" button in any shape or format or even use the button in a player. In my example I'm using a hotspot positioned over the ship. Go ahead and create your "Submit" button in any shape you desire.

In my example, the user needs to click on the animals to move them to or from the boat. What happens in Storyline is that when the user clicks one of the animals, the state of the "Cargo" shape changes to match the selected animal. The state of the selected animal changes as well. Let's build this.

Format the "Cargo" shape to have no fill and no outline. Then create a new state called, let's say, "Cabbage" and add a picture of the cabbage to it. You can also add a brief "Fade" animation to this picture. Repeat this for the states "Sheep" and "Wolf". This is what you should have in the end:

States of the "Cargo" shape.

Create states for the animals. In my example I've created the "Hover", "Selected" and "Disabled" states. You can omit the "Hover" and "Disabled" states if you like; the important part is to have the "Selected" state. For ease of following this tutorial, I recommend including the "Disabled" state as well (otherwise you will need to use the "Hidden" state in all triggers that use the "Disabled" state).

"Selected" and "Disabled" states should be empty. To achieve this, simply delete the image of the animal from the state, while in the "Edit states" mode.

This is how the states of the animals should look.

Note: make sure that you're using the built-in states and not creating custom states that you name "Hover", "Selected", etc.

Shore 1: Loading the Animals (Triggers and Variables)

Time to make all of the objects interactive. What we're trying to achieve is the following:

  • When an animal on the shore is clicked, the state of the "Cargo" should change and the variable "XonShore1" (X is the name of the animal) should change to "False"
  • When an animal in cargo is clicked, the state of the animal on the shore and the state of Cargo should change, and the variable "XonShore1" should become "True".

First, select all three animals and add them to a Button Set. Then select an animal (e.g. the wolf) and create the following two triggers:

Repeat for each animal

Add similar triggers to the sheep and the cabbage. Preview your slide - it should look like this:

Animals are moving into cargo

Now we can load the animals, but we also need an option to unload them without picking up a new passenger. Essentially, the state of the animal on the shore should change from "Selected" to "Normal" when the user clicks the animal in the "Cargo". Which animal's state changes depends on which animal is currently standing on the ship. In addition, we need to set that animal's "XonShore1" variable back to "True".

Click on the "Cargo" and create the trigger to make the animal in cargo appear on the shore (I'm starting with a wolf in this example):

The state of the Wolf will change if the Wolf is in Cargo.

Next, create a trigger to adjust the variable "WolfonShore1" to "True" when the user clicks the Wolf in Cargo:

Don't forget the condition.

Repeat the same for the remaining animals. In the end, the list of triggers for object "Cargo" should look like this:

Two triggers for each animal.

Finally, add a trigger to change the state of the Cargo itself:

Very easy

Make sure that this trigger appears at the end of the triggers associated with the "Cargo" object. You can add conditions here (such as "If state of 'Cargo' is not 'Normal'"), but they are not strictly necessary.

When I work with variables, I often add a text box with the variable references to the slide, to have a clear visual indicator that they are changing correctly. For example:

These can be found in Insert > Reference menu in Storyline 2

Preview the slide and try to load and unload the animals. You should see something like this:

As you can see, the variables are not always changing correctly. If I click on the sheep while the wolf is in cargo, the variable "WolfOnShore1" remains "False", even though the wolf is back on the shore (the same is true for the other animals). Let's fix this.

We need a trigger that fires when the user clicks an animal on the shore while another animal is in cargo. Select the wolf and add the following trigger:

Is the sheep in cargo? Then it will be moved to the shore if we click another animal.

Add a similar trigger to the wolf, but this time for the cabbage. When you're done, take a look at the list of triggers associated with the wolf. If you've been following this tutorial exactly, you will notice that one of the triggers - "Change State of Cargo to "Wolf" when the user clicks" - is located somewhere at the top of the list. Move this trigger to the very bottom, so that all variables get adjusted before Cargo changes its state:

Repeat for other animals. In the end, the animals should have the following triggers associated with them:

Note the order of triggers.

Preview the slide once again - the variables should adjust correctly no matter where and how you click. At any time, there should be at most one "False" Shore 1 variable, as at this point only one animal can be on the boat (and thus off the shore).

Shore 1: Set Sail

With the loading and unloading taken care of, it's time to turn our attention to the "Submit" object we created earlier and add the navigation triggers.

This is a relatively easy part, as we will need to achieve the following:

  • Show user the "Sheep Eaten" slide, if the wolf and the sheep are on the same shore
  • Show user the "Cabbage Eaten" slide, if the sheep and cabbage are on the same shore
  • Show user the "Shore 2" slide, if both the sheep and the cabbage are safe.

Create the following triggers for the "Submit" object:

The point of this is to first verify whether either the Sheep or the Cabbage is "dead", and then navigate the user to the appropriate slide, depending on what happened.

Note that since at this point we don't have the slide for Shore 2 yet, I used the "jump to next slide" option as a placeholder. We will correct this in a moment. Also, don't forget to use "AND" in the last trigger - the user should proceed only if both objects are safe.
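In plain logic, the "Submit" triggers boil down to three checks. Here is a minimal Python sketch of that decision; the function and argument names are my own illustration, and each argument is "True" when that item stays behind on the current shore (i.e. it is not on the boat):

```python
def submit(wolf_on_shore, sheep_on_shore, cabbage_on_shore):
    """Decide where to navigate when the boat departs."""
    if wolf_on_shore and sheep_on_shore:
        return "Sheep Eaten"      # wolf and sheep left alone together
    if sheep_on_shore and cabbage_on_shore:
        return "Cabbage Eaten"    # sheep and cabbage left alone together
    return "Other Shore"          # both safe (the "AND" condition): sail on
```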

Building Shore 2: Setting the base

Duplicate the slide "Shore 1" and rename it to "Shore 2". Go back to "Shore 1" and update the "jump to next slide" trigger I mentioned earlier, so that it points to "Shore 2" instead. 

Now, go through the list of all triggers on slide "Shore 2" and change every variable that contains "Shore 1" in its name to the matching "Shore 2" variable. For example, if you have a trigger that adjusts the variable "SheepOnShore1", change the variable to "SheepOnShore2" without changing anything else. It's a bit of a slog, but better than writing everything from scratch. Once you're done, preview the slide and check that the variables for Shore 2 are changing appropriately (note that they will all start as "False" at this stage - this is intended).

Shore 2: Reflecting the Reality

At this stage, when you preview "Shore 2", you will see all the animals on the shore, even though they shouldn't be there. Let's change that by adding these three slide triggers to disable the animals when they are not on Shore 2:

In addition, we need to make sure that the user sees the correct animal on the ship. You may remember that when an animal is on the ship, it is, in terms of variable values, on neither Shore 1 nor Shore 2. In other words, if both of these variables are "False", the animal is in the cargo. Let's add slide-level triggers to reflect that:
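The "on neither shore means in cargo" rule can be expressed as a tiny helper function. A hedged Python sketch of the logic behind these slide triggers (the names are mine, not Storyline's):

```python
def location(on_shore1, on_shore2):
    """Map an animal's two shore variables to one of three places."""
    if on_shore1 and on_shore2:
        raise ValueError("an animal cannot be on both shores at once")
    if on_shore1:
        return "shore 1"
    if on_shore2:
        return "shore 2"
    return "cargo"  # neither shore: the animal is on the boat
```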

To check if everything works, preview the whole scene and click around.

Back to Shore 1

In your preview you may have noticed that if you sail back to Shore 1, it will not show you the correct status of pretty much anything. Let's fix that. 

First, set the slide properties for Shore 1 and Shore 2 to "reset to initial state". Then go back to slide "Shore 1" and add the same triggers that we created previously for "Shore 2", but this time with the variables that are relevant for "Shore 1". Essentially, you can simply copy the six triggers we created in the previous steps from "Shore 2" to "Shore 1" and make the necessary adjustments. In the end, you should have the following slide triggers on "Shore 1":

Final Feedback

Finally, let's go back to Shore 2 and add an option to let the user know that the task is complete. In my example, I used a layer that appears when the last animal has been moved to the shore.

Go ahead and create a layer with a matching message. In the layer properties, select the option to "Prevent the user from clicking on the base layer". Then go back to the base layer, select the "Cargo" shape, and add the following trigger to it:

Preview your project and enjoy the ride - you have built your own river crossing puzzle. If something is not working, you can download the Storyline 2 file used in this tutorial from here.

Questions? Comments? Anything missing or can be done better? Let me know in the comments or connect on LinkedIn.

October 25, 2016

Three Best Things: Tinkering with Animations in Storyline

I like being inspired by other artists. I've recently noticed this interaction concept on Dribbble that seemed to transfer very well into Storyline.

Originally, I intended this as an exercise in professional stealing - for practice, I stalk Dribbble artists and secretly try to recreate what they did, but in Storyline or other software. I do it as a way of understanding the creative process.

So, a couple of moments later, I came up with this:

In this case I used an oversized parallelogram shape set to move down a straight animation path at the start of the slide's timeline. Each slide uses a "Push" transition, and the text has a "Fade" animation.

My focus point was the animated yellow shape. In the original we can see the edge of the yellow shape, which moves down as the next photo slides into the screen. This is not easily achieved in Storyline, as it deals with individual slides (not impossible, but not easy). Apart from transitions, there are no out-of-the-box solutions for making slides interact with each other. In this case, however, I didn't want to go for anything overly complicated.

So, while my first version was not bad, it still lacked the grace of the original. At the same time, I knew that I wouldn't be able to recreate the same animation and still go to bed at a reasonable time. So I had to improvise and think about what's possible in Storyline. I played around with different transitions and added extra shapes on the screen to suggest that there are more screens coming up:

With a little shape on the right side.

I liked the movement, but the added shapes themselves felt weird to me, particularly on the first slide. The last slide was also a bit odd because it didn't have any. Perhaps it would have been less strange with more slides, but for just three it didn't really work.

So, I kept experimenting and tried the up/down "Push" transition instead of the sideways one. It didn't work too well at first because, contrary to the original design, my shapes had different colors. The easy solution would have been to give every shape the same color, as I had no reason to make them different other than decoration. Still, out of curiosity, I came up with another version of this interaction. I did the following:

  • Added a custom state to each shape, changing its color to the color of the shape from the previous screen.
  • Added a trigger to change the shape's state to normal based on the timeline (0.5 seconds).

Here's the result:

Vertical transitions and the color change.

While I was pleased with the result, the obvious weak point here is reverse navigation. Specifically, if you click the "Back" button, the shape colors will not match. Again, purely out of curiosity, I wanted to see if there was any way to work around that, apart from creating duplicate slides as I did previously.

With this in mind, I added a custom "Back" state to each shape and a trigger to change the state of each shape to "Back" when the user clicks the "Back" button. Here's what happened:

Going forward and then back with changing states.

It was an interesting outcome, but, in my opinion, too chaotic. As I mentioned before, there was no real reason to change the color of the animated shape, and in this case I could really see that it added more confusion to the whole concept. With this in mind, my final version of this interaction was this (click here for the published version):

Final version.

Simple yet still interesting and works quite well when going forwards and backwards (if I do say so myself). In any case, definitely an interesting technical exercise.

If you want to play around or use any of the versions of this interaction, you can download the complete Storyline file here. Or grab just the final version. Note that you will need Peace Sans font, which you can download for free.

October 16, 2016

8 Ways to Make or Break a Software Training

I've spent a lot of time facilitating, observing and receiving software training - for example, teaching someone how to use an LMS, a CMS, or some mysterious internal software. Today I want to share this:

8-Step Plan for the Worst Software Training

  1. Sit down at the "teacher's desk" in front of the audience projecting your screen on the tiniest projector possible.
  2. Bonus points: make sure the projector is slightly out of focus and flashes messages about the lamp needing to be changed, or other technical errors.
  3. Be very apologetic about the complex subject matter and repeatedly remind the audience that "This is a lot to take in at once". 
  4. Bonus points: humorously blame it on "them", who didn't give you enough time to prepare or forced you to stick to a poor lesson plan.
  5. Start explaining every single element of the user interface one by one, going from left to right.
  6. Bonus points: explain EVERY SINGLE element/feature. Even those nobody is using. Make sure to talk about an element for a while, then say: "But we're never going to use it, so don't be afraid if you can't remember it".
  7. More bonus points: show and explain even the most obvious elements of the interface. Everyone needs to hear that "Save" button saves the document or that "File name" shows the name of the file. Throw in a knowledge check, too!
  8. Even more bonus points: cheer the students up by saying: "Don't worry, nobody can understand it from the start, it takes time", or "Nobody expects you to memorize all this, I'm sure it will all become clear with time". Let them wonder why they are listening to you now, rather than getting that experience.

If this sounds familiar and you want to stop apologising for your training, do this:

8-Step Path Towards Better Software Training

1. Stop placing the blame and do your job. 

This sounds harsh, but this is where you start. Don't make excuses about the quality of training! If you are an instructional designer, it is your job to take the raw subject matter in its primordial form and transform it to make learning possible. That's what instructional design actually is. If you are the trainer, it is up to you to make the best of the material and work with the developers to make it better. In any case, any conflicts between you and "them" have no place in the training room.

2. Stop telling learners how complex the tool is and how nobody can understand it.

This does not motivate anyone. Stand in front of the mirror and tell yourself that you will definitely fail, but that it's OK, everyone does. Does that provide a sense of relief?

3. Make a list of tasks that learners should be able to do by the end of the training.

Stop thinking "Tool training" and start thinking "Task training". In other words, forget about "They need to know..." and focus on "They need to do...". It is highly likely that the real goal of the training is not "to know the tool", but to perform specific tasks. For example: find data, correct data, cancel orders, save documents, etc.  Find out what these tasks are and write them down. Did you notice that I didn't say "learning objectives"? This is to prevent objectives like "Will be able to understand the elements of the user interface".

4. Rank the task list.

Depending on your context, you may end up with a huge list of tasks the learners need to do. In this case, talk to SMEs and the actual users of the tool to find out which tasks they do more often, or which tasks are more critical. Keep in mind that SMEs may not be reliable in their feelings about task frequency or importance. If you have access to data - use it as well. For example, in a call center, you will most likely have information about contact drivers. Map the most popular contacts to the tasks done in the tool.

If applicable, compare what you've learned from SMEs with the vision of the management who asked you to develop the training. Highlight any discrepancies. Make sure these are discussed and cleared before you proceed with the training design.

5. Design the activities that match the tasks. 

If you identified the tasks that should be performed, you probably have a good idea on what activities you need. The main challenge here is often a sequence of activities, particularly for complex tools.

To design a sequence of learning activities, I consider the following about each task I am teaching: its frequency, its ease, and the tool elements and functions it uses. As a hypothetical example: opening a customer's account is easy and is done very often, while investigating fraudulent activity on a hacked account is complicated and rare. Thus, opening a customer's account should be the first task to learn in this case.
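If you like, you can think of this ranking as a simple sort. A toy Python sketch with entirely made-up tasks and numbers (the frequency and difficulty values are purely illustrative):

```python
# Hypothetical task list: (task, approx. frequency per week, difficulty 1-5).
tasks = [
    ("investigate fraud on a hacked account", 1, 5),
    ("open a customer's account", 200, 1),
    ("cancel an order", 40, 2),
]

# Teach easy, frequent tasks first: sort by difficulty ascending,
# breaking ties in favor of the more frequent task.
teaching_order = sorted(tasks, key=lambda t: (t[2], -t[1]))
```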

Keep the cognitive load in mind as well. Learners should be comfortable with the basics of the tool before dealing with tasks that require analytical thinking or creative application. For example, for those who have never created an e-learning, learning how to create simple shapes and insert pre-made buttons in Storyline should probably precede learning how to create custom buttons with their own triggers.

6. Engage your learners early on. 

Provide instructions and guidance for anything that's not obvious, but let learners figure out the information that is. With some tools, what the learners need to know is the procedure: the sequence of actions to take. What they can often figure out (depending on your tool) is how to perform these actions in the tool. In this case, give learners the list of actions to do, such as:

  1. Open customer's account, 
  2. Open customer's address 
  3. Change address 

and allow the learners to find the right buttons themselves - especially if the buttons are named "open account", "open address" and "change address". If you're making an e-learning, add an option to get a hint on demand. In the classroom, encourage advanced learners to support their peers and focus your attention on the others.

Another option to teach a tool is to provide a step-by-step written guide (think tutorials) and ask the learners to follow the steps in the actual tool on their own. In an e-learning, consider adding on-demand videos for learners to play if they need help. Instead of only watching someone doing something, learners will actually be doing it themselves. The benefit of this approach in a classroom is that it frees you up from lecturing and lets you manage the class by walking around and helping learners as needed.

7. Include activities that learners will complete on their own.

Activities should be relevant to the learning objectives and rely on what has been learned, but should be slightly different from what has been taught in the course. These should be completed without guidance. The purpose is a) to practice and b) to verify the learning. These activities do not have to happen immediately. In fact, distributed practice works even better.

8. Encourage your learners.

Instead of constantly implying that everyone is an idiot (and that's what "encouragement" like "nobody can understand it the first time" really says), highlight their successes, reassure them that they are doing great (if they are), and provide developmental feedback if they aren't.

Got any more ideas to make or break a software training? Disagree with something? Let me know in the comments or Twitter - @GamayunTraining

October 9, 2016

Creating Adaptive Menu in Storyline

I have always been fascinated by the idea of learning objects and adaptive learning systems. This interest is partially selfish - as a learner, I rebel against training materials that force me to click through every single thing on every single slide, without letting me skip anything I might not need or already know.

For this week's ELH Challenge, I decided to approach the topic of checklists from my favorite perspective: adaptability. In my concept demo, I assume an e-learning module with three chapters. I included three questions, which represent a pre-test on the training content. Based on the results of each assessment, the module chapters are shown as "mastered", "suggested for review" or highlighted as "focus points" for learning.

Animated menu

In this post I will explain how the concept was created in Storyline 2. However, the biggest challenge of this concept (assuming you would like to flesh it out) is not the development of the list, but the instructional design, mostly because you will need to create a good assessment that actually measures the learner's knowledge of the subject matter. Note that you will need as many sets of questions as you have chapters in your e-learning.

I will assume that you are not aiming to report the results of this pre-assessment to the LMS or otherwise track it, as that is not the intended purpose of this interaction.

Creating Assessment 

There are two ways to implement the assessment: build your own question slides and track the scores with custom variables, or use Storyline's built-in question slides and results variables.

The former will give you more control, the latter will save time.

If you wish to draw questions randomly, divide them into question banks (one bank per chapter) and add random draws from the question banks to your project. Interestingly, you can add any slide to a question bank - you don't have to use the built-in question slides or freeform interactions.

If you're not using your own variables, you will also need to create a separate scene and add three blank results slides - one per set of questions. Essentially, you're aiming for this:

It doesn't matter how your results slides look, since learners will never visit them - this is why they are in a separate scene. The only reason we need them is to have the quiz result variables. Specifically, we'll be interested in ResultsX.ScorePercent or ResultsX.ScorePoints, depending on whether you want to use percentages or points for the final scoring.

Creating the Menu

Fancy Checkboxes

You will notice that the self-ticking checkboxes are animated. To recreate this, first create three separate objects: a "Normal" checkbox, a "Filled" checkbox, and a check mark. Note that I used a regular shape rather than a button.

Create a custom state for the "Normal" checkbox - let's call it "Checked". Add the desired animations to the "Filled" checkbox and the check mark. Then select both of these objects, cut them (Ctrl+X) and paste them into the "Checked" state of the "Normal" checkbox. In other words, you'll have three objects inside one object:

Check box state using two additional objects.

Don't forget to create three custom states to signal the chapter status, for example:

Custom states of the "Chapter status" label

I used "Shape" animation for the filled checkbox, "Grow" for the check mark, and "Wipe" for the chapter status labels. I do recommend using animations for elements that need to turn from invisible into one of several possible states (such as chapter status labels in this example). Otherwise before changing to the custom state, they will flicker the "Normal" state for a second. Adding even a tiny bit of "Fade" solves this issue. Of course, you can try to create a "normal" state that is invisible, but consider animations, too.

Create the rest of the objects and adjust their positions on the timeline, as you wish. In my example, I have arranged everything as follows:

The elements of one chapter (title, background, status, checkbox) are grouped together and arranged on the timeline. The group is then copied to preserve the timings.

To save time, I recommend creating the elements for the first chapter "card" and adding the animations, timeline positions, etc. Then group the elements and copy/paste the group as many times as you have chapters. This way you keep all the properties and timeline positions of the elements within each group - no need to adjust each element over and over.

This is also how I created the "spinner" on the "Processing results" slide: I created and timed four circles, grouped them, and pasted the group several times on top of itself.


Finally, create the triggers to change the states of the checkboxes and chapter status labels based on the assessment results. For example:

The checkbox will be set to the "Completed" state if the learner scores 85% or more on the first assessment.

You're done!

Not Visiting Results Slides?

Yes, you don't actually have to visit the results slide to generate the quiz results. You can, if you like, create a "Loading screen" slide and add the "Submit quiz results" trigger to it (it will work for more than one assessment):

While I haven't tested this concept on an LMS, I haven't had any issues with the menu items changing states with the described setup when viewing the published output online. As I mentioned before, the ability to report these results was not intended for this interaction, but if you need to report these results, feel free to experiment with the results slides or any other workarounds and let me know what you find out.

Good Assessment: What is "Good"?

As instructional designers, we often have to either write assessments ourselves or use SME-written assessments, quizzes and other varieties of multiple (or not so multiple) choice questions. And often we're horrible at it. Before investing time into learning the principles behind assessment writing, I made every possible mistake, once again proving that learning without theory is not learning.

A Good Assessment is...

While proper evaluation of assessment quality, which would be necessary for a high-stakes assessment, requires specialised knowledge and skills, in my daily work I have found it extremely helpful to be aware of the underlying complexity of assessment development. This is why in this post I would like to provide a simplified overview of assessment quality criteria: even basic knowledge of these helped me design better assessments.

So, in simple terms, an assessment is good when it is valid and reliable.

Understanding Validity

Validity, put simply, means that your assessment measures what it is supposed to measure. There are multiple ways of measuring the validity of your test, but they ultimately depend on the purpose of your assessment, which should always be formulated clearly. For example, your assessment may claim that anyone who scores X out of Y points will be able to perform a job or a task they were taught. In this case, for your assessment to be valid, you need to confirm the correlation between the test score and the job performance. If you find that people who failed your assessment perform just as well as those who passed, your assessment is most likely invalid.
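
To make this concrete, here is a minimal sketch (in Python, with entirely made-up numbers) of how such a correlation check might look. The `pearson_r` helper and all the data are my own illustrative assumptions, not part of any assessment tool:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Entirely hypothetical data: assessment scores vs. later
# job-performance ratings for the same six learners.
test_scores = [55, 60, 70, 75, 80, 90]
performance = [2.1, 2.4, 3.0, 3.2, 3.6, 4.0]

# A correlation close to 1 would support the validity claim; a value
# near 0 would suggest the score says little about job performance.
print(pearson_r(test_scores, performance))
```

Real validation would of course require far more data and proper statistical treatment, but even a toy check like this makes the validity claim testable rather than assumed.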

More often, however, you will want to assess the achievement of learning objectives. In this case, the validity of your assessment can be evaluated by verifying that:

  • Questions are linked to the learning objectives. In fact, each test item should measure only one learning objective or only one "thing". While it may seem like a good idea to create complex questions that relate to two or more learning objectives, this creates an issue with identifying the reasons behind failing the question. How can you know which learning objective wasn't achieved? 
  • There is an acceptable proportion of questions for each objective. For example, a learning objective "Will be able to save a document" can most likely be assessed with one test question, while "Will be able to troubleshoot technical issues" most certainly needs more than one. 
  • The cognitive level of test items is adequate. For example, if your objective is "To analyse the impact of World War II on the global economy", a question like "In which year did World War II start?" is not valid for this learning objective. This is where Bloom's taxonomy and well-formulated learning objectives are vital.
  • The difficulty of test items and of the assessment as a whole is appropriate. While defining difficulty can be quite complex, at the very least you can ask a group of SMEs, as well as someone who closely resembles the assessment's target audience, to review your assessment draft and establish the level of difficulty.

Understanding Reliability

A valid test measures what it's supposed to. A reliable test does so consistently. For a quick example of the difference between validity and reliability, take a look at this ruler:

It is very reliable (it will consistently measure the same distance), but invalid (because it is actually 8 centimeters long instead of the declared 9).

Assessment reliability measurement is usually complex and requires specific knowledge. Therefore, if you need to create a high-stakes assessment, such as a final exam or a certification program that affects the learners' employment or study outcomes, I would highly recommend hiring a professionally trained assessment specialist to assist you in this process. However, for the development of low-stakes assessments it is helpful to keep the general principles in mind.

Generally speaking, assessment reliability can be verified by:

  • Test/retest - administering the same test to the same audience over a period of time. For example, if you have ever taken the same personality test several times and got a different result each time (and nothing in your life could have dramatically affected your personality), then the test may not be reliable.  
  • Equivalence - successive administration of the parallel forms of the same test to two groups. In very simple terms, this means that you will need to create two copies of the test, with questions that are different, but assess the same learning objectives / skills / knowledge areas.
  • Item reliability - a degree to which the test questions that address the same learning objective / skill / knowledge produce similar results. 
  • When evaluating test questions, it is also helpful to consider item discrimination: can the item distinguish good performers from poor ones? For example, are there test questions that high performers consistently find confusing? 
  • Inter-rater reliability - if you're using human graders (for essay questions or observations), you should verify to what extent the grades for each test item vary between individual graders.
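
To make the difficulty and discrimination ideas more tangible, here is a minimal sketch (in Python, with made-up response data) of the classic upper/lower-group discrimination index. The function names and the 27% group fraction are my own illustrative choices, not a standard API:

```python
def item_difficulty(responses):
    """Proportion of learners answering the item correctly
    (1 = correct, 0 = incorrect)."""
    return sum(responses) / len(responses)

def discrimination_index(item_responses, total_scores, fraction=0.27):
    """Upper/lower-group discrimination index:
    p(correct in top group) - p(correct in bottom group).
    `item_responses` and `total_scores` are parallel lists, one
    entry per learner."""
    n = max(1, round(len(total_scores) * fraction))
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    lower, upper = order[:n], order[-n:]
    p_upper = sum(item_responses[i] for i in upper) / n
    p_lower = sum(item_responses[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical data for one question: 1 = answered correctly.
item = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
totals = [9, 8, 8, 7, 6, 5, 4, 3, 2, 1]  # overall test scores

print(item_difficulty(item))                  # 0.5
print(discrimination_index(item, totals))     # positive = item separates
                                              # strong from weak learners
```

A discrimination index near zero (or negative) would suggest the question is not separating strong performers from weak ones and deserves a review.
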

Based on my practical experience with assessments following internal on-boarding programs, the reliability of an assessment can be compromised by unreliable training processes. This is particularly true if your training program is being run across multiple locations, by training teams that are not operating jointly or even belong to different legal entities within your corporation. Local issues, miscommunication between departments, lack of support for trainers - all of these factors should be considered before drawing conclusions about either the quality of the assessment or the learners' performance.

Check Your LMS

It may not always be obvious, but some learning management systems (LMS) can actually help you assess your assessment. Moodle is particularly helpful in this regard: for example, it can identify test questions that have poor discrimination or inadequate difficulty. At the very least, your LMS should provide you with the percentage of learners answering each question correctly or incorrectly (this is what is called "difficulty"). Pay attention to questions with extreme percentages (whether low or high) and particularly to 50/50 splits. These can indicate either an issue with the question, or an issue outside the test, e.g. outdated or inadequate training.
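
As a sketch of what "paying attention to extreme percentages" could look like once you export those statistics from your LMS, here is a small Python example. The per-question numbers and the thresholds are invented for illustration, not standards:

```python
# Hypothetical per-question stats exported from an LMS:
# question id -> proportion of learners answering correctly ("difficulty").
difficulty = {"Q1": 0.97, "Q2": 0.52, "Q3": 0.71, "Q4": 0.12, "Q5": 0.50}

def flag_questions(stats, low=0.2, high=0.9, guess_band=0.05):
    """Flag items worth reviewing: extreme difficulty (almost everyone
    right or wrong) or a near 50/50 split that may suggest guessing.
    The thresholds are illustrative assumptions, not standards."""
    flags = {}
    for q, p in stats.items():
        if p >= high:
            flags[q] = "too easy?"
        elif p <= low:
            flags[q] = "too hard or mis-keyed?"
        elif abs(p - 0.5) <= guess_band:
            flags[q] = "50/50 split - possible guessing"
    return flags

print(flag_questions(difficulty))
```

Each flag is only a prompt for a human review: the question itself, the distractors, or the training behind it might be the real culprit.
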

The Worst Practice to Avoid

Creating an assessment that has 20 questions and a passing grade of 80% with no reason behind it, other than this being the so-called "best practice". Plainly speaking, neither the number of test items nor the passing grade should be set arbitrarily, as this compromises the validity of the assessment. In fact, an assessment based solely on these two numbers (or either one of them) usually leads to disputes and confusion, as there is no clarity about what these numbers mean or what the purpose of the assessment is. There are specific principles that can help you make reasonable decisions about the number of test items as well as the passing grade - these should be applied to all assessments.

In general, a poorly written assessment is at the very least a waste of time (and at worst, highly unethical or even illegal) - both yours as an assessment writer, and the learner's. In this case, no assessment is better than a bad assessment!

Further Reading

In addition to this extensive summary of the views on validity and reliability and this (less academic) overview, I found the following printed resources helpful, practical and accessible to more general audiences:

  • Anderson, P., Morgan, G. (2008) ‘Developing Tests and Questionnaires for a National Assessment of Educational Achievement’, vol. 2, World Bank Publication.
  • Burton et al. (1991) ‘How to Prepare Better Multiple-Choice Test Items: Guidelines for University Faculty’, Brigham Young University.
  • Downing, S., Haladyna, T. (1997) ‘Test Item Development: Validity Evidence From Quality Assurance Procedures’, Applied Measurement in Education, nr. 10, vol. 1, pp.61-82.

Most of them may be found through Google Scholar.

August 14, 2016

ELH Challenge 140: Slide Transitions and Back Navigation

For this week's ELH Challenge (ADDIE model) I wanted to make something short. Therefore, highly interactive branching scenarios were out of scope, as was anything that would require custom writing (as you can see from the frequency of my blogging, I'm not exactly a fan of writing in my spare time). So I settled on this demo, which describes the ADDIE phases and touches upon their possible combinations and order.

Since my submission was relatively straightforward (no variables here!), in this post I would like to talk about one aspect that is very easily missed: back navigation, i.e. what happens when the learner wants to go back. In the context of this challenge, I will focus specifically on slide transitions and how back navigation affected my design choices.

While back navigation is straightforward most of the time, in certain cases, particularly if you're tailoring animations and slide transitions to make your course look good as the learner progresses towards the end, overlooking back navigation can be detrimental to the experience you designed. For instance, let's review the slide transitions in my submission:

Final version of navigation + transitions

As you will notice, when we move from the title slide ("ADDIE") to the "Analyze" slide, we zoom into it. The slides for the other phases (Design, Develop, etc.) use the "Cover: From Left" transition instead. My logic behind this choice was to create a feeling that we are "zooming in" (= looking closer) at the ADDIE abbreviation and then moving from letter to letter to explore their meaning. This is why the first slide uses "zooming", while the other slides use "Cover" (more on "why Cover and not something else?" later).

In the example above you will also notice that if we go to the "Design" slide and then navigate back, the "Analyze" slide will use a different transition. Since I was trying to create a feeling of moving between letters, I wanted to avoid this:

Immersion-breaking transitions for back navigation

This is not disastrous, of course, but it does contradict the idea I am trying to communicate and is quite jarring.

To ensure smooth back navigation, I added a duplicated "Analyze" slide and adjusted the navigation tied to the player buttons:

Better user experience but more maintenance

As you can see, the only way to get to slide 1.2 is from the title screen. All other navigation paths include slide 1.3 (which is an exact duplicate of slide 1.2, but with a different transition animation).

This might not be the best solution for courses whose content you need to update often (as you will need to make corrections in both copies of the slide), but for stable content this can be a good workaround that does not require too many triggers/conditions or animation paths.

As promised, here is the reasoning behind the "Cover" transition for the other slides. Usually, to create an illusion of movement or uninterrupted flow between slides, I would recommend the "Push" transition, particularly if you're not allowing back navigation. However, look at what happens if I use the "Push" transition in this case (remember, we want to feel that we're moving from left to right across the "ADDIE" abbreviation):

Confusing transitions for back navigation

As you can see, this presents two issues. Firstly, the off-slide parts of the letters are cut off, creating a rather ugly transition effect. Of course, this could be fixed with a different design, for example by placing the letters so that they do not bleed off the left or right edge of the screen. However, there is a second issue. When you click "Next", the slides move to the right, which is also the direction the "Next" button indicates. When you click "Previous", you subconsciously expect the previous slide to slide into the frame from the left, pushing the currently displayed slide to the right. However, this doesn't happen. Instead, when you click "Previous", you see the same animation as if you were navigating forward. I found this highly confusing, and if I'm confused by my own module, that's usually a sign that I need a different design solution! 

With this in mind, after trying different combinations of transitions, I settled on "Cover", as it seemed to be the least confusing of all. Of course, it may be possible to create an amazing transition with the help of animation paths and other effects, but that will be my exploration opportunity for the next challenge.

Do you know a better way, or have you seen examples with a great back-navigation experience? Drop me a line in the comments (or on Twitter - @GamayunTraining).