This chilly October week (at least at LSW HQ), we're diving into new research about assessments and question types. Our article specifically answers the question:

  • How can open-ended questions be used effectively in workplace education?

What does Jeff Goldblum think about this week’s featured research?

It will change the world as we know it!

Hmmm… that's a strong recommendation. How about we just let you decide on your own? 😆

Let’s Reflect 💭

When it comes to creating assessments, people often say they steer clear of open-ended questions because of the workload involved (i.e., manual “grading”). In a large course, grading a multitude of long-form answers is daunting. The problem is that reflection is an incredibly powerful mechanism! When we prompt learners to reflect on their learning, self-efficacy and self-regulated learning can improve (see: LSW Issue #60). Thus, a scalable way to evaluate learner reflections would be an important development for workplace education. In this study, Barthakur et al. (2022) evaluated an automated analysis of reflective writing in a workplace learning environment.

Researchers gathered data from 771 learners in a Leadership Skills MOOC developed for the employees of “a global US corporation.” The 4-week course consisted of “video and lecture modules,” a discussion forum, and reviews. The reviews included multiple-choice questions as well as 3 to 4 reflective questions per week. The research team developed a coding scheme that informed the classification system. To rate the depth of reflection answers, the following classification system was used (Barthakur et al., 2022):

  • No Reflection - Answers that lack “significant reflective thought” and mostly mirror the words provided.
  • Understanding - Answers that illustrate an understanding, but lack “specific details… or real-life experiences.”
  • Simple Reflection - Answers that show understanding alongside practical application, but lack an explanation of “future actions or outcomes.”
  • Critical Reflection - Answers that have the above, as well as addressing “what they are likely to do in the future,” what they need to work on, or a change in perspective.

The automated classifier was built in Python “using scikit-learn machine learning library” (Barthakur et al., 2022). When the depth of reflection answers was analyzed across the course, learners were most likely to provide answers that fell into the “Understanding” and “Simple Reflection” categories. This was in line with expectations: deep perspective changes are rare, and because the learners in the course were working professionals, they were unlikely to provide an answer with “no reflection” (Barthakur et al., 2022). Importantly, the automated classifier performed very well! The authors posit that the “automated system has great potential to evaluate professionals’ reflective writing at scale” (Barthakur et al., 2022).
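To make that concrete, here is a minimal sketch of what a reflection-depth classifier might look like in scikit-learn. The paper only tells us the classifier was built with that library; the TF-IDF features, logistic-regression model, and the tiny example answers below are our own illustrative assumptions, not the authors' actual setup.

```python
# A minimal sketch of a reflection-depth classifier (illustrative, not the
# authors' pipeline): TF-IDF features + logistic regression in scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-coded training examples, one per reflection-depth label.
answers = [
    "Leadership means influencing others.",                      # mirrors the prompt
    "I learned that feedback should be specific and timely.",    # shows understanding
    "I used active listening in my last team meeting.",          # applies it in practice
    "Next quarter I will delegate more and track the results.",  # names future actions
]
labels = ["no_reflection", "understanding", "simple_reflection", "critical_reflection"]

# TF-IDF turns each answer into a weighted bag-of-words vector;
# logistic regression then learns one decision boundary per label.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(answers, labels)

# Score a new learner reflection.
print(model.predict(["In the future I plan to ask my team for feedback weekly."]))
```

In practice you would train on hundreds of hand-coded reflections and validate against human raters, which is essentially how the study's coding scheme fed its classifier.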

Key takeaway: Reflection is a powerful tool in workplace learning, but it is often difficult to implement and track at scale. These results illustrate that a machine learning approach can power an automated classification system for open-ended questions, making them practical to use in assessments at scale.

Read More ($): Barthakur et al. (2022). "Understanding depth of reflective writing in workplace learning assessments using machine learning classification." IEEE Transactions on Learning Technologies.

The Content Conundrum

Research we featured in Issue #79 explored content types. One article specifically answered the question, “When should we use text, video, or both?” The key takeaway: text and video are both useful, but keep it to one or the other!

Looking for more resources about content types and content authoring? Check these out:

If you will be in Las Vegas next week, stop by the Intellum booth to nerd out about best practices in learning science, assessments, instructional design, and content authoring with us!

Our Evolve expert, Helen Bailey, will be there to showcase our powerful Evolve Authoring tool. Come say Hi 👋🏼 between sessions.

Pets of Learning Science Weekly

We’ve got a celebrity feature this week! Say hello to Jeff Goldblum! Okay, okay, okay, it’s not the real Jeff Goldblum, but a gorgeous tabby from Georgia!

Our reader Yuki W. says, “seven years ago, he ran under my car at an intersection so that I couldn't move, and kind strangers helped me grab him out of the wheel well while I blocked traffic.” He has been named the neighborhood's “resident cat chongus and cheese thief.”

In the words of his namesake, “Nice To Meet You, Mr. Dude.”

[Photo: white-faced cat sitting on a wooden table, wearing a hat with fuzzy green bunny ears]

Wondering why we’re including animal photos in a learning science newsletter? It may seem weird, we admit. But we’re banking on the baby schema effect and the “power of Kawaii.” So, send us your cute pet pics -- you’re helping us all learn better!

The LSW Crew

Learning Science Weekly is written and edited through collaboration with the Intellum content and learning science teams.

Have something to share? Want to see something in next week's issue? Send your suggestions to editor@learningscienceweekly.com