Hello, hello! This week, we’re looking at a study focused on emotion. Specifically, the researchers sought to answer:

  • Do confusion and frustration boost learning from working erroneous examples?

We also take a related flashback to an article from Issue #22, which asked:

  • Is it better to have learners who are frustrated or ones who are bored?

Confusion, Frustration... No, Confrustion!

When we consider learning, it’s important to think about how our learners are feeling. Emotions can greatly impact attention, memory, problem solving, and more (Tyng et al., 2017). A recent article published in Computers & Education explored whether confusion and frustration impacted learning from erroneous examples (Richey et al., 2019). Specifically, the study focused on math, where past work has looked at: worked examples (i.e., those “with solution steps provided”), examples with instructional explanations (i.e., conceptual instructions with each step), and erroneous examples (i.e., examples that have already been worked but with an incorrect solution). For the “erroneous examples,” I usually think back to all of the carets used in school when editing first drafts.

We know that the types of problems used in various learning scenarios are more or less effective based on both learners’ individual characteristics and the outcomes we want to achieve. So, this study aimed to evaluate confusion and frustration when working with erroneous examples (Richey et al., 2019). Confusion and frustration have been studied quite a bit in recent years; findings show that self-reports of these emotions are not linked to learning, but behavioral indicators are. Further, research has illustrated that “confusion and frustration might represent two points on the same continuum” (Liu et al., 2013). Thus, the birth of evaluating “confrustion” rather than two separate constructs. The researchers also chose erroneous examples for this specific topic, decimal number properties, because working erroneous examples may be more beneficial for topics with a large number of misconceptions; essentially, they “disprove” the misconception (Richey et al., 2019).

The research was conducted with middle school students over the course of six days. Learners were split into two groups: erroneous examples (ErrEx) or problem solving (PS). The ErrEx group was presented with problems from a hypothetical student, told the solutions were erroneous, and tasked with correcting them while explaining the “underlying principles.” The PS group, on the other hand, was tasked with finding the correct solution to the same scenario, presented in a word-problem format (Richey et al., 2019). See the image below for an ErrEx scenario.

[Image: an example ErrEx scenario, from Richey et al. (2019)]

Both groups followed the same timeline. On days 1 through 5, learners completed the pre-test, the intervention materials with the “computer-based intelligent tutoring system,” and the post-test. During the second week, the instructor taught unrelated math content. Finally, learners completed the delayed post-test one week later (Richey et al., 2019). Now, how the researchers evaluated confrustion was one of my favorite parts of the paper (and a big reason I chose to cover it)! To detect confrustion, the researchers built a “confrustion detector” trained via “text replay coding on log data” (Richey et al., 2019). While there’s a plethora of information on detecting affect through log data (see Acheampong et al. (2020) for an open-access review), building an affect detector for a newly defined emotion was neat to read about.
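If you’re curious what an affect detector over log data even looks like, here’s a minimal sketch in Python. To be clear, everything below is hypothetical: the features and labels are simulated stand-ins for the human-coded text replays the researchers actually used to train their detector.

```python
# Hypothetical sketch of a log-data affect detector, loosely in the spirit
# of Richey et al. (2019). Features and labels are simulated; the real
# detector was trained on human-coded "text replay" labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500

# Made-up per-clip log features: incorrect attempts, hint requests,
# and idle seconds within a short window of tutor activity.
X = np.column_stack([
    rng.poisson(2, n),       # incorrect attempts
    rng.poisson(1, n),       # hint requests
    rng.exponential(20, n),  # idle seconds
])

# Fake labels (1 = a coder tagged the clip as "confrustion"), generated
# with a noisy link to the features just so the example runs end to end.
logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 0.02 * X[:, 2] - 2.5
y = (1 / (1 + np.exp(-logits)) > rng.uniform(size=n)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("AUC:", cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```

The classifier itself is the easy part; the real effort goes into the labeled replay clips and the feature engineering behind them.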

While there was not a significant difference between groups on the immediate post-test, the ErrEx group performed significantly better on the delayed post-test! I think this is a great reminder to make sure we’re not just evaluating immediate impact, but also checking for retention over time ⏱️. Confrustion told, well, a confrustrating story… When looking at all learners together, confrustion was negatively correlated with test performance, which might tell us that confrustion is particularly *not good.* However, results showed that ErrEx learners experienced higher levels (and longer durations) of confrustion than PS learners (Richey et al., 2019). So, what’s going on here? In short, a moderation! Learners in the ErrEx group experienced less of a negative impact from an “increase in confrustion.” Another interesting finding was that learners experienced less confrustion as they progressed through the math program (Richey et al., 2019).
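“Moderation” can be a fuzzy term, so here’s a simulated sketch of the analysis pattern: regress test score on confrustion, condition, and their interaction. The interaction term is the moderation. The variable names and effect sizes below are mine, not the paper’s.

```python
# Simulated moderation analysis: does condition (ErrEx vs. PS) change the
# strength of the confrustion -> score relationship? All data are fake.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "confrustion": rng.uniform(0, 1, n),  # detector output per learner
    "errex": rng.integers(0, 2, n),       # 1 = ErrEx, 0 = PS
})
# Build in a weaker negative slope for ErrEx learners (the moderation).
df["score"] = (
    70 - 20 * df["confrustion"]
    + 12 * df["errex"] * df["confrustion"]
    + rng.normal(0, 5, n)
)

# "confrustion * errex" expands to both main effects plus the interaction;
# a positive interaction coefficient means ErrEx blunts the negative slope.
model = smf.ols("score ~ confrustion * errex", data=df).fit()
print(model.summary().tables[1])
```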

Thus, while the researchers were hoping to see whether confrustion pushes learners when working erroneous examples, it appears that people learn from erroneous examples despite confrustion (not because of it). The study was also a win for the growing field of affect detection, as an affect detector has now been successfully used with erroneous examples. Future work should continue to delve into *what exactly* makes erroneous examples so helpful for learning!

To note: This study was conducted with young people, not adults, so we can’t assume the findings would transfer. More research should be done in this area to understand generalizability. Although, I’m not sure I would know all of the decimal misconceptions either 😆.

Key Takeaway(s): Those learning from erroneous examples performed better on a delayed post-test. Further, although erroneous examples may lead to higher levels of “confrustion” (confusion & frustration), learner outcomes are less negatively impacted when learning from erroneous examples rather than conventional problems. Lastly, the results illustrated that data logging can be an effective tool for understanding learner “confrustion” in an online learning environment.

Read More ($): Richey, J. E., Andres-Bray, J. M. L., Mogessie, M., Scruggs, R., Andres, J. M. A. L., Star, J. R., Baker, R. S., & McLaren, B. M. (2019). More confusion and frustration, better learning: The impact of erroneous examples. Computers & Education, 139, 173-190.

"​​One general recommendation that can be drawn from this paper is that creating the type of data logging available in the system studied here can be a powerful tool for understanding learning better. By logging every student action in a fine-grained fashion, it was possible not only to study performance on specific skills over time, but also to conduct retrospective analyses on affect that were not envisioned at the initial time of data collection.” - Richey et al. (2019)

Yawn… (⚡🔙.✌️.Issue #22)

Is it better to have learners who are frustrated or ones who are bored? The quick answer: frustrated. Researchers investigated learners’ emotions and found “that boredom was very persistent across learning environments and was associated with poorer learning and problem behaviors, such as gaming the system,” whereas frustrated users did actually learn something, even if they weren’t enjoying it.

Key Takeaway: There’s something to be said for keeping learners psychologically engaged (and awake).

Read More ($): Baker, R. S. J. d., D’Mello, S. K., Rodrigo, M. M. T., & Graesser, A. C. (2010). Better to be frustrated than bored: The incidence, persistence, and impact of learners’ cognitive–affective states during interactions with three different computer-based learning environments. International Journal of Human-Computer Studies, 68(4), 223-241.

Pets of Learning Science Weekly

This week, we've been #blessed with not one, but TWO cuties! Reader Judith R. shared Zavier, the Greyhound, and Pinkey Brown, "a Shih Tzu-Terrier mix (maybe)." These adorable bubs both came from rescues; "Zavier is a retired racer, and Pinky is from an amazing rescue group in Crawfordville, Florida."

Send us your pet pics at editor@learningscienceweekly.com.

Wondering why we’re including animal photos in a learning science newsletter? It may seem weird, we admit. But we’re banking on the baby schema effect and the “power of Kawaii.” So, send us your cute pet pics -- you’re helping us all learn better!

The LSW Crew

Learning Science Weekly is written and edited by Katie Erhardt, Ph.D.

Have something to share? Want to see something in next week's issue? Send your suggestions: editor@learningscienceweekly.com