Why teachers should address misconceptions about critical thinking
Never assume a student is trying to learn the same thing as you are trying to teach. This applies universally, but I stumbled across a research paper from five years ago that made a special case for it when teaching critical thinking. I won't summarize the whole paper (Stupple et al., 2017), but it describes how a measure of students' attitudes towards critical thinking was constructed and applied.
The authors motivate their study as follows:
In one study, psychology students and tutors had very different understandings of what is meant by terms such as critical evaluation, development of argument, and use of evidence, with tutors’ descriptions emphasizing internal cognitive processes, whereas students’ descriptions emphasized the selection and manipulation of external material (Harrington et al., 2006a, Harrington et al., 2006b).
Harrington et al. argue that ‘critical evaluation’ is often understood by students to mean negative criticism with emphasis on weaknesses and limitations, while ‘argument’ is often understood as an adversarial conflict rather than an academic skill. From this evidence, it is apparent that a measure of beliefs, attitudes and behaviours about critical thinking, specifically targeted at psychology students, would have the potential to offer great benefit.
They go on to describe how they built such a measure: the Critical Thinking Toolkit (CriTT), which is structured around three factors:
- Confidence in Critical Thinking
- Valuing Critical Thinking
- Misconceptions
The intentions behind the first two are self-explanatory; the third measures common misconceptions among students (e.g. that critical thinking is aimed at pointing out weaknesses).
The authors then look at how these factors predict scores on the Cognitive Reflection Test (CRT, a set of questions that challenges participants to suppress heuristic responses, much like the bat-and-ball-like tasks in an earlier post) and the Argument Evaluation Test (AET, which measures whether arguments are rated on the merit of their strength or through motivated reasoning).
For the full picture, I refer to Stupple et al., but there's one result I want to spotlight: students who scored high on the Misconceptions factor tended to do less well when required to assess argument strength. Whatever the causal direction behind this relation, it suggests it's worthwhile to address and test such misconceptions in critical thinking education.
References
Harrington, K., Elander, J., Norton, L., Reddy, P., & Pitt, E. (2006a). A qualitative analysis of staff-student differences in understandings of assessment criteria.
Harrington, K., Elander, J., Lusher, J., Norton, L., Aiyegbayo, O., Pitt, E., Robinson, H. & Reddy, P. (2006b). Using core assessment criteria to improve essay writing. Innovative assessment in higher education, 110-119.
Stupple, E. J., Maratos, F. A., Elander, J., Hunt, T. E., Cheung, K. Y., & Aubeeluck, A. V. (2017). Development of the Critical Thinking Toolkit (CriTT): A measure of student attitudes and beliefs about critical thinking. Thinking Skills and Creativity, 23, 91-100.