Peer Instruction Network member Claire, a K-12 English teacher from Michigan, worries that far too many of her students are falling through the cracks. She is looking for ways to reach them. Could student-generated questions be one possible low-threshold intervention? This post digs into this question. However, if you are looking for a quick protocol for student-generated questions, skip to the bottom.
It’s that moment. An hour before class starts, and for a variety of reasons (committee meetings, the chair beating down your door for this, that, or the other thing, kids with the flu, etc.) you didn’t have time to prepare the totally engaging, mind-blowing experience that usually rocks your students’ worlds during class.
What to do?
I faced this predicament when I last taught my education theory course at TC. As someone who takes teaching extraordinarily seriously, I was panicked and feeling guilty about being underprepared. So, I whipped open Keynote and started to eke out a few clicker questions. Halfway through, I stopped, dejected, when I realized that in my rush I could only come up with a few lame memorization or fact-recall prompts. Scratching my head but determined to figure out something worthwhile, I decided to try an idea I had been batting around in my mind for weeks.
At this point in the semester, the students were used to my flipped approach: they were diligently doing their readings and Just-in-Time Teaching (JiTT) exercises before class and working on application questions in class via Peer Instruction and ConcepTests. For one of their JiTT exercises, I planned to have students write ConcepTests/clicker questions about the material. My goal was to engage students at the higher levels of Bloom’s taxonomy (e.g., creating and synthesizing material) by getting them to create questions in a low-stakes environment.
I also planned to use some of the questions as ConcepTests during class.
On this one bad day, I decided to fast-track that plan and have students spend the first 15 minutes in groups writing clicker questions that we would then run in class.
While a few of the student questions were terrific, most were too easy, and a small handful were terrible. I tried this exercise a few more times but soon took it out of my active learning toolbox. I simply did not find enough student-generated questions that hit that sweet spot of not too hard and not too easy.
Did I abandon ship too quickly? Maybe.
What does research say about student-generated questions?
Recently, a colleague at the University of Texas at Austin, Zachary Williamson, pointed me to a study by Berry and Chew (2008), which evaluated the potential benefits of student-generated questions. Berry and Chew found that in a study of undergraduate psychology students (n=102), “generating questions related positively to improved student performance,” particularly for low-performing students.
A caution: The students who opted to participate by generating questions about course content had performed worse on their first two exams than those who opted out. As such, there is some noise here: was it motivation to improve, more intense studying, or the act of generating the questions itself? Hard to tell.
However, Berry and Chew also documented that the students who generated questions showed an upward trajectory in their exam performance: they did better on the third exam than would be predicted from their previous performance. The students who did not participate in the question-generation exercises did worse than statistically expected. Furthermore, the more questions the lower-performing students generated, the more their performance improved. The improvement appeared to be related to the frequency of the questions and independent of their depth or quality. Similar studies in other disciplines, such as physics, found the opposite effect. For example, Harper, Etkina, and Lin (2003) found that the number of questions posed was unimportant, but the quality or depth of the questions was. Berry and Chew suggest that the effects of question generation may have some discipline-specific differences.
A possible intervention to help struggling students?
While the jury is still out, I am intrigued by the possibility of such a low-threshold instructional intervention to help teachers like Claire and her lower-performing students. In Berry and Chew’s study, the instructors used the same prompt to solicit questions every time and did not even give students feedback about their questions; they simply responded with an email awarding the extra credit. (Note: I stop short of advocating a lack of feedback, and I suspect feedback experts would discount this tactic as well.)
I do think it is worth trying student-generated questions to see if they might help under-performing students in our classes improve. Even in interactive teaching environments, with master teachers, there are students who show early warning signs of struggle. It is possible that creating incentives for generating questions could be an easy way to help them succeed.
Protocols
If you want to test student-generated questions in your class, here is the prompt Berry and Chew (2008) used:
“Provide us with three questions that you would like answered concerning the topics covered in your textbook readings or in lecture. These can be any questions you might have, as long as the questions are about the material or are stimulated by the material. They can be questions about concepts you are still unclear about, about further information you would like to have, or questions about how some issue applies to your own life or to other course concepts” (p. 306).
A recent blog post by Jackie Gerstein also has some protocols developed by others.
Here is my distilled protocol based on my research into this topic:
Online systems for student-generated questions
There are fantastic online systems that facilitate student-generated questions, albeit in a higher-threshold way. My favorite is PeerWise. PeerWise is free and also provides a platform for peer and instructor social interaction.
IN CLOSING, I do not know whether my efforts to get students to generate clicker questions had any statistically significant effect on their actual learning. When I abandoned ship, I assumed that the lack of depth in the questions my students generated was a sign that the exercise was not very useful. But maybe I was being too teacher-centered: just because most of the questions were not useful to me does not mean that creating them was not useful for my students.
Claire, I think trying out a structured approach to student-generated questions may be worth a shot.
READERS: Provide us with three questions you would like answered concerning the topics covered in this post in the comments section below…
Berry, J., & Chew, S. (2008). Improving learning through interventions of student-generated questions and concept maps. Teaching of Psychology, 35, 305–312.
Harper, K. A., Etkina, E., & Lin, Y.-F. (2003). Encouraging and analyzing student questions in a large physics course: Meaningful patterns for instructors. Journal of Research in Science Teaching, 40, 776–791.
Simon Bates
I’ve just been browsing through some of the (nearly) 2,000 questions that our first-year non-majors physics class has produced during two in-course assessments with PeerWise, selecting one or two for the upcoming final exam. Very few are trivial or irrelevant, most are pretty solid, and a substantial fraction are exceptional.
One key thing we’ve found in getting students to create their own content is that scaffolding the activity (literally, helping them ‘know what it is they don’t know’) seems to be one of the key ingredients for getting good-quality questions that go beyond simple factual recall.