Earlier this year I wrote about an experience I had using randomized cards to call on students. Following a practice recommended by NCWIT, I created a deck of cards, each with a picture of one of my students on it. I shuffle the cards and then use them when I want to get feedback from my students. When called on, students can answer the question, ask a significant question in response to mine, or pass. After the experience I had in the Winter quarter of last academic year, I’ve been more dedicated to using the technique. I also no longer shuffle completely randomly: students with fewer check marks, who have therefore been called on less often, go to the front of the deck. I’m happy with the results of my new technique and my increased dedication to using it.
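For the curious, the not-quite-random shuffle amounts to something like the short Python sketch below. The names and check-mark counts are invented for illustration, not my real class data:

```python
import random

def build_deck(check_marks):
    """Shuffle the cards, then stably sort so students with fewer
    check marks (called on less often) end up at the front."""
    cards = list(check_marks.items())
    random.shuffle(cards)                 # randomize order among ties
    cards.sort(key=lambda card: card[1])  # stable sort: fewest marks first
    return [name for name, _ in cards]

# Invented example data: StudentB has never been called on yet.
marks = {"StudentA": 2, "StudentB": 0, "StudentC": 1}
deck = build_deck(marks)
# StudentB, with the fewest check marks, comes to the front of the deck.
```

Because Python’s sort is stable, students with the same tally stay in their shuffled order, so the deck remains random among peers while still favoring whoever has been called on least.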
What surprised me this week, though, was what happened when I called on a student who didn’t know the answer to the question I was asking. Normally students in that situation pass, which is fine. I try to be completely neutral when a student passes, and to cast incorrect answers in the most positive light. I tell students in those circumstances that incorrect answers are far more interesting because they produce a better discussion. But I almost never have students who willingly give an incorrect answer.
Yesterday one of my students started to pass and then stopped herself. She said she had something to share but that she didn’t think it was correct. I encouraged her to share anyway, and she did. As it turned out, her answer did have a flaw, which we discussed. But, as I pointed out, it was useful for a harder version of the problem I had assigned to the faster students. I commented her code out and came back to it when we were solving the more difficult problem.
After the class was over it struck me how much courage it took that student to volunteer to give me an incorrect answer. It’s incredibly intimidating to speak up in class, which makes volunteering to put something incorrect up on a screen for 28 other people to see a brave step. And it makes me very happy that she felt she could do it in my classroom. I think it’s the best thing that will happen to me this week.
The quarter started a week ago, and I’m back teaching the first Python programming classes, which I haven’t done in a year. I’m teaching both the first class for novices and the first class for people with some programming experience who aren’t yet ready for data structures. I’m particularly excited about this quarter because my students seem so great. They’re already interacting well with me and with each other, which has me looking forward to how things will progress.
As is typical for these classes, some of the students appear to have problems with confidence. In a particularly obvious case, I had a student in the accelerated course come with me to my office after the first class. He was questioning whether he should be in the accelerated course or whether he should move into the course for novices. When I asked him about his background he said he had taken AP CS as a sophomore in high school and earned a B. When I asked if he had taken the AP exam he said no because he didn’t think that he had learned enough and was afraid that if he passed the exam he might move into courses he wasn’t ready for. After some more conversation I diagnosed him with a confidence problem and told him that. He didn’t disagree. I’m happy to say that he’s still in the accelerated course.
In both courses we use Python Tutor for visualization, so I was particularly happy when I came across Philip Guo’s essay on silent technical privilege today. It talks about a lot of things I’ve both experienced first-hand and also seen happen with others. So I shared the essay with my students today, noting that the person who developed the visualization tool we’ll use wrote it. I suspect it’ll mean more to them coming from someone other than me.
As of July 1st I became SIGCSE’s past chair, a position I’ll hold for the next three years. I’m incredibly proud of all the work that the 2016-2019 SIGCSE Board accomplished, something that I talked about extensively in my last ACM Inroads column. My thanks again to everyone who served on the 2016-2019 Board, and welcome to everyone who joined the 2019-2022 SIGCSE Board. It’s going to be great to work with you.
I have to admit though that I’m very happy to be done being SIGCSE chair. As much as we accomplished and as much as I was honored to have had the chance to serve, it was at times an overwhelming amount of work. Just about all aspects of my life took a hit during the tough weeks, and it’s a relief to have a more manageable schedule now.
In fact, one of the delights of this summer has been planning what comes next. Yes, I took a break from a lot of work in the past two months, binge-watched Netflix, and did a long-overdue purge in our house. But I’ve also found myself drawn back into research. I currently have three papers and a panel under development, and spending my work time this summer writing has reminded me how much I love it. I have a research break scheduled for January–March 2020, part of which will be spent in Auckland working on a new project. It also looks like I’ll get to travel to Ireland in December to brainstorm with someone on a new collaboration. I’ve also added reviewing back into my life, and I’m looking forward to reading other people’s research and giving them feedback on it.
I’m not sure what’s next on the horizon, but I’m looking forward to it. Oh, and I will have more work to do for SIGCSE, as past chair, as ITiCSE conference liaison, and as chair of the Travel Grant and Speaker’s Fund review committee. But it feels like a new chapter, and one I think I’ll enjoy.
The spring quarter ended in mid-June, so I recently had another chance to review a set of course evaluations. I value course evaluations in general, even though I know that they can have significant problems. I like hearing back from the students about what they felt, and I occasionally get some great ideas on how to improve my classes.
But the lack of consistency in the comments from students in the same class genuinely bothers me. For example, I had the following comments in my course evaluation from my Python class this quarter when asked about grading procedures and exams:
- “The exams were straightforward but grading on homework was not.”
- “It’s actually fairer than other professors I had”
- “Wouldn’t change”
- “Great process”
- “Too strict”
Is the grading fair and shouldn’t change? Is the grading too strict? Is the grading not straightforward? Is the process of grading a great one? You could believe any of these reading the set of comments I got.
I tend to simply disregard all the comments when they diverge so significantly, but I’m not sure that’s the right approach either. And I doubt that there is anything that can be done to improve the process of obtaining comments on course evaluations, since I suspect that opinions simply vary significantly between students in the class. It does help to know I’m not alone in receiving divergent comments; I’ve heard and read colleagues joke about it for many years now.
I enjoy going to conferences for all the obvious reasons: I get to see people from around the world doing interesting work in computing education and visit fun places I wouldn’t otherwise see in the process. But the one thing I don’t like about conferences during the academic year is having to make up classes. Travelling is stressful by itself, and having to find time to schedule make-up classes as well as get students to come just adds to that stress.
So I am thrilled that DePaul has a license for Zoom and has integrated it with our course management system. I’m about to miss a full week of classes due to a conference in China, which would normally mean multiple days trekking downtown to make up classes. This time around I scheduled a few hours of Zoom meetings with my students and got to make up the classes from my home office. They’re recorded automatically, and I can easily post a link for the students who couldn’t make it. Zoom makes the interactivity I like in my classes easy, and students even communicate with each other using chat during the session. The only thing I don’t get is the look on their faces, which I miss, but I’m willing to take the hit given that the sessions can include my online students, who would normally be excluded.
I think this is a great change. I just hope the students are as happy as I am.
We’ve reached the second week of the Spring quarter, which is when people who aren’t serious about sticking with a class start to drop, since the end of this week is the deadline to get your tuition back. One of the classes that I’m teaching is a transitional course between the introductory Python sequence and the Java data structures classes, and it has both in-person and online sections. Due to scheduling constraints the in-person section is smaller (fewer than 10 students), and today when I went to print my roster so I could take attendance I noticed that the only woman on the roster had dropped. So the in-person section is now a class in which I am the only woman. I’m sorry to say it’s not the first time this has happened. But after being spoiled by my linked-courses learning community, which targeted underrepresented groups, it makes me sadder than the previous incarnations did.
The Spring quarter started this week, and as part of my activities wrapping up the Winter quarter I reviewed my course evaluations. Course evaluations are both a blessing and a curse, and I try to not use them as the sole measure of my teaching because of their many problems. But I have in the past found them to be a useful source of feedback, so I always read them. As usual, I was glad I did.
Before I get to the comments from the Winter quarter, you need a bit of background. As a part of work that I did with NCWIT starting two years ago, I adopted one of their classroom practices. They recommend using a deck of cards to call on students. The cards have students’ names and pictures on them and are shuffled at the beginning of class. Students answer questions when their card comes to the top of the deck and have one of three possible responses: answer the question, ask a clarifying question, or pass. It’s a really handy way to rein in the tendency for the class to be dominated by a few know-it-all students, which can be a problem in introductory programming classes.
I use the technique in all of my classes, except that in the Winter quarter I backed off from it about two-thirds of the way through the term in my Python class. Using the technique can sometimes feel like pulling teeth, since students can be resistant, and for whatever reason (exhaustion?) last quarter I didn’t push through and force it. As it turned out, that was a mistake, because I got the following comments on the evaluations when prompted about my weaknesses:
- It is also good that she calls on students which forces them to pay attention, but the same six students always get called on.
- Bit too strict, does not seem to call on everyone.
- I felt like at times she chose favorites in the class, and that discouraged me from participating as much because I felt a little intimidated by her.
So while this is only three comments from 18 responses, I think it’s important feedback that shows students both appreciated the card system and noticed when it was relaxed. I haven’t ever had comments about favorites on evaluations, either before or after I started using the deck of cards. So the technique is making them notice the classroom dynamics, and at least some of them reacted badly when it was relaxed.
Needless to say, I’m recommitting to the practice, but this time with a slight change: I’m going to mark each student’s card every time I call on them. That way, if my random shuffle accidentally favors some students, I’ll have a way of noticing and correcting it. I don’t want students to believe that I have favorites, and I think the changed approach will help. I’ll also push through and use the technique consistently, no matter how resistant the students may appear (or how tired I may be).
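The check-mark bookkeeping is simple enough to sketch in a few lines of Python. The student names and the gap threshold below are made up for illustration; on paper it’s just tally marks on the cards:

```python
# Invented roster; in practice these are tally marks on physical cards.
call_counts = {"StudentA": 0, "StudentB": 0, "StudentC": 0}

def record_call(name):
    """Add a check mark when a student is called on."""
    call_counts[name] += 1

def shuffle_looks_unfair(counts, max_gap=2):
    """Flag when the most-called student is well ahead of the least-called."""
    return max(counts.values()) - min(counts.values()) > max_gap
```

Once the gap between the most- and least-called students grows past the threshold, that’s the signal to move the neglected cards to the front of the deck.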