If you read Mark Guzdial’s blog, and you should if you don’t already, you know that one of his calls to action is for computer science educators to focus on measurable improvements to the way we teach.  He even posted an entry on the topic in August 2010 in which he said:

After 30 years, why hasn’t somebody beaten the Rainfall Problem?  Why can’t someone teach a course with the explicit goal of their students doing much better on the Rainfall Problem — then publish how they did it?  We ought to make measurable progress.

I just finished teaching an introductory computer science class that teaches Python and problem solving, so I decided to take Mark’s suggestion to heart and see how well my students did on the Rainfall Problem.  One of the questions my Python students had to answer on the final exam was the problem as described by Venables, Tan, and Lister in their 2009 ICER paper.

So what’s the punchline?  The average on the question was 23.2/25 with a median of 25/25 and a standard deviation of 2.89.  Even the poorer students in the class mostly got the problem right.  They really nailed it!

There are some caveats, of course.  The class starts having students work with lists during the second week of the quarter, and the students really, really understand them by the end.  I even had one student, and not one of the strong ones, tell me: “I am just really comfortable with lists.”  The Rainfall Problem lends itself well to using a list, and most of my students solved it that way.  The final exam is taken on a computer in a lab, which helps them find their mistakes more easily.  The students in the class are also computer science majors, as opposed to information systems or information technology or any of our other majors.  And I didn’t apply for IRB approval before giving the exam, so I won’t be doing a more careful analysis or publishing the results.
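For readers who haven’t seen the problem, here is a minimal sketch of the list-based approach my students favored.  The exact wording varies across papers, so this assumes one common formulation: read values until a sentinel of 99999, ignore negative readings, and report the average of what remains.  The function name and the choice of returning None for empty input are mine, not anything from the exam.

```python
def average_rainfall(readings):
    """List-based sketch of the Rainfall Problem.

    Assumes the common formulation: consume values until the
    sentinel 99999, discard negative readings, average the rest.
    """
    valid = []
    for value in readings:
        if value == 99999:   # sentinel ends the input
            break
        if value >= 0:       # negative readings are invalid
            valid.append(value)
    if not valid:
        return None          # avoid dividing by zero on empty input
    return sum(valid) / len(valid)

# Example: 10, 20, -5 (ignored), 30, then the sentinel
print(average_rainfall([10, 20, -5, 30, 99999]))  # → 20.0
```

The classic stumbling blocks, guarding the division against zero valid readings and filtering out the negatives, fall out naturally once the data is in a list, which may be part of why my students did so well.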

Still, I’m happy to have tried this in my class.  It’s inspiring me to better understand how the class is taught and organized so that I can figure out whether this is a fluke, and if not, what can be distilled from it for others.  Perhaps there is a paper on this in the future.

Thanks, Mark, for the inspiration.  Let him (and me) know if you try something like this in your class in the future.