Student Life

Accuracy of Course Evaluations and Making Them Count

As the semester ends, Concord will begin sending out reminders about course evaluations over Thanksgiving break. Though typically somewhat lengthy, these questionnaires seek to gauge the success of the class and, in particular, the skill and knowledge of the instructor. But how effective are course evaluations at measuring these things?

    Last semester, student advocates at Concord brought their concerns about the course evaluations to the Student Government Association. As the system stood at the time, the character limit for the comments section was low, and the process was confusing: some instructors wanted a paper evaluation done in class, while others wanted it done online. The advocates successfully raised the character limit of the comments section from its original hundred characters.

    According to Philip B. Stark and Richard Freishtat’s article “An Evaluation of Course Evaluations,” published on ScienceOpen, Concord made the correct decision in raising the character count. Stark and Freishtat argue that the comments section, when a course evaluation provides one, is its most crucial part, far more telling than the numbers.

    Largely, this has to do with the vagueness of “rate such and such on a scale of one to 10” and the discrepancy between two different students’ ideas of the distance between two given numbers. One student may read the scale more relatively than another: the jump from, say, a three to a five might mean far more to one student than to the other. In that case, the averaged numbers describing a given instructor’s clarity, for example, could be meaningless. On top of this, the more options a scale allows, the more likely a student is to choose a less extreme number, according to Stark and Freishtat. A student will feel less inclined to rate a professor a two in a given category on a one-to-10 scale than on even a one-to-nine scale.

    Another problem with the accuracy of course evaluations, raised by researchers and evident in the Concord community, is response rate. In general, only students who feel strongly about a class one way or the other fill out the evaluation unless the instructor or the school makes it mandatory. The resulting data show wide variation: one student thinks the class was horrible and gives a one for everything, while another thinks it was brilliant and gives a 10 for everything. Averaged together, those scores land in the middle and describe neither experience; no number cruncher can hope to pull a remotely accurate assessment of the class or the instructor out of them.

    Wide variation can also occur when a class mixes elective or general studies students with those taking it as a requirement for their major. Students who have completed more prerequisites and reached a more advanced level of competency will typically rate the class differently from those taking it as an elective or for general studies credit; they may judge it more harshly, or they may find fewer flaws in it than students outside the major do.

    The comments section, therefore, gives students a better place to express what went wrong in the course or what went well. By Stark and Freishtat’s logic, it is in the best interest of the student as well as the instructor to make sure the comments section is not neglected during the course evaluation. Several students making the same detailed complaint will draw more attention than several students giving an instructor a three in most categories. The numbers are meaningless without explanation.