The problems with Ratemyprofessors.com and course evaluations as measures of teaching efficacy: A scientific and anecdotal approach
As you explore this site, you will notice a profound disconnect between the anonymous feedback that I receive from students in my own surveys and the feedback that is left for me on RMP's archaic website. While there are issues with anonymous data collection in general, I can assure you that what is posted on RMP is not reflective of students' actual experiences.
In higher education, we try to move away from putting any weight on such a laughable platform, but nonetheless, I still see A) plenty of students consulting the site when choosing a professor and B) other faculty referencing others' RMP ratings as points of contention. This is an ethical concern for many reasons. Below, I hope to provide a compelling argument to sway you away from consulting RMP. References are included below.
I want to make a note here that the purpose of this section is not to make myself seem infallible or to make professors seem like victims. Rather, the contrary is true; this information should confirm that all of us, myself included, have much room to improve in how we approach (and perceive) the learning environment. I employ plenty of opportunities for feedback, seek professional development, and implement as many best practices as I feasibly can, but I will always have to work hard to improve and grow. RMP just represents the lowest bar of all.
Professors are not exempt from the issues I present below. That deserves a different post entirely. The information below serves to target the issues with RMP specifically.
Only the happiest and angriest students choose to leave comments, which presents a polarized (and inaccurate) representation of that professor. For example, some students claim that I am the kindest, nicest person ever (not true), and other students write that I completely neglect them in online courses (also not true).
Measures of central tendency and polarization.
RMP provides a mean (average) rating for each professor. This is problematic when you account for the polarization mentioned above. If half of the students give me a 0/5 and the other half give me a 5/5, that leaves me with an average of "2.5." Notice that nobody actually rated me a 2.5. The average is representative of neither the actual course experience nor the actual ratings on the website.
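To make the point concrete, here is a minimal sketch of the arithmetic. The ratings list is hypothetical, mirroring the half-0/half-5 example above; it is an illustration, not real RMP data.

```python
# Sketch: how a mean hides a polarized (bimodal) rating distribution.
# Hypothetical ratings: half the students give 0/5, half give 5/5.
ratings = [0, 0, 0, 0, 5, 5, 5, 5]

mean = sum(ratings) / len(ratings)
print(mean)  # 2.5 -- a score that nobody actually gave

# A simple spread check shows the polarization the mean conceals:
# count how many ratings fall within 1 point of the mean.
near_mean = sum(1 for r in ratings if abs(r - mean) <= 1)
print(near_mean)  # 0 -- not a single rating is anywhere near the "average"
```

A single summary number like this flattens a two-humped distribution into a value from the empty middle, which is exactly the problem with RMP's headline score.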
Rating the wrong professor.
For years, I had a negative comment about how I would always 'look down my glasses' at students in an intimidating way when they asked questions and took forever to grade their group projects. I don't wear glasses and don't use group projects. Another time, a student told me that they loved my class and raved that they had given me a fantastic rating on RMP. Then they called me "Dr. Jones" on the way out.
On my course evaluation one year, a student reported that I discriminated against students for whom English was a second language. Upon further review, the student had misinterpreted the function of my optional tutoring services. I had hired a TA with a certification in TESL (teaching English as a second language) to offer additional tutoring sessions to those who needed more one-on-one, individualized teaching to succeed in the course.
No option to rebut (despite what the site says).
I have tried several times to counter inaccurate posts on RMP to no avail. I've never received any follow-ups from RMP's staff.
Fake posting from professors.
Did you know that it's super common for professors to make fake accounts and give themselves awesome ratings? A former colleague at a different school gave himself 37 fake positive ratings to inflate his score.
Sometimes, the same student will leave several comments just to bring my overall rating down. RMP doesn't moderate its posts unless they contain profanity, so I'm left with one student resubmitting complaints without consequence.
Problems you may not have considered
You're being intentionally lied to.
From Clayson & Haley (2011): "Thirty-one percent of the respondents admitted to recording false information on the scale questions of the evaluations, while 19.4 percent admitted to adding untrue written comments on the evaluations. Combining both scales and written additions, in total, 37% of the students stated that they had submitted information in some form on the evaluations they knew were not deserved or were purposely false."
Imagine how inflated this statistic is for RMP! The converse is also true for positive ratings. Students will see negative comments for professors they like and counter with equally extreme positive ratings. Neither is accurate. Neither fixes the problem.
I notice that most of my negative ratings come in right after grades are posted, exam scores are released, or students are caught (and earn consequences for) cheating. You wouldn't know that though since you weren't there. All you see is the rant. One semester, a student of mine complained about me to the Provost and recommended disciplinary action be taken. When I asked the student why she did this, she answered, "Oh, I was just mad at the time. I didn't mean it." Consider this if you must still peruse RMP, where the (inaccurate) comments are thrown up onto the page in the heat of the moment and long forgotten.
Judgments versus accuracy
Students don't actually know what's best for their learning.
"Students are not particularly good at evaluating their own learning, and they hold many false assumptions about how people learn. Students have a strong tendency to prefer instructional approaches that enhance their subjective impressions of learning, but that have been shown through empirical research to be ineffective or even counterproductive for learning." Carpenter et al. (2020).
Negative ratings don't always reflect bad teaching.
By now, this shouldn't be a surprise, but this information is also important. A recent negative RMP post on my page featured the following complaint:
Here is what this person failed to mention:
This is why it is a problem that professors do not get to counter these public complaints. Students who are in a hurry, have pre-existing biases, or are stressed won't think very critically about how valid these complaints are (which is known as the peripheral route of processing persuasive arguments). They will use availability heuristics (relying on the sheer volume of low ratings in contrast to the occasional positive one, or vice versa) as their primary decision point. And who can blame them? I would do the same thing.
Because students enter college with misconceptions about how learning works, they often interpret their own failures as faults of the professor. Being intellectually challenged for the first time is a huge hit to the ego. It makes sense to avoid a potentially stressful professor if the ratings are low, but consider the following research:
Examples of other negative ratings I've received that aren't actually bad practices:
Good ratings don't always reflect good teaching.
Neath (1996) concluded that one of the top three ways to boost course evaluations (and subsequently, RMP) is to grade leniently. The other two tips were to be white and male (*shrug*). RMP is full of recommendations for professors who don't thoroughly challenge students in ways that enhance their autonomy.
Confirmation bias occurs when you form an initial attitude based on some belief, then reinforce that belief by selectively attending to information that confirms it.
When you peruse RMP, you form biases about that professor. Subsequently, you unconsciously look for instances that confirm your now pre-existing bias. According to a plethora of recent studies (see references), students are more likely to have negative encounters with professors after reading negative RMP comments compared to those who read positive or neutral comments beforehand. Benign, neutral comments from professors are interpreted sharply as criticisms, apathy, or attacks. Students who read negative RMP ratings beforehand interpret everything about that professor to fit with their preconceived notion. Plus, if students go into a course expecting the professor to be unreasonable, they are less likely to seek help in the first place. They then continue to experience a subpar course and further confirm their expectations. Reading negative RMP ratings will only hurt you.
To summarize the results from Boswell & Sohr-Preston (2020):
For some perspective, Boswell & Sohr-Preston say:
When you form an attitude by reading RMP ratings, you develop an existing bias about that professor. This bias affects the way that YOU behave/interact with that professor. Since this bias is rooted in negativity, your behavior (whether you know it or not) negatively affects your relationship with the professor. The vibe you give makes the professor behave defensively/cautiously toward you, which confirms your belief. Round and round the cycle goes.
THE BOTTOM LINE: By reading through/believing RMP, you miss out on a good educational opportunity while contributing to an ongoing problem.
Evidence of sexism/Gender discrimination
Buckle up! It's about to get real.
RMP is a cesspool of systemic racism, sexism, ableism, classism, and every other -ism out there. While no survey system is without flaws, the opportunity to leave anonymous, deindividualized, consequence-free, emotionally-laden "reviews" of any human is an open invitation for discrimination. While the same is true for semester-end course evaluations, the option to submit multiple ratings at any point during the year means more room for flagrant abuse. Here are some key findings from the latest literature on systemic racism and sexism in student evaluations.
Male professors > Female professors
Male professors are consistently rated more positively than female professors for literally no reason. This is true across all disciplines and levels. In a cleverly controlled experiment, students took the exact same online course. At the end of the course, they were told that the professor was male or female. Despite having the EXACT same course with the EXACT same automated emails, students still gave the "male" professor higher ratings on all qualities.
Critically, the students rated both "instructors" as great; but still, the "male" instructor was rewarded more. This disparity can be observed in many cultural facets outside of academia as well.
Specifically (from MacNell et al):
Sexist language use in RMP ratings
The following slideshow depicts words that real students used in real RMP ratings. You can clearly see a major difference in language choice when rating male versus female professors. Retrieved from this site.
In line with the additional research, students more harshly judge female professors for qualities that are generally benign, and choose to praise them on qualities that fit with mothering expectations (as opposed to intelligence, accomplishments, or professionalism). Male professors are described as more laid-back and lenient, which (according to the research presented in this post) is a luxury afforded by status, privilege, and students' preconceived notions.
Before you jump to the argument that there are more male professors in the first place (which means there are more opportunities to give them high ratings), consider the fact that these trends are robust even in very rare "female-dominated" disciplines.
Students' extraordinary expectations of their female professors
Students have biased, implicit expectations of female professors and react more harshly when those expectations are not met (El-Alayli et al., 2018). In many ways, RMP often reflects whether a professor measured up to students' ideals about gender roles rather than their teaching efficacy. To illustrate, here are some well-established behaviors with which students disproportionately burden their female, but not male, professors (supplemented with real female professors' anecdotes).
Asking for special treatment.
At the end of every semester, our inboxes are flooded with "can you please bump up my grade?" and (a major pet peeve of mine) "I figured it didn't hurt to ask." Sure, this happens to male professors as well, but not nearly at the volume it does for female professors.
Challenging the legitimacy of grading and other decisions.
Students are generally more likely to challenge female professors' policies, procedures, and really any decision we make, even in the face of a rubric.
Every semester, we have handfuls of students who devour an enormous portion of our mental bandwidth. These students blatantly interfere during class for a number of reasons, and it forces us to spend extensive mental resources monitoring their behavior, watching for problems, and being generally "ready to defend." These specific outbursts serve to prove a point about our ineptitude (asking ostentatious, impossible questions) or impress other students (making inappropriate comments to get a reaction).
Take this video of five random people (not even students) who chose my class to light up a gigantic joint in the middle of my Intro Psych class "for the 'Gram":
At a glance, this is no big deal. Some dumb kids played a prank on me and left. Right? This is where you're wrong.
There are a few things I want you to know about this situation.
A few semesters ago, a Youtuber burst into my classroom during an exam (again in Intro Psych). He made some loud racist comments as he slowly approached me at the podium. When I continuously told him to leave, he progressed to sexual comments about me in front of all of my students (who at this point were absolutely distracted and, of course, had their phones out to video the ordeal). This guy also refused to back down until I threatened him with calling the cops. One of my students was the plant and videoed the event for him. My other students fought to reclaim their attention and many suffered some anxiety seeing me be disrespected at that level.
Do I care about people blatantly smoking weed? Absolutely not. Do I resent the fact that they stole valuable class time from my students and me? Yes. I care that I am seen as a target. I care that they made it an explicit mission to undermine my hard-earned and fragile authority throughout the entire process.
Another interruption/prank involving non-students sneaking into my class occurred in October of 2021. These were the only photos I was sent of the perpetrators: pictures that I took with a student's phone, and pictures OF ME taking pictures of the two men. One video was sent to me, but all it contains is a short clip of (you guessed it) ME storming up the steps to address the behavior. What's going on in these pictures? A random guy from the street set up his all-in-one and blasted it at full volume, yelling at the screen. He also wouldn't leave when I demanded it. He also apparently couldn't wear his mask appropriately. He clearly didn't have much going for him.
The fact that I was unable to get him to leave was most infuriating to me. Our class time is precious, and I view any violation of this kind as a threat to my (and my students') safety and integrity. Where is the investigation to track these people down? Where are the protective measures to prevent this from happening every semester? How far will this go in the future before someone is actually hurt in the process? Why is nothing ever done about it?
Expecting, and then abusing, a friendly relationship.
Whereas students expect male faculty members to be too professional/busy to talk to in a friendly manner, female professors are often approached with "Hey" or by their first names (a major pet peeve for most of us along with "miss" or "Mrs."). Students are more likely to overshare in an effort to befriend the professor. I personally enjoy such casual relationships, but the problem lies in the next step: abusing the relationship once a need arises. This usually manifests as a request for special treatment in which I would bend the rules just for them.
Engaging in benevolent sexism.
If we are praised for anything, the language reflects appreciation mostly for our motherly qualities. Whereas male professors are praised for their intelligence and wit, female professors are rated highly for being "warm," "caring," and "loving." RMP exacerbates these issues because highly rated professors are those who students believe conform to their beliefs about gender roles, not those who employ best practices.
Expecting chronic availability and personalized attention.
This falls under the embedded gender role that female professors desire to be mothers to their students. Whereas male professors are merely expected to be available within a reasonable timeframe, female professors are afforded no such boundary.
Overtly harassing (and threatening) the professor.
When students don't automatically jump rank and complain to the dean, they are more likely to threaten to do so with female professors.
Covertly harassing the professor.
This one is more common and problematic. Borderline behaviors create an uncomfortable environment, but don't reach the threshold to be supported in an official complaint.
From El-Alayli et al (2018):
From Landrum (2019):
THE BOTTOM LINE:
Does this mean male professors don't deserve high ratings or that female professors deserve better ratings? No. It DOES mean that the criteria students use change based on professors' demographics. It DOES mean that we need to:
Evidence of Racism
Not only do professors from minority groups have to fight extraordinarily hard to attain the same positions as their white coworkers, they also have to deal with significantly more embedded racism from students and their institutions. Despite claiming that they "don't see color" and "aren't racist," students still discriminate against professors from minority populations. I'm not talking about derogatory word use; I'm talking about giving lower ratings on teaching evaluations, challenging professors' decisions, retaliating, and just generally having a harder time believing that the professor is qualified. Black professors have to work harder just to prove themselves equal in everyday academia. It is extremely emotionally taxing.
Unfortunately, this is not limited to academia. Kang et al. (2016) demonstrated the power of "resume whitening" (altering non-white race-indicative information, such as one's name, organizations, etc.) on job application callback rates:
Kang et al (2016). Persistent, implicit racism in the job market for Black and Asian, but not White, Americans.
Just consider how racism is embedded into RMP and course evaluations.